path
stringlengths 7
265
| concatenated_notebook
stringlengths 46
17M
|
---|---|
input/eg01-eg24/eg01-24-overall-plots.ipynb | ###Markdown
EG01-24** Deployment & Comparison of Prediction Method Plots **These plots were used in the DDCA Journal Article (https://github.com/arfc/2019-ddca-journal-article), Demonstration of Demand Driven Deployment Capabilities in Cyclus Global Presentation (https://github.com/arfc/2019-chee-global), and the DDCA final quarterly report (https://github.com/arfc/ddca_numerical_exp). To generate the plots in this Jupyter Notebook, you must go to the ARFC Fuel-Cycle Box (fuel-cycle/cyclus_output/d3ploy-transition-scenarios/eg01-eg24) and download the following sqlite files: * eg01-eg24-linpower-d3ploy-buffer6000-fft.sqlite* eg01-eg24-linpower-d3ploy-buffer0-ma.sqlite* eg01-eg24-linpower-d3ploy-buffer0-arma.sqlite* eg01-eg24-linpower-d3ploy-buffer0-arch.sqlite* eg01-eg24-linpower-d3ploy-buffer0-poly.sqlite* eg01-eg24-linpower-d3ploy-buffer0-exp_smoothing.sqlite* eg01-eg24-linpower-d3ploy-buffer0-holt_winters.sqlite* eg01-eg24-linpower-d3ploy-buffer0-fft.sqlite* eg01-eg24-linpower-d3ploy-buffer0-sw_seasonal.sqlite* eg01-eg24-linpower-d3ploy-buffer2000-fft.sqlite* eg01-eg24-linpower-d3ploy-buffer4000-fft.sqlite* eg01-eg24-linpower-d3ploy-buffer8000-fft.sqlite
###Code
import sys
sys.path.insert(0, '../../scripts/')
import transition_plots as tp
sqlite24= 'eg01-eg24-linpower-d3ploy-buffer6000-fft.sqlite'
all_agents24 = tp.format_agent_dict(sqlite24)
tp.plot_agents(all_agents24,name='eg24-stack')
commods = ['sourceout',
'enrichmentout',
'mixerout',
'power']
commodnames = ['Natural Uranium',
'LWR Fuel',
'FR Fuel',
'Power']
methods = ['ma','arma','arch','poly','exp_smoothing','holt_winters','fft','sw_seasonal']
general_sqlite = 'eg01-eg24-linpower-d3ploy-buffer0-'
tp.plot_all_undersupply(commods,commodnames,methods,general_sqlite,demand_driven=True,demand_eq='60000 + 250*t/12',
title='EG1-24: Time steps with an undersupply of each commodity for different prediction methods',
name = 'eg24-undersupply')
commods = ['lwrout',
'lwrstorageout',
'lwrtru',
'frtru',
'frout',
'frstorageout']
commodnames = ['LWR Spent Fuel',
'Cooled LWR Spent Fuel',
'Extracted TRU (LWR)',
'Extracted TRU (FR)',
'FR Spent Fuel',
'Cooled FR Spent Fuel']
methods = ['ma','arma','arch','poly','exp_smoothing','holt_winters','fft','sw_seasonal']
general_sqlite = 'eg01-eg24-linpower-d3ploy-buffer0-'
tp.plot_all_undersupply(commods,commodnames,methods,general_sqlite,demand_driven=False,
title='EG1-24: Time steps with an undercapacity of each commodity for different prediction methods',
name = 'eg24-undercapacity')
commods = ['power']
commodnames = ['power']
methods = ['0-fft','2000-fft','4000-fft','6000-fft','8000-fft']
general_sqlite = 'eg01-eg24-linpower-d3ploy-buffer'
tp.plot_all_undersupply(commods,commodnames,methods,general_sqlite,demand_eq='60000 + 250*t/12',
title='EG1-24: Time steps with an undersupply of power for varying power buffer sizes ',
name='eg24-sa')
commods1 = ['sourceout',
'enrichmentout',
'mixerout',
'power']
commodnames1 = ['Natural Uranium',
'LWR Fuel',
'FR Fuel',
'Power']
commods2 = ['lwrout',
'lwrstorageout',
'lwrtru',
'frtru',
'frout',
'frstorageout']
commodnames2 = ['LWR Used Fuel',
'Cool LWR Used Fuel',
'Extracted TRU (LWR)',
'Extracted TRU (FR)',
'FR Used Fuel',
'Cool FR Used Fuel']
methods = ['ma','arma','arch','poly','exp_smoothing','holt_winters','fft','sw_seasonal']
methodnames = ['MA','ARMA','ARCH','POLY','EXP SMOOTHING','HOLT WINTERS','FFT', 'STEPWISE SEASONAL']
general_sqlite = 'eg01-eg24-linpower-d3ploy-buffer0-'
demand_driven = True
demand_eq='60000+250*t/12'
title = 'EG1-24: Time steps with an undersupply or under capacity of each commodity for different prediction methods'
name = 'eg01-24-histogram'
yticks=[0,5,10,15,20,25]
tp.plot_histogram(commods1,commodnames1,commods2,commodnames2,methods,methodnames,general_sqlite,demand_eq,title,name,yticks)
###Output
_____no_output_____ |
_notebooks/2022_05_29_exercise_manipulating_geospatial_data.ipynb | ###Markdown
"exercise-manipulating-geospatial-data"> "exercise-manipulating-geospatial-data"- toc:true- branch: master- badges: true- comments: true- author: Yunjihye- categories: [jupyter, python] **This notebook is an exercise in the [Geospatial Analysis](https://www.kaggle.com/learn/geospatial-analysis) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/manipulating-geospatial-data).**--- IntroductionYou are a Starbucks big data analyst ([that’s a real job!](https://www.forbes.com/sites/bernardmarr/2018/05/28/starbucks-using-big-data-analytics-and-artificial-intelligence-to-boost-performance/130c7d765cdc)) looking to find the next store into a [Starbucks Reserve Roastery](https://www.businessinsider.com/starbucks-reserve-roastery-compared-regular-starbucks-2018-12also-on-the-first-floor-was-the-main-coffee-bar-five-hourglass-like-units-hold-the-freshly-roasted-coffee-beans-that-are-used-in-each-order-the-selection-rotates-seasonally-5). These roasteries are much larger than a typical Starbucks store and have several additional features, including various food and wine options, along with upscale lounge areas. You'll investigate the demographics of various counties in the state of California, to determine potentially suitable locations.Before you get started, run the code cell below to set everything up.
###Code
import math
import pandas as pd
import geopandas as gpd
#from geopy.geocoders import Nominatim # What you'd normally run
from learntools.geospatial.tools import Nominatim # Just for this exercise
import folium
from folium import Marker
from folium.plugins import MarkerCluster
from learntools.core import binder
binder.bind(globals())
from learntools.geospatial.ex4 import *
###Output
_____no_output_____
###Markdown
You'll use the `embed_map()` function from the previous exercise to visualize your maps.
###Code
def embed_map(m, file_name):
from IPython.display import IFrame
m.save(file_name)
return IFrame(file_name, width='100%', height='500px')
###Output
_____no_output_____
###Markdown
Exercises 1) Geocode the missing locations.Run the next code cell to create a DataFrame `starbucks` containing Starbucks locations in the state of California.
###Code
# Load and preview Starbucks locations in California
starbucks = pd.read_csv("../input/geospatial-learn-course-data/starbucks_locations.csv")
starbucks.head()
###Output
_____no_output_____
###Markdown
Most of the stores have known (latitude, longitude) locations. But, all of the locations in the city of Berkeley are missing.
###Code
# How many rows in each column have missing values?
print(starbucks.isnull().sum())
# View rows with missing locations
rows_with_missing = starbucks[starbucks["City"]=="Berkeley"]
rows_with_missing
###Output
_____no_output_____
###Markdown
Use the code cell below to fill in these values with the Nominatim geocoder.Note that in the tutorial, we used `Nominatim()` (from `geopy.geocoders`) to geocode values, and this is what you can use in your own projects outside of this course. In this exercise, you will use a slightly different function `Nominatim()` (from `learntools.geospatial.tools`). This function was imported at the top of the notebook and works identically to the function from GeoPandas.So, in other words, as long as: - you don't change the import statements at the top of the notebook, and - you call the geocoding function as `geocode()` in the code cell below, your code will work as intended!
###Code
# Create the geocoder
geolocator = Nominatim(user_agent="kaggle_learn")
def my_geocoder(row):
point = geolocator.geocode(row).point
return pd.Series({'Latitude': point.latitude, 'Longitude': point.longitude})
berkeley_locations = rows_with_missing.apply(lambda x: my_geocoder(x['Address']), axis=1)
starbucks.update(berkeley_locations)
###Output
_____no_output_____
###Markdown
2) View Berkeley locations.Let's take a look at the locations you just found. Visualize the (latitude, longitude) locations in Berkeley in the OpenStreetMap style.
###Code
# Create a base map
m_2 = folium.Map(location=[37.88,-122.26], zoom_start=13)
# Add a marker for each Berkeley location
for idx, row in starbucks[starbucks["City"]=='Berkeley'].iterrows():
Marker([row['Latitude'], row['Longitude']]).add_to(m_2)
# Show the map
embed_map(m_2, 'q_2.html')
###Output
_____no_output_____
###Markdown
Considering only the five locations in Berkeley, how many of the (latitude, longitude) locations seem potentially correct (are located in the correct city)?
###Code
# solution
All five locations appear to be correct!
###Output
_____no_output_____
###Markdown
3) Consolidate your data.Run the code below to load a GeoDataFrame `CA_counties` containing the name, area (in square kilometers), and a unique id (in the "GEOID" column) for each county in the state of California. The "geometry" column contains a polygon with county boundaries.
###Code
CA_counties = gpd.read_file("../input/geospatial-learn-course-data/CA_county_boundaries/CA_county_boundaries/CA_county_boundaries.shp")
CA_counties.head()
###Output
_____no_output_____
###Markdown
Next, we create three DataFrames:- `CA_pop` contains an estimate of the population of each county.- `CA_high_earners` contains the number of households with an income of at least $150,000 per year.- `CA_median_age` contains the median age for each county.
###Code
CA_pop = pd.read_csv("../input/geospatial-learn-course-data/CA_county_population.csv", index_col="GEOID")
CA_high_earners = pd.read_csv("../input/geospatial-learn-course-data/CA_county_high_earners.csv", index_col="GEOID")
CA_median_age = pd.read_csv("../input/geospatial-learn-course-data/CA_county_median_age.csv", index_col="GEOID")
###Output
_____no_output_____
###Markdown
Use the next code cell to join the `CA_counties` GeoDataFrame with `CA_pop`, `CA_high_earners`, and `CA_median_age`.Name the resultant GeoDataFrame `CA_stats`, and make sure it has 8 columns: "GEOID", "name", "area_sqkm", "geometry", "population", "high_earners", and "median_age". Also, make sure the CRS is set to `{'init': 'epsg:4326'}`.
###Code
cols_to_add = CA_pop.join([CA_high_earners, CA_median_age]).reset_index()
CA_stats = CA_counties.merge(cols_to_add, on="GEOID")
CA_stats.crs = {'init': 'epsg:4326'}
###Output
_____no_output_____
###Markdown
Now that we have all of the data in one place, it's much easier to calculate statistics that use a combination of columns. Run the next code cell to create a "density" column with the population density.
###Code
CA_stats["density"] = CA_stats["population"] / CA_stats["area_sqkm"]
###Output
_____no_output_____
###Markdown
4) Which counties look promising?Collapsing all of the information into a single GeoDataFrame also makes it much easier to select counties that meet specific criteria.Use the next code cell to create a GeoDataFrame `sel_counties` that contains a subset of the rows (and all of the columns) from the `CA_stats` GeoDataFrame. In particular, you should select counties where:- there are at least 100,000 households making \$150,000 per year,- the median age is less than 38.5, and- the density of inhabitants is at least 285 (per square kilometer).Additionally, selected counties should satisfy at least one of the following criteria:- there are at least 500,000 households making \$150,000 per year,- the median age is less than 35.5, or- the density of inhabitants is at least 1400 (per square kilometer).
###Code
sel_counties = CA_stats[((CA_stats.high_earners > 100000) &
(CA_stats.median_age < 38.5) &
(CA_stats.density > 285) &
((CA_stats.median_age < 35.5) |
(CA_stats.density > 1400) |
(CA_stats.high_earners > 500000)))]
###Output
_____no_output_____
###Markdown
5) How many stores did you identify?When looking for the next Starbucks Reserve Roastery location, you'd like to consider all of the stores within the counties that you selected. So, how many stores are within the selected counties?To prepare to answer this question, run the next code cell to create a GeoDataFrame `starbucks_gdf` with all of the starbucks locations.
###Code
starbucks_gdf = gpd.GeoDataFrame(starbucks, geometry=gpd.points_from_xy(starbucks.Longitude, starbucks.Latitude))
starbucks_gdf.crs = {'init': 'epsg:4326'}
###Output
_____no_output_____
###Markdown
So, how many stores are in the counties you selected?
###Code
# Fill in your answer
locations_of_interest = gpd.sjoin(starbucks_gdf, sel_counties)
num_stores = len(locations_of_interest)
###Output
_____no_output_____
###Markdown
6) Visualize the store locations.Create a map that shows the locations of the stores that you identified in the previous question.
###Code
# Create a base map
m_6 = folium.Map(location=[37,-120], zoom_start=6)
# show selected store locations
mc = MarkerCluster()
locations_of_interest = gpd.sjoin(starbucks_gdf, sel_counties)
for idx, row in locations_of_interest.iterrows():
if not math.isnan(row['Longitude']) and not math.isnan(row['Latitude']):
mc.add_child(folium.Marker([row['Latitude'], row['Longitude']]))
m_6.add_child(mc)
# Show the map
embed_map(m_6, 'q_6.html')
###Output
_____no_output_____ |
10 Linked List-2/10.05 Mid Point of a Linked List.ipynb | ###Markdown
Midpoint = (length - 1)/2 Maintain two pointers fast and slow where fast goes (2 x speed) of slow. When fast will reach the end then the slow will reach the midpoint.When fast.next.next == null (for even case) and fast.next == null (for odd case) then we need to stop till then we need to increment slow = slow.next and fast = fast.next.next. && which keeps on iterating till the end. || doesn't works as the even condition fails.** How can we reach the midpoint in one pass?
###Code
class Node:
def __init__(self, data):
self.data = data
self.next = None
def midpoint_linkedlist(head):
slow = head
fast = head
while fast.next != None and fast.next.next != None:
slow = slow.next
fast = fast.next.next
return slow
def ll(arr):
if len(arr)==0:
return None
head = Node(arr[0])
last = head
for data in arr[1:]:
last.next = Node(data)
last = last.next
return head
# Main
# Read the link list elements including -1
arr=list(int(i) for i in input().strip().split(' '))
# Create a Linked list after removing -1 from list
l = ll(arr[:-1])
node = midpoint_linkedlist(l)
if node:
print(node.data)
###Output
1 2 3 4 5 6 -1
3
|
src/04_word2vec/02_w2v_train_socdiv.ipynb | ###Markdown
Training Word2Vec on Biomedical Abstracts in PubMed Brandon Kramer - University of Virginia's Biocomplexity Institute This notebook borrows from several resources to train a Word2Vec model on a subset of the PubMed database taken from January 2021. Overall, I am interested in testing whether diversity and racial terms are becoming more closely related over time. To do this, I train the model on 1990-1995 data and then a random sample of 2015-2020 data. Import packages and ingest data Let's load all of our packages
###Code
# load packages
import os
import psycopg2 as pg
import pandas.io.sql as psql
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from textblob import Word
from gensim.models import Word2Vec
import multiprocessing
# set cores, grab stop words
cores_available = multiprocessing.cpu_count() - 1
stop = stopwords.words('english')
###Output
_____no_output_____
###Markdown
Matching the Sample Sizes Since the 2010-2020 data is larger than the 1990-2000 data, we want to take a random sample of the later data to make the sample sizes the same for comparison later.
###Code
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/word_embeddings/")
pubmed_earlier = pd.read_csv("transformed_1990_2000_0821.csv")
pubmed_earlier['abstract_clean'] = pubmed_earlier['abstract'].str.lower()
pubmed_earlier['abstract_clean'] = pubmed_earlier['abstract_clean'].str.replace('-', ' ')
pubmed_earlier['abstract_clean'] = pubmed_earlier['abstract_clean'].str.replace(r'[^\w\s]+', '', regex=True)
pubmed_earlier['abstract_clean'] = pubmed_earlier['abstract_clean'].apply(lambda x:' '.join(x for x in x.split() if not x.isdigit()))
pubmed_earlier['abstract_clean'] = pubmed_earlier['abstract_clean'].apply(lambda x:' '.join(x for x in x.split() if not x in stop))
pubmed_earlier['abstract_clean'] = pubmed_earlier['abstract_clean'].apply(lambda x:' '.join([Word(word).lemmatize() for word in x.split()]))
pubmed_earlier.head()
###Output
_____no_output_____
###Markdown
Cleaning the text dataConvert all text to lower case, remove punctuation, numbers, dots, digits and stop words, and finally lemmatize.
###Code
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/word_embeddings/")
pubmed_later = pd.read_csv("transformed_2010_2020_0821.csv")
pubmed_later = pubmed_later[pubmed_later['abstract'].notnull()]
pubmed_later['abstract_clean'] = pubmed_later['abstract'].str.lower()
pubmed_later['abstract_clean'] = pubmed_later['abstract_clean'].str.replace('-', ' ')
pubmed_later['abstract_clean'] = pubmed_later['abstract_clean'].str.replace(r'[^\w\s]+', '', regex=True)
pubmed_later['abstract_clean'] = pubmed_later['abstract_clean'].apply(lambda x:' '.join(x for x in x.split() if not x.isdigit()))
pubmed_later['abstract_clean'] = pubmed_later['abstract_clean'].apply(lambda x:' '.join(x for x in x.split() if not x in stop))
pubmed_later['abstract_clean'] = pubmed_later['abstract_clean'].apply(lambda x:' '.join([Word(word).lemmatize() for word in x.split()]))
pubmed_later.head()
pubmed_later.count()
###Output
_____no_output_____
###Markdown
Training the Word2Vec Models Now, let's train these Word2Vec models and save them as a binary file to visualize later.
###Code
# run the model on the earlier data
earlier_list=[]
for i in pubmed_earlier['abstract_clean']:
li = list(i.split(" "))
earlier_list.append(li)
earlier_model = Word2Vec(earlier_list, min_count=5, vector_size=512, window=5, epochs=5, seed=123, workers=cores_available)
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/word_embeddings/")
earlier_model.save("word2vec_1990_2000_socdiv_0821.model")
earlier_model.save("word2vec_1990_2000_socdiv_0821.bin")
# run the model on the later data
later_list=[]
for i in pubmed_later['abstract_clean']:
li = list(i.split(" "))
later_list.append(li)
later_model = Word2Vec(later_list, min_count=5, vector_size=512, window=5, epochs=5, seed=123, workers=cores_available)
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/word_embeddings/")
later_model.save("word2vec_2010_2020_socdiv_0821.model")
later_model.save("word2vec_2010_2020_socdiv_0821.bin")
###Output
_____no_output_____ |
lecture_01_intro/python_basics.ipynb | ###Markdown
Python basics tutorial* This tutotial is for Python version 3.x, although most of it should also be valid for older versions 2.x* Author: Marcel Goldschen-Ohm Files* **.ipynb**: Jupyter notebook files (e.g. this is what you're looking at now)* **.py**: Python files (we'll use these in a different environement such as PyCharm) PEP8 Code Syle Conventions* You should endevour to adhere to these coding style conventions as much as possible, but don't fret so long as your code is very understandable. Comments* Use these *A LOT*, otherwise not only will no one else know what your code means, but you won't remember either a month later!
###Code
"""
This is a multi-line comment:
Press Shift-Enter to
run this cell.
"""
# Display a message. (This is a single line comment.)
print("hi")
print("Hello") # Say hello. (This is a comment that follows a line of code.)
###Output
hi
Hello
###Markdown
Getting HELP* Of course, the internet is also a good place.
###Code
help(print)
###Output
Help on built-in function print in module builtins:
print(...)
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
###Markdown
Variables* Variable names must begin with a letter, but thereafter can contain letters, numbers and underscores.* Variable names should be informative.* !!! ONLY use cryptic variable names like 'x', 'a', 'i' in cases where it is VERY obvious what they mean. *I break that rule for this tutorial, but when we get to some real code examples of actual problems later on, I will endevour to not abuse this.*
###Code
x = 3
y = 5
y = "hi"
my_v6_55 = 82.01
x, y, my_v6_55
###Output
_____no_output_____
###Markdown
Data Types* Data types are inferred based on their value.
###Code
x = 1 + 3.5 # float
y = 1 + 3.0 # float
z = 1 + 3 # integer
b = False # bool
na = None # nothing
print("type of x =", type(x))
print("type of y =", type(y))
print("type of z =", type(z))
print("type of b =", type(b))
print("type of na =", type(na))
###Output
type of x = <class 'float'>
type of y = <class 'float'>
type of z = <class 'int'>
type of b = <class 'bool'>
type of na = <class 'NoneType'>
###Markdown
Basic Operations
###Code
print("2 + 3 =", 2 + 3) # add
print("2 - 3 =", 2 - 3) # subtract
print("2 * 3 =", 2 * 3) # multiply
print("2 / 3 =", 2 / 3) # divide
print("2**3 =", 2**3) # 2 to the 3rd power
print("8 % 3 =", 8 % 3) # modulus (remainder)
# Can't do operations with incompatible types
"1" + 3
###Output
_____no_output_____
###Markdown
Lists* Array of pretty much anything.
###Code
a = [1, 2, 3.0, 4.5, "hello", True, ['a', 'b', 'c']]
an_empty_list = []
len(a), len(an_empty_list) # number of elements in the list (i.e. list length)
###Output
_____no_output_____
###Markdown
List indexing* !!! First index is 0, NOT 1In some sane languages like MATLAB and Julia, the first index is 1 as it should be. Sigh.
###Code
a = [1, 2, 3.0, 4.5, "hello", True, ['a', 'b', 'c']]
print(a)
print()
print(a[0]) # 1st element
print(a[1]) # 2nd element
print(a[-2]) # 2nd to last element
print(a[-1]) # last element
print(a[-1][2]) # 3rd element of the array that is the last element
print(a[-2])
a[2] = False # set 3rd element
print(a)
###Output
[1, 2, False, 4.5, 'hello', True, ['a', 'b', 'c']]
###Markdown
Index ranges* [start:stop]* [start:stop:step]* !!! The range does NOT include stop.* Omitting start or stop assumes a range from first or through last element, respectivley.
###Code
print(a[1:4]) # start at index 1 and stop at index 4 <== !!! does NOT include index 4
print(a[1:]) # start at index 1 and go to end
print(a[:4]) # start at beginning (index 0) and stop at index 4
print(a[:]) # start at beginning (index 0) and go to end
print(a[1:4:2]) # start at index 1 and stop at index 4, step by 2
###Output
[2, False, 4.5]
[2, False, 4.5, 'hello', True, ['a', 'b', 'c']]
[1, 2, False, 4.5]
[1, 2, False, 4.5, 'hello', True, ['a', 'b', 'c']]
[2, 4.5]
###Markdown
Growing/Shrinking a list
###Code
a = [1, 2, 3.0, 4.5, "hello", True, ['a', 'b', 'c']]
print(a)
a.append(38)
a.append([40, 50, 60])
a.extend([40, 50, 60])
a = a + [1, 2]
print(a)
a.pop(4) # removes the 5th element
a.remove(2) # remove the first value of 2 in the list
del a[-4:-1] # removes the 4th through 2nd to last elements
print(a)
###Output
[1, 2, 3.0, 4.5, 'hello', True, ['a', 'b', 'c']]
[1, 2, 3.0, 4.5, 'hello', True, ['a', 'b', 'c'], 38, [40, 50, 60], 40, 50, 60, 1, 2]
[1, 3.0, 4.5, True, ['a', 'b', 'c'], 38, [40, 50, 60], 40, 2]
###Markdown
Mutable vs Immutable objects* mutable = containers that can change (e.g. list, set, dict)* immutable = values that can't be changed (e.g. number, string, tuple) Copy vs Reference* Immutable objects are copied* Mutable objects are passed by reference
###Code
x = 3
y = x
x = 2
y
a = [1, 2, 3]
b = a
a[1] = 100
b
###Output
_____no_output_____
###Markdown
copy
###Code
import copy # import copy module
a = [1, 2, 3]
b = copy.copy(a)
a[1] = 100
b
# copy does a shallow copy (nested containers are NOT copied)
a = [[1, 2, 3], [4, 5, 6]]
b = copy.copy(a)
a[1][1] = 100
b
###Output
_____no_output_____
###Markdown
deepcopy
###Code
# deepcopy copies everything including all levels of nested containers
a = [[1, 2, 3], [4, 5, 6]]
b = copy.deepcopy(a)
a[1][1] = 100
b, a
###Output
_____no_output_____
###Markdown
StringsA list of chars.
###Code
a_str = "Hello"
print(a_str[1])
print(a_str[-1])
###Output
e
o
###Markdown
Tuples* A collection of any kind of objects.* Items can be accessed by index just like lists.* Tuples are immutable, they can't be changed, umm..., unless the item in the tuple is itself a mutable container like a list.
###Code
tup = (1, True, 3.5, [1, 2, 3])
tup[0] # 1st item
# Tuples can't change, they're immutable
tup = (1, True, 3.5, [1, 2, 3])
tup[0] = 2
# But mutable objects like lists inside tuples can change!
tup = (1, True, 3.5, [1, 2, 3])
print(tup)
tup[-1][0] = 100
print(tup)
###Output
(1, True, 3.5, [1, 2, 3])
(1, True, 3.5, [100, 2, 3])
###Markdown
Dictionaries* Collection of (key, value) pairs.* Values accessed by key instead of by index.
###Code
colors = {"apple": "Red", "banana": "Yellow", "lettuce": "Green", "gray_RGB": [0.5, 0.5, 0.5]}
colors["apple"] = "Green"
colors["strawberry"] = "Red"
print("apples are", colors["apple"])
print("bananas are", colors["banana"])
print("lettuce is", colors["lettuce"])
print("The RGB values for gray are", colors["gray_RGB"])
print()
print(colors.keys())
print(colors.values())
###Output
apples are Green
bananas are Yellow
lettuce is Green
The RGB values for gray are [0.5, 0.5, 0.5]
dict_keys(['apple', 'banana', 'lettuce', 'gray_RGB', 'strawberry'])
dict_values(['Green', 'Yellow', 'Green', [0.5, 0.5, 0.5], 'Red'])
###Markdown
Conditionals
###Code
print("1 == 1 is", 1 == 1)
print("1.0 == 1 is", 1.0 == 1)
print("1.5 == 1 is ", 1.5 == 1)
print("1.5 != 1 is ", 1.5 != 1)
print("3 < 5.5 is ", 3 < 5.5)
print("-1 > 2 is ", -1 > 2)
print("7 <= 7.1 is ", 7 <= 7.1)
print("7 >= 7 is", 7 >= 7)
print("(7 == 7) == False is", (7 == 7) == False)
print("not (7 == 7) is", not (7 == 7))
###Output
1 == 1 is True
1.0 == 1 is True
1.5 == 1 is False
1.5 != 1 is True
3 < 5.5 is True
-1 > 2 is False
7 <= 7.1 is True
7 >= 7 is True
(7 == 7) == False is False
not (7 == 7) is False
###Markdown
Logical AND, OR
###Code
print("(1 == 1) and (2 == 2) is", (1 == 1) and (2 == 2))
print("(1 == 1) and (1 == 2) is", (1 == 1) and (1 == 2))
print("(1 == 1) or (1 == 2) is", (1 == 1) or (1 == 2))
print("(1 == 0) or (1 == 2) is", (1 == 0) or (1 == 2))
###Output
(1 == 1) and (2 == 2) is True
(1 == 1) and (1 == 2) is False
(1 == 1) or (1 == 2) is True
(1 == 0) or (1 == 2) is False
###Markdown
if statements
###Code
x = 3
if x > 5:
print("x < 5")
y = 3
if False:
print("stuffg")
z = 7
z = 4
if x >= 5:
print("x >= 5")
else:
print("x < 5")
if x >= 5:
print("x >= 5")
elif x < 5 and x > 3:
print("3 < x < 5")
else:
print("x <= 3")
###Output
x <= 3
###Markdown
for loops
###Code
arr = [1, 2, 3.0, 4.5, "hello", True, ['a', 'b', 'c']]
for i in arr:
print(i)
# Loop over both indexes and values
for (index, item) in enumerate(arr):
print(index, item)
# range(start, stop)
for index in range(0, len(arr)):
print(index, arr[index])
# Every 3rd number from 10-19.
import time
for i in range(10, 20, 3):
print(i)
time.sleep(1) # sleep for 1 sec, notice that the output is asynchronous
###Output
10
13
16
19
###Markdown
while loops
###Code
n = 6
while n <= 10:
print(n)
n += 1
###Output
6
7
8
9
10
###Markdown
Exiting a loop
###Code
for n in range(1, 6):
if n == 4:
break
print(n)
###Output
1
2
3
###Markdown
Skipping to the next loop iteration
###Code
for n in range(1, 6):
if n < 3:
continue
print(n)
###Output
3
4
5
###Markdown
Iterate a dictionary's (key, value) pairs
###Code
colors = {"apple": "Red", "banana": "Yellow", "lettuce": "Green", "gray_RGB": [0.5, 0.5, 0.5]}
for (key, value) in colors.items():
print(key, ":", value)
###Output
apple : Red
banana : Yellow
lettuce : Green
gray_RGB : [0.5, 0.5, 0.5]
###Markdown
Save pickled data* pickle saves data in binary format
###Code
import pickle # get the pickle module
data = ["Some", "data", 3.5, True, arr, colors]
# Save data to file
with open("my_data.dat", "wb") as f: # open file for binary writing
pickle.dump(data, f)
data
###Output
_____no_output_____
###Markdown
Load pickled data* !!! WARNING !!! This is unsecure! But if you or someone you trust pickled the data, then it should be no problem. However, I would NOT unpickle some unknown data you downloaded. It might break your computer.
###Code
# Load data from file
with open("my_data.dat", "rb") as f: # open file for binary reading
newdata = pickle.load(f)
newdata
###Output
_____no_output_____
###Markdown
Functions
###Code
def say_hi(name):
print("Hi", name)
say_hi("Tim")
add_numbers(3, 4.5)
def add_numbers(x, y):
return x + y
add_numbers(3, 4.5)
def get_sub_and_prod(x, y=2):
sub = x - y
prod = x * y
return sub, prod
s, p = get_sub_and_prod(2, 3)
s, p
###Output
_____no_output_____
###Markdown
Default and named function arguments
###Code
s, p = get_sub_and_prod(2)
s, p
s, p = get_sub_and_prod(x=2, y=3)
s, p
s, p = get_sub_and_prod(y=3, x=2)
s, p
###Output
_____no_output_____
###Markdown
Variable scope
###Code
x = 3
def myfunc(x):
x = 2
myfunc(x)
x
yy
###Output
_____no_output_____
###Markdown
Immutable args are copied, whereas mutable args are passed by reference
###Code
a = [1, 2, 3]
def myarrayfunc(arr):
arr[1] = 100
myarrayfunc(a)
a
###Output
_____no_output_____
###Markdown
List comprehensions
###Code
[x**2 for x in range(10)] # range(stop) == range(0, stop)
S = [2**i for i in range(13)]
S
[x for x in S if x < 128]
###Output
_____no_output_____
###Markdown
Classes
###Code
class Animal:
"""
An animal with some number of legs.
"""
def __init__(self, num_legs=0):
# initialize the class
self.num_legs = num_legs # member variables self.xxx
def run_away(self):
num_legs = 6
if self.num_legs > 0:
print("Running away...")
else:
print("Slithering away...")
larva = Animal()
snake = Animal(num_legs=0)
cat = Animal(4)
larva.run_away()
snake.run_away()
cat.run_away()
larva.num_legs, snake.num_legs, cat.num_legs
###Output
Slithering away...
Slithering away...
Running away...
|
titanic/titanic_survival_exploration[1].ipynb | ###Markdown
Machine Learning Engineer Nanodegree Introduction and Foundations Project: Titanic Survival ExplorationIn 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.> **Tip:** Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. Getting StartedTo begin working with the RMS Titanic passenger data, we'll first need to `import` the functionality we need, and load our data into a `pandas` DataFrame. Run the code cell below to load our data and display the first few entries (passengers) for examination using the `.head()` function.> **Tip:** You can run a code cell by clicking on the cell and using the keyboard shortcut **Shift + Enter** or **Shift + Return**. Alternatively, a code cell can be executed using the **Play** button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. [Markdown](http://daringfireball.net/projects/markdown/syntax) allows you to write easy-to-read plain text that can be converted to HTML.
###Code
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
###Output
_____no_output_____
###Markdown
From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:- **Survived**: Outcome of survival (0 = No; 1 = Yes)- **Pclass**: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)- **Name**: Name of passenger- **Sex**: Sex of the passenger- **Age**: Age of the passenger (Some entries contain `NaN`)- **SibSp**: Number of siblings and spouses of the passenger aboard- **Parch**: Number of parents and children of the passenger aboard- **Ticket**: Ticket number of the passenger- **Fare**: Fare paid by the passenger- **Cabin** Cabin number of the passenger (Some entries contain `NaN`)- **Embarked**: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)Since we're interested in the outcome of survival for each passenger or crew member, we can remove the **Survived** feature from this dataset and store it as its own separate variable `outcomes`. We will use these outcomes as our prediction targets. Run the code cell below to remove **Survived** as a feature of the dataset and store it in `outcomes`.
###Code
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
###Output
_____no_output_____
###Markdown
The very same sample of the RMS Titanic data now shows the **Survived** feature removed from the DataFrame. Note that `data` (the passenger data) and `outcomes` (the outcomes of survival) are now *paired*. That means for any passenger `data.loc[i]`, they have the survival outcome `outcomes[i]`.To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how *accurate* our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our `accuracy_score` function and test a prediction on the first five passengers. **Think:** *Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?*
###Code
def accuracy_score(truth, pred):
""" Returns accuracy score for input truth and predictions. """
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
###Output
Predictions have an accuracy of 60.00%.
###Markdown
> **Tip:** If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off. Making PredictionsIf we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking. The `predictions_0` function below will always predict that a passenger did not survive.
###Code
def predictions_0(data):
""" Model with no features. Always predicts a passenger did not survive. """
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
###Output
_____no_output_____
###Markdown
Question 1*Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?* **Hint:** Run the code cell below to see the accuracy of this prediction.
###Code
print accuracy_score(outcomes, predictions)
###Output
Predictions have an accuracy of 61.62%.
###Markdown
**Answer:** *Replace this text with the prediction accuracy you found above.* Predictions have an accuracy of 61.62%. ***Let's take a look at whether the feature **Sex** has any indication of survival rates among passengers using the `survival_stats` function. This function is defined in the `visuals.py` Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across. Run the code cell below to plot the survival outcomes of passengers based on their sex.
###Code
vs.survival_stats(data, outcomes, 'Sex')
###Output
_____no_output_____
###Markdown
Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females *did* survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive. Fill in the missing code below so that the function will make this prediction. **Hint:** You can access the values of each feature for a passenger like a dictionary. For example, `passenger['Sex']` is the sex of the passenger.
###Code
def predictions_1(data):
""" Model with one feature:
- Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
#pass
if passenger['Sex']=="female":
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
###Output
_____no_output_____
###Markdown
Question 2*How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?* **Hint:** Run the code cell below to see the accuracy of this prediction.
###Code
print accuracy_score(outcomes, predictions)
###Output
Predictions have an accuracy of 78.68%.
###Markdown
**Answer**: *Replace this text with the prediction accuracy you found above.* Predictions have an accuracy of 78.68%. ***Using just the **Sex** feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the **Age** of each male, by again using the `survival_stats` function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the **Sex** 'male' will be included. Run the code cell below to plot the survival outcomes of male passengers based on their age.
###Code
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
###Output
_____no_output_____
###Markdown
Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older *did not survive* the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive. Fill in the missing code below so that the function will make this prediction. **Hint:** You can start your implementation of this function using the prediction code you wrote earlier from `predictions_1`.
###Code
def predictions_2(data):
""" Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
#pass
if passenger["Sex"]=="female":
predictions.append(1)
#elif passenger["Sex"]=="male":
# predictions.append(0)
elif passenger["Sex"]=="male" and passenger["Age"] < 10:
predictions.append(1)
#elif passenger["Sex"]=="male" and passenger["Age"] > 10:
# predictions.append(0)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
###Output
_____no_output_____
###Markdown
Question 3*How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?* **Hint:** Run the code cell below to see the accuracy of this prediction.
###Code
print accuracy_score(outcomes, predictions)
###Output
Predictions have an accuracy of 79.35%.
###Markdown
Predictions have an accuracy of 79.35%. **Answer**: *Replace this text with the prediction accuracy you found above.* ***Adding the feature **Age** as a condition in conjunction with **Sex** improves the accuracy by a small margin more than with simply using the feature **Sex** alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. **Pclass**, **Sex**, **Age**, **SibSp**, and **Parch** are some suggested features to try.Use the `survival_stats` function below to to examine various survival statistics. **Hint:** To use mulitple filter conditions, put each condition in the list passed as the last argument. Example: `["Sex == 'male'", "Age < 18"]`
###Code
vs.survival_stats(data, outcomes, 'Sex', [ "Pclass == 3" ])
###Output
_____no_output_____
###Markdown
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"])
###Code
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'female'" , "Embarked == C"])
###Output
_____no_output_____
###Markdown
After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model. **Hint:** You can start your implementation of this function using the prediction code you wrote earlier from `predictions_2`.
###Code
def predictions_3(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
#pass
#if passenger["Sex"] == "female" :
if passenger["Sex"] == "female":
if passenger["Pclass"] ==3 :
predictions.append(0)
else:
predictions.append(1)
else:
if passenger['Age'] < 10 and passenger['Pclass'] in (1, 2):
predictions.append(1)
elif passenger['Age'] < 18 and passenger['Pclass'] == 1:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
###Output
_____no_output_____
###Markdown
Question 4*Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?* **Hint:** Run the code cell below to see the accuracy of your predictions.
###Code
print accuracy_score(outcomes, predictions)
###Output
Predictions have an accuracy of 80.13%.
|
examples/ch05/snippets_ipynb/05_11.ipynb | ###Markdown
5.11 Simulating Stacks with Lists
###Code
stack = []
stack.append('red')
stack
stack.append('green')
stack
stack.pop()
stack
stack.pop()
stack
stack.pop()
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
###Output
_____no_output_____
###Markdown
5.11 Simulating Stacks with Lists
###Code
stack = []
stack.append('red')
stack
stack.append('green')
stack
stack.pop()
stack
stack.pop()
stack
stack.pop()
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
###Output
_____no_output_____ |
examples/StudentPerformanceData_BiasCheck.ipynb | ###Markdown
audit-AIIn this notebook, I'll be diving into the capabilities of pymetrics-bias-testing-package as a tool to measure and mitigate the effects discriminatory patterns in training data and the predictions made by machine learning algorithms trained for the purposes of socially sensitive decision processes.The overall goal of this research is to come up with a reasonable way to think about how to make machine learning algorithms more fair. While identifying potential bias in training datasets and by consequence the machine learning algorithms trained on them is not sufficient to solve the problem of discrimination, in a world where more and more decisions are being automated by Artifical Intelligence, our ability to understand and identify the degree to which an algorithm is fair or biased is a step in the right direction.In this notebook, I'll be using the Student Performance Data Set from the UCI Machine Learning Repository, which consists of 385 students with 33 input variables (including sex, age, and health status) and 1 continuous target variable G3, which is the overall score that ranges from 0-20.In the context of a machine learning model to predict grades , our objectives are to:1. Measure the degree of bias in the training data with respect to some bias metric and protected class(age, gender, and previous performance).2. Train a model on the training data, make predictions on a testing set, and show that analgorithm trained on a biased training set lead to a biased algorithm.3. Measure the effects of using a biased machine learning model. Import packages
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import (GridSearchCV,
learning_curve,
ShuffleSplit,
train_test_split)
from sklearn.preprocessing import (LabelEncoder,
StandardScaler)
from sklearn.metrics import (accuracy_score,
classification_report,
mean_absolute_error,
precision_score,
recall_score)
from auditai.misc import bias_test_check
from auditai.viz import (plot_group_proportions,
plot_kdes,
plot_threshold_tests)
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Import Data
###Code
data = pd.read_csv('../data/student-mat.csv', delimiter=';')
data.head()
# Take the final grade and converting that into a percentage
data['Grade'] = data['G3']/20*100
###Output
_____no_output_____
###Markdown
Measure Bias in Training data Protected Class: Sex
###Code
# Preliminary stats
breakdown_by_gender = pd.value_counts(data['sex'].values, sort=False)
breakdown_by_gender
female_students = data[data['sex'] == 'F']
male_students = data[data['sex'] == 'M']
print ('Mean grade% of female students = ', np.mean(female_students['Grade']))
print ('Median grade% of female students = ', np.median(female_students['Grade']))
print ('Mean grade% of male students = ', np.mean(male_students['Grade']))
print ('Median grade% of male students = ', np.median(male_students['Grade']))
sns.set()
sns.distplot(female_students['Grade'], label='Female students')
sns.distplot(male_students['Grade'], label='Male Students')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
As we can see the mean and median scores for male students is higher than that of female students. Also the distribution of grades for male and female students show that in the training data, male students perform slightly better than female students. There is a possibility that a machine learning algorithm trained on this data will assume for all other features being the same, that a male student will score more than a female student. Protected Class: Previous Academic performance
###Code
data['failures'].value_counts()
# Preliminary stats
data['failed_before'] = data['failures'] != 0
breakdown_by_failed_before = pd.value_counts(data['failed_before'].values, sort=False)
breakdown_by_failed_before
failed_before = data[data['failed_before'] == True]
never_failed_before = data[data['failed_before'] == False]
print('Mean grade% of students who have failed before = ', np.mean(failed_before['Grade']))
print('Median grade% of students who have failed before = ', np.mean(failed_before['Grade']))
print('Mean grade% of students who have never failed before = ', np.mean(never_failed_before['Grade']))
print('Median grade% of students who have never failed before = ', np.median(never_failed_before['Grade']))
sns.set()
sns.distplot(failed_before['Grade'], label='Students who have failed before')
sns.distplot(never_failed_before['Grade'], label='Students who have never failed before')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
As we can see the mean and median scores for students who have never failed before is significantly higher than those who have failed before. While history is sometimes a good predictore of the future, a machine learning model trained on this dataset will be very harsh in terms of penalizing students who have failed before. Preprocessing and feature engineering
###Code
# One hot encoding categorical features
nominal_features = ['Mjob',
'Fjob',
'reason',
'guardian']
# One hot encoding for all nominal features
data = pd.get_dummies(data, columns = nominal_features)
# Label encoding binary features
binary_features = [
'school',
'sex',
'address',
'famsize',
'Pstatus',
'schoolsup',
'famsup',
'paid',
'activities',
'nursery',
'higher',
'internet',
'romantic'
]
# Label Encoding for binary features
le = LabelEncoder()
for f in binary_features:
if f =='sex':
data['sex_lenc'] = le.fit_transform(data[f])
else:
data[f] = le.fit_transform(data[f])
features = ['school', 'sex_lenc', 'age', 'address', 'famsize', 'Pstatus', 'Medu', 'Fedu',
'traveltime', 'studytime', 'schoolsup', 'famsup', 'paid',
'activities', 'nursery', 'higher', 'internet', 'romantic', 'famrel',
'freetime', 'goout', 'Dalc', 'Walc', 'health', 'absences', 'failed_before',
'Mjob_at_home', 'Mjob_health' , 'Mjob_other', 'Mjob_services', 'Mjob_teacher', 'Fjob_at_home',
'Fjob_health', 'Fjob_other', 'Fjob_services', 'Fjob_teacher',
'reason_course', 'reason_home', 'reason_other', 'reason_reputation',
'guardian_father', 'guardian_mother', 'guardian_other']
X = data[features]
y = data['Grade']
# Creating training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42)
# Create linear regression object
regr = LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# Make predictions using the testing set
y_pred = regr.predict(X_test)
# The root mean squared error
print("Mean Absolute Error: %.2f %%"
% (mean_absolute_error(y_test, y_pred)))
###Output
Mean Absolute Error: 16.18 %
###Markdown
Testing models for bias against protected classes Protected class: Sex
###Code
X_test['sex'] = X_test['sex_lenc'].apply(lambda x: 'Male' if x == 1 else 'Female')
bias_test_check(X_test['sex_lenc'], y_pred, category='Gender')
a = plot_kdes(labels=X_test['sex'],results=y_pred, category='Gender')
###Output
_____no_output_____
###Markdown
Protected class: Previous performance
###Code
bias_test_check(X_test['failed_before'], y_pred, category='Previous performance')
b = plot_kdes(labels=X_test['failed_before'],results=y_pred)
###Output
_____no_output_____
###Markdown
audit-AIIn this notebook, I'll be diving into the capabilities of pymetrics-bias-testing-package as a tool to measure and mitigate the effects discriminatory patterns in training data and the predictions made by machine learning algorithms trained for the purposes of socially sensitive decision processes.The overall goal of this research is to come up with a reasonable way to think about how to make machine learning algorithms more fair. While identifying potential bias in training datasets and by consequence the machine learning algorithms trained on them is not sufficient to solve the problem of discrimination, in a world where more and more decisions are being automated by Artifical Intelligence, our ability to understand and identify the degree to which an algorithm is fair or biased is a step in the right direction.In this notebook, I'll be using the Student Performance Data Set from the UCI Machine Learning Repository, which consists of 385 students with 33 input variables (including sex, age, and health status) and 1 continuous target variable G3, which is the overall score that ranges from 0-20.In the context of a machine learning model to predict grades , our objectives are to:1. Measure the degree of bias in the training data with respect to some bias metric and protected class(age, gender, and previous performance).2. Train a model on the training data, make predictions on a testing set, and show that analgorithm trained on a biased training set lead to a biased algorithm.3. Measure the effects of using a biased machine learning model. Import packages
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import (GridSearchCV,
learning_curve,
ShuffleSplit,
train_test_split)
from sklearn.preprocessing import (LabelEncoder,
StandardScaler)
from sklearn.metrics import (accuracy_score,
classification_report,
mean_absolute_error,
precision_score,
recall_score)
from auditai.misc import bias_test_check
from auditai.viz import (plot_group_proportions,
plot_kdes,
plot_threshold_tests)
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Import Data
###Code
data = pd.read_csv('../data/student-mat.csv', delimiter=';')
data.head()
# Take the final grade and converting that into a percentage
data['Grade'] = data['G3']/20*100
###Output
_____no_output_____
###Markdown
Measure Bias in Training data Protected Class: Sex
###Code
# Preliminary stats
breakdown_by_gender = pd.value_counts(data['sex'].values, sort=False)
breakdown_by_gender
female_students = data[data['sex'] == 'F']
male_students = data[data['sex'] == 'M']
print ('Mean grade% of female students = ', np.mean(female_students['Grade']))
print ('Median grade% of female students = ', np.median(female_students['Grade']))
print ('Mean grade% of male students = ', np.mean(male_students['Grade']))
print ('Median grade% of male students = ', np.median(male_students['Grade']))
sns.set()
sns.distplot(female_students['Grade'], label='Female students')
sns.distplot(male_students['Grade'], label='Male Students')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
As we can see the mean and median scores for male students is higher than that of female students. Also the distribution of grades for male and female students show that in the training data, male students perform slightly better than female students. There is a possibility that a machine learning algorithm trained on this data will assume for all other features being the same, that a male student will score more than a female student. Protected Class: Previous Academic performance
###Code
data['failures'].value_counts()
# Preliminary stats
data['failed_before'] = data['failures'] != 0
breakdown_by_failed_before = pd.value_counts(data['failed_before'].values, sort=False)
breakdown_by_failed_before
failed_before = data[data['failed_before'] == True]
never_failed_before = data[data['failed_before'] == False]
print('Mean grade% of students who have failed before = ', np.mean(failed_before['Grade']))
print('Median grade% of students who have failed before = ', np.median(failed_before['Grade']))
print('Mean grade% of students who have never failed before = ', np.mean(never_failed_before['Grade']))
print('Median grade% of students who have never failed before = ', np.median(never_failed_before['Grade']))
sns.set()
sns.distplot(failed_before['Grade'], label='Students who have failed before')
sns.distplot(never_failed_before['Grade'], label='Students who have never failed before')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
As we can see, the mean and median scores for students who have never failed before are significantly higher than those of students who have failed before. While history is sometimes a good predictor of the future, a machine learning model trained on this dataset will be very harsh in penalizing students who have failed before.
Preprocessing and feature engineering
###Code
# One hot encoding categorical features
nominal_features = ['Mjob',
'Fjob',
'reason',
'guardian']
# One hot encoding for all nominal features
data = pd.get_dummies(data, columns = nominal_features)
# Label encoding binary features
binary_features = [
'school',
'sex',
'address',
'famsize',
'Pstatus',
'schoolsup',
'famsup',
'paid',
'activities',
'nursery',
'higher',
'internet',
'romantic'
]
# Label Encoding for binary features
le = LabelEncoder()
for f in binary_features:
if f =='sex':
data['sex_lenc'] = le.fit_transform(data[f])
else:
data[f] = le.fit_transform(data[f])
features = ['school', 'sex_lenc', 'age', 'address', 'famsize', 'Pstatus', 'Medu', 'Fedu',
'traveltime', 'studytime', 'schoolsup', 'famsup', 'paid',
'activities', 'nursery', 'higher', 'internet', 'romantic', 'famrel',
'freetime', 'goout', 'Dalc', 'Walc', 'health', 'absences', 'failed_before',
'Mjob_at_home', 'Mjob_health' , 'Mjob_other', 'Mjob_services', 'Mjob_teacher', 'Fjob_at_home',
'Fjob_health', 'Fjob_other', 'Fjob_services', 'Fjob_teacher',
'reason_course', 'reason_home', 'reason_other', 'reason_reputation',
'guardian_father', 'guardian_mother', 'guardian_other']
X = data[features]
y = data['Grade']
# Creating training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42)
# Create linear regression object
regr = LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# Make predictions using the testing set
y_pred = regr.predict(X_test)
# The mean absolute error
print("Mean Absolute Error: %.2f %%"
% (mean_absolute_error(y_test, y_pred)))
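# Illustrative addition (not in the original notebook): break the test error down by the
# protected class to see whether the model is equally accurate for both groups.
# `sex_lenc` is the label-encoded sex column created above (0 = Female, 1 = Male).
for code, group in [(0, 'female'), (1, 'male')]:
    mask = (X_test['sex_lenc'] == code).values
    print("Mean Absolute Error (%s students): %.2f %%"
          % (group, mean_absolute_error(y_test[mask], y_pred[mask])))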
###Output
Mean Absolute Error: 16.18 %
###Markdown
Testing models for bias against protected classes Protected class: Sex
###Code
X_test['sex'] = X_test['sex_lenc'].apply(lambda x: 'Male' if x == 1 else 'Female')
bias_test_check(X_test['sex_lenc'], y_pred, category='Gender')
a = plot_kdes(labels=X_test['sex'],results=y_pred, category='Gender')
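# Illustrative addition (not in the original notebook): the raw signal behind the test
# above is simply the gap in average predicted grade between the two groups.
print(pd.Series(y_pred, index=X_test.index).groupby(X_test['sex']).mean())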
###Output
_____no_output_____
###Markdown
Protected class: Previous performance
###Code
bias_test_check(X_test['failed_before'], y_pred, category='Previous performance')
b = plot_kdes(labels=X_test['failed_before'],results=y_pred)
###Output
_____no_output_____ |
pyram/mock/RGB_image_PTS.ipynb | ###Markdown
Starting from an already performed simulation result (not run in this Python instance). Do not delete the log file. Keep only one run per directory.
###Code
#from pts.do.prompt import do
#%matplotlib inline
#%matplotlib notebook
import numpy as np
import matplotlib
matplotlib.use("Qt5Agg")
from matplotlib import rcParams
import matplotlib.pyplot as plt
import logging # This is a Python standard package.
import pts.simulation as sm
import pts.utils as ut
import pts.visual as vis
from glob import glob
from PIL import Image
import pyram
#redundant = np.genfromtxt("../list_aexprestart_nout.txt", dtype=[('nout','int'),('zred','float')])
#good_nouts = np.setdiff1d(np.arange(900), redundant["nout"])
###Output
_____no_output_____
###Markdown
RGB image
Looking at pts/visual/do/make_images.py. Two use cases:
1. Specify wavelengths
2. Specify bands
Q: Any support for an SB (surface brightness) limit?
Q: Are decades not applied to the arcsinh stretch?
###Code
import pts.band as bnd
wavelengths = None
if wavelengths is not None:
tuples = { name: wavelengths << sm.unit("micron") }
for sim in sims:
vis.makeRGBImages(sim, wavelengthTuples=tuples, fileType=type)
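# Note (added comment): `name`, `sims` and `type` in the branch above are placeholders
# copied from pts/visual/do/make_images.py and are not defined in this notebook; the
# branch is skipped because `wavelengths` is None, and the band-based use case is the
# one demonstrated in the loop below.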
target_nouts=[169, 174, 179, 183,
188, 193, 198, 202,
206, 211, 216, 221]
good_nouts=[110, 158, 169, 174, 179,
183, 188, 193, 198, 202, 206,
208, 211, 216, 222, 232, 237,
242, 247, 300, 446]
output_type = "total"
quality = ["intermediate", "high"][0]
if quality == 'intermediate':
name = "SDSS_RGB_LQ"
prefix = lambda nout: f"g13_{nout}"
decs=[3.5,4,4.5,5,5.5]
elif quality == "high":
name = "SDSS_RGB_HQ"
prefix = lambda nout: f"g13_{nout}_HQ"
decs=[3.5,4,4.5]
# Keep the flux range the same throughout snapshots.
fmin_ash, fmax_ash = 2e-3, 2e2
fmin_log1, fmax_log1 = None, None
fmin_log2, fmax_log2 = 5e-3, 5e2
fmin_log3, fmax_log3 = None, None
all_fns=[]
#for nout in target_nouts[]:
for nout in [206]:
# try bands
#colors = "SDSS_Z,SDSS_G,SDSS_U"#,
colors = "SDSS_I,SDSS_G,SDSS_U" #MASS_2MASS_H,2MASS_2MASS_J,2MASS_2MASS_KS"
segments = colors.split(',')
if len(segments) != 3:
raise ut.UserError("colors argument must have three comma-separated segments")
try: bands = [ bnd.BroadBand(segment) for segment in segments ]
except ValueError: bands = None
print("Generating mock images... nout=", nout)
repo = f"./01000/{nout:05d}/"#faceon_redshift_"
# load values from JP's value
sim = sm.createSimulation(outDirPath=repo, prefix=prefix(nout))
skifile = sim.parameters()
#for inst in sim.instruments():
inst = sim.instruments()[0]
totalfluxpath = inst.outFilePaths(fileType="total.fits")[0]
datacube = sm.loadFits(totalfluxpath) # loadFits & getFitsAxes return 'astropy' quantities with units attached.
x,y,wavelengths = sm.getFitsAxes(totalfluxpath)
if bands is not None:
# 0.6, 0.75, 1.2
contributions = [ (bands[0], 0.6, 0, 0), (bands[1], 0, 0.75, 0), (bands[2], 0, 0, 1.4) ]
# Could loop over sims
# Make RGB images of ALL instruments.
#fmax_ash, fn_ash = vis.makeConvolvedRGBImages(sim,
# contributions=contributions,
# fileType=output_type,
# name=name,
# stretch=np.arcsinh,
# decades=5,
# fmin=fmin_ash,
# fmax=fmax_ash)
print("arcsinh", fmin_ash, fmax_ash)
fmax_log0, fns_log0 = vis.makeConvolvedRGBImages(sim,
contributions=contributions,
fileType=output_type,
name=name,
stretch='log',
decades=decs[0],
fmin=fmin_log1,
fmax=fmax_log1)
fmax_log1, fns_log1 = vis.makeConvolvedRGBImages(sim,
contributions=contributions,
fileType=output_type,
name=name,
stretch='log',
decades=decs[1],
fmin=fmin_log2,
fmax=fmax_log2)
#fmin_log2, fmax_log2, fns_log2 = vis.makeConvolvedRGBImages(sim,
# contributions=contributions,
# fileType=output_type,
# name=name,
# stretch='log',
# decades=decs[2],
# fmin=fmin_log3,
# fmax=fmax_log3,
# return_fn=True)
#print(dec, fmin_log3, fmax_log3)
#all_fns.append(fns_log2)
###Output
Generating mock images... nout= 206
arcsinh 0.002 200.0
###Markdown
Merge stamps
###Code
# im.histogram()  # leftover from interactive inspection; `im` is only defined inside the loop below
from PIL import ImageOps
rotate = False
fig, axs = plt.subplots(3,4)
fig.set_size_inches(8,6)
axs = axs.ravel()
ori ='faceon'
getfn = lambda nout: f"/home/hoseung/Work/data/NH/JP/01000/00{nout}/g13_{nout}_{ori}_total_SDSS_RGB_LQ_log_dec5.0.png"
target_nouts=[169, 174, 179, 183,
188, 193, 198, 202,
206, 211, 216, 221]
crop_dx = 250
npix=1000
normalize=True
for ax, nout in zip(axs,target_nouts):
fn = getfn(nout)
proj = fn.split("_total_")[0][-6:]
im = Image.open(fn)
if rotate:
im = im.rotate(80)
if normalize:
        im = ImageOps.autocontrast(im, cutoff=5)  # keep the returned image; autocontrast does not modify in place
im = im.crop((crop_dx,crop_dx,npix-crop_dx,npix-crop_dx))
#new_img_size = int((npix-2*crop_dx)/1)
#im = im.resize((new_img_size,new_img_size), resample=Image.BILINEAR)
ax.imshow(im)
ax.set_yticks(())
ax.set_xticks(())
s = pyram.load.sim.Sim(nout=nout, base='../')
if nout == target_nouts[0]:
t0 = s.info.tGyr
ax.text(0.05, 0.9, "z={:.2f}".format(s.info.zred), color='w', transform=ax.transAxes)
elif nout == target_nouts[-1]:
ax.text(0.05, 0.9, "+{:.2}Gyr z={:.2f}".format(s.info.tGyr - t0, s.info.zred),
color='w', transform=ax.transAxes)
else:
ax.text(0.05, 0.9, "+{:.2}Gyr".format(s.info.tGyr - t0), color='w', transform=ax.transAxes)
fig.subplots_adjust(left=0.1, right=0.97, top=0.97,bottom=0.1, wspace=0.01, hspace = .01)
# turn off ticks
axs = axs.reshape(3,4)
for ax in axs[:,0]:
ax.set_yticks((50,175,300)) # 1000 : 60 -> new_im....
ax.set_yticklabels(["15", "0", "-15"])
for ax in axs[2,:]:
ax.set_xticks((50,175,300))
ax.set_xticklabels(["-15", "0", "15"])
fig.text(0.54, 0.04, 'kpc', va='center', ha='center', fontsize=rcParams['axes.labelsize'])
fig.text(0.04, 0.54, 'kpc', va='center', ha='center', rotation='vertical', fontsize=rcParams['axes.labelsize'])
plt.savefig("sequence_" + proj + fn.split(".png")[0][-11:-1]+"UGI.pdf", dpi=300)
plt.close()
###Output
Age of the universe (now/z=0): -3.114 / 0.000 Gyr, z = 2.84816
Age of the universe (now/z=0): -3.114 / 0.000 Gyr, z = 2.84816
Simulation set up.
Age of the universe (now/z=0): -3.051 / 0.000 Gyr, z = 2.78239
Age of the universe (now/z=0): -3.051 / 0.000 Gyr, z = 2.78239
Simulation set up.
Age of the universe (now/z=0): -2.990 / 0.000 Gyr, z = 2.71959
Age of the universe (now/z=0): -2.990 / 0.000 Gyr, z = 2.71959
Simulation set up.
Age of the universe (now/z=0): -2.931 / 0.000 Gyr, z = 2.65899
Age of the universe (now/z=0): -2.931 / 0.000 Gyr, z = 2.65899
Simulation set up.
Age of the universe (now/z=0): -2.874 / 0.000 Gyr, z = 2.60104
Age of the universe (now/z=0): -2.874 / 0.000 Gyr, z = 2.60104
Simulation set up.
Age of the universe (now/z=0): -2.819 / 0.000 Gyr, z = 2.54501
Age of the universe (now/z=0): -2.819 / 0.000 Gyr, z = 2.54501
Simulation set up.
Age of the universe (now/z=0): -2.765 / 0.000 Gyr, z = 2.49096
Age of the universe (now/z=0): -2.765 / 0.000 Gyr, z = 2.49096
Simulation set up.
Age of the universe (now/z=0): -2.714 / 0.000 Gyr, z = 2.43911
Age of the universe (now/z=0): -2.714 / 0.000 Gyr, z = 2.43911
Simulation set up.
Age of the universe (now/z=0): -2.663 / 0.000 Gyr, z = 2.38883
Age of the universe (now/z=0): -2.663 / 0.000 Gyr, z = 2.38883
Simulation set up.
Age of the universe (now/z=0): -2.614 / 0.000 Gyr, z = 2.34040
Age of the universe (now/z=0): -2.614 / 0.000 Gyr, z = 2.34040
Simulation set up.
Age of the universe (now/z=0): -2.567 / 0.000 Gyr, z = 2.29380
Age of the universe (now/z=0): -2.567 / 0.000 Gyr, z = 2.29380
Simulation set up.
Age of the universe (now/z=0): -2.520 / 0.000 Gyr, z = 2.24836
Age of the universe (now/z=0): -2.520 / 0.000 Gyr, z = 2.24836
Simulation set up.
###Markdown
Smoothing?
###Code
# SED plot
micron = sm.unit("micron")
vis.plotSeds(sim, decades=4, figSize=(7,5), outFileName="try1_sed.png")
###Output
_____no_output_____ |
Unsupervised Learning/Hierarchical Clustering/Hierarchical_Clustering.ipynb | ###Markdown
Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Data Preprocessing
Importing the dataset
Each row corresponds to a customer.
Feature: spending score (evaluation metric) --> measures how much each customer spends.
We will identify some patterns within the customer base.
###Code
data = pd.read_csv("Mall_Customers.csv")
# Unsupervised learning has only X (features)
# feature - customer id is not needed for our model (exclude it)
# Note:
# To visualize our clusters, we will need 2 features from our dataset (one axis per feature; 2 features --> 2D plot), so
# for the time being, we will not consider features other than the 2 chosen.
# Features chosen -- Annual Income (index 3), Spending Score (index 4)
X = data.iloc[:, [3, 4]].values
# Take all rows, of column index 3 and 4
###Output
_____no_output_____
###Markdown
Also, since there is no y, we won't be splitting our dataset into Train and Test sets.
X[:, 0] --> Annual Income
X[:, 1] --> Spending Score
Using the dendrogram to find the optimal number of clusters
###Code
# Dendrogram plot is within the scipy library
import scipy.cluster.hierarchy as sch
plt.figure(figsize=(8, 6))
# create a dendrogram
dendrogram = sch.dendrogram(sch.linkage(X, method='ward')) # 'ward' is the recommended linkage in HC -- it merges the pair of clusters with the minimum variance increase
# method='ward' ---> minimizes variance in each cluster
# Plot the dendrogram - Visualization
plt.title('Dendrogram')
plt.xlabel('Customers') # each row - observation points
plt.ylabel('Euclidean distances')
plt.show()
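# Illustrative sketch (not in the original notebook): instead of reading the optimal
# number of clusters off the dendrogram by eye, the same linkage matrix can be cut
# programmatically; `fcluster` with criterion='maxclust' returns flat cluster labels.
Z = sch.linkage(X, method='ward')
labels_from_cut = sch.fcluster(Z, t=5, criterion='maxclust') # 5 clusters, labelled 1..5
print(np.bincount(labels_from_cut)[1:]) # size of each cluster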
###Output
_____no_output_____
###Markdown
Training the Hierarchical Clustering model on the dataset
###Code
# from the dendogram, optimal no. of clusters = 5
K = 5
from sklearn.cluster import AgglomerativeClustering
# Create the model
hc = AgglomerativeClustering(n_clusters=K, affinity='euclidean', linkage='ward')
# affinity --> distance metric used to compute the linkage between clusters
# ward --> minimum variance method
# fit_predict() --> trains model, and also returns y(dependent variable)
y_hc = hc.fit_predict(X) # just like y_pred
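# Illustrative sketch (not in the original notebook): summarise each cluster by its
# average annual income and spending score to help interpret the groups found above.
cluster_summary = pd.DataFrame(X, columns=['Annual Income (k$)', 'Spending Score'])
cluster_summary['cluster'] = y_hc
print(cluster_summary.groupby('cluster').mean())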
###Output
_____no_output_____
###Markdown
The ward method minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function, but tackled with an agglomerative hierarchical approach.
'affinity' refers to the distance metric used to compute the linkage, here the Euclidean distance. It defines how the HC algorithm measures how close observations (and clusters) are to each other.
###Code
print(y_hc)
# all the clusters that each customer belongs to
# first customer --> belongs to cluster 4, second customer --> cluster 3, third customer --> cluster 4, .......
###Output
[4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4
3 4 3 4 3 4 1 4 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 2 1 2 1 2 0 2 0 2 1 2 0 2 0 2 0 2 0 2 1 2 0 2 1 2
0 2 0 2 0 2 0 2 0 2 0 2 1 2 0 2 0 2 0 2 0 2 0 2 0 2 0 2 0 2 0 2 0 2 0 2 0
2 0 2 0 2 0 2 0 2 0 2 0 2 0 2]
###Markdown
Visualising the clusters
###Code
# Scatter plot each cluster separately
plt.figure(figsize=(8, 6))
# Plot the Clusters
# X --> Annual income, X[0]. y --> Spending Score, X[1]
# X[y_hc == 0, 0] --> select all customers from Annual Income that belong to Cluster 0
# X[y_hc == 0, 1] --> select all customers from Spending Score that belong to Cluster 0
plt.scatter(X[y_hc == 0, 0], X[y_hc == 0, 1], s=100, c = 'red', label='Cluster 0')
plt.scatter(X[y_hc == 1, 0], X[y_hc == 1, 1], s=100, c = 'blue', label='Cluster 1')
plt.scatter(X[y_hc == 2, 0], X[y_hc == 2, 1], s=100, c = 'green', label='Cluster 2')
plt.scatter(X[y_hc == 3, 0], X[y_hc == 3, 1], s=100, c = 'cyan', label='Cluster 3')
plt.scatter(X[y_hc == 4, 0], X[y_hc == 4, 1], s=100, c = 'magenta', label='Cluster 4')
plt.title("Clusters of Customers")
plt.xlabel("Annual Income (k$)")
plt.ylabel("Spending Score (1-100)")
plt.legend()
plt.show()
###Output
_____no_output_____ |
examples/wcs.ipynb | ###Markdown
Load raster data via WCS and xarray
###Code
from datetime import datetime
import geoengine as ge
###Output
_____no_output_____
###Markdown
Initialize Geo Engine
###Code
ge.initialize("http://localhost:3030")
session = ge.get_session()
session
###Output
_____no_output_____
###Markdown
Define workflow of MODIS NDVI raster
###Code
workflow = ge.register_workflow({
"type": "Raster",
"operator": {
"type": "GdalSource",
"params": {
"dataset": {
"type": "internal",
"datasetId": "36574dc3-560a-4b09-9d22-d5945f2b8093"
}
}
}
})
workflow
###Output
_____no_output_____
###Markdown
Query raster via WCS
###Code
time = datetime.strptime(
'2014-04-01T12:00:00.000Z', "%Y-%m-%dT%H:%M:%S.%f%z")
data = workflow.get_xarray(
ge.QueryRectangle(
[-180.0, -90.0, 180.0, 90.0],
[time, time],
resolution=[360. / 16, 180. / 16],
)
)
data
###Output
_____no_output_____
###Markdown
Plot the raster via matplotlib
###Code
data.plot()
###Output
_____no_output_____
###Markdown
Select North America (left upper part) from the data array via geo coordinates
###Code
data.sel(x=slice(-180, 0), y=slice(90, 0)).plot()
###Output
_____no_output_____
###Markdown
Get a pixel via geo coordinate
###Code
data.sel(x=[-150], y=[60], method="nearest")
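# Illustrative addition (not in the original notebook): `.values` returns the selected
# pixel as a plain NumPy array instead of an xarray object.
print(data.sel(x=[-150], y=[60], method="nearest").values)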
###Output
_____no_output_____ |
_doc/notebooks/onnx_float32_and_64.ipynb | ###Markdown
Side by side
We may wonder where the discrepancies start. But for that, we need to do a side-by-side comparison.
###Code
from mlprodict.onnxrt.validate.side_by_side import side_by_side_by_values
sbs = side_by_side_by_values([(oinf32, {'X': X_test.astype(numpy.float32)}),
(oinf64, {'X': X_test.astype(numpy.float64)})])
from pandas import DataFrame
df = DataFrame(sbs)
# dfd = df.drop(['value[0]', 'value[1]', 'value[2]'], axis=1).copy()
df
###Output
_____no_output_____
###Markdown
The differences really start for output ``'O0'``, after the matrix multiplication. This multiplication mixes numbers with very different orders of magnitude, and that alone explains the discrepancies between doubles and floats on that particular model.
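To see why, here is a small, self-contained illustration (an added sketch, not taken from the model above) of how float32 drops a small contribution next to a large one while float64 keeps it.
###Code
# Added sketch: combine values of very different magnitudes, then remove the large one.
# In float32 the tiny term is rounded away; in float64 it survives.
import numpy
big, small = 1e8, 1e-3
print(numpy.float32(big) + numpy.float32(small) - numpy.float32(big)) # 0.0
print(numpy.float64(big) + numpy.float64(small) - numpy.float64(big)) # ~0.001
###Output
_____no_output_____
###Markdown
The same effect, accumulated over the matrix multiplication, produces the relative differences plotted below.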
###Code
%matplotlib inline
ax = df[['name', 'v[1]']].iloc[1:].set_index('name').plot(kind='bar', figsize=(14,4), logy=True)
ax.set_title("Relative differences for each output between float32 and "
"float64\nfor a GaussianProcessRegressor");
###Output
_____no_output_____
###Markdown
Before going further, let's check how sensitive the trained model is to converting doubles into floats.
###Code
pg1 = gau.predict(X_test)
pg2 = gau.predict(X_test.astype(numpy.float32).astype(numpy.float64))
numpy.sort(numpy.sort(numpy.squeeze(pg1 - pg2)))[-5:]
###Output
_____no_output_____
###Markdown
Having float or double inputs should not matter. We confirm that with the model converted into ONNX.
###Code
p1 = oinf64.run({'X': X_test})['GPmean']
p2 = oinf64.run({'X': X_test.astype(numpy.float32).astype(numpy.float64)})['GPmean']
numpy.sort(numpy.sort(numpy.squeeze(p1 - p2)))[-5:]
###Output
_____no_output_____
###Markdown
Last verification.
###Code
sbs = side_by_side_by_values([(oinf64, {'X': X_test.astype(numpy.float32).astype(numpy.float64)}),
(oinf64, {'X': X_test.astype(numpy.float64)})])
df = DataFrame(sbs)
ax = df[['name', 'v[1]']].iloc[1:].set_index('name').plot(kind='bar', figsize=(14,4), logy=True)
ax.set_title("Relative differences for each output between float64 and float64 rounded to float32"
"\nfor a GaussianProcessRegressor");
###Output
_____no_output_____
###Markdown
ONNX graph, single or double floats
The notebook shows the discrepancies obtained by using double floats instead of single floats in two cases. The second one involves [GaussianProcessRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.GaussianProcessRegressor.html).
###Code
from jyquickhelper import add_notebook_menu
add_notebook_menu()
###Output
_____no_output_____
###Markdown
Simple case of a linear regression
A linear regression is simply a matrix multiplication followed by an addition: $Y=AX+B$. Let's train one with [scikit-learn](https://scikit-learn.org/stable/).
###Code
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
data = load_boston()
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
clr = LinearRegression()
clr.fit(X_train, y_train)
clr.score(X_test, y_test)
clr.coef_
clr.intercept_
###Output
_____no_output_____
###Markdown
Let's predict with *scikit-learn* and *python*.
###Code
ypred = clr.predict(X_test)
ypred[:5]
py_pred = X_test @ clr.coef_ + clr.intercept_
py_pred[:5]
clr.coef_.dtype, clr.intercept_.dtype
###Output
_____no_output_____
###Markdown
With ONNX
With *ONNX*, we would write this operation as follows... We still need to convert everything into single floats (float32).
###Code
%load_ext mlprodict
from skl2onnx.algebra.onnx_ops import OnnxMatMul, OnnxAdd
import numpy
onnx_fct = OnnxAdd(OnnxMatMul('X', clr.coef_.astype(numpy.float32), op_version=12),
numpy.array([clr.intercept_], dtype=numpy.float32),
output_names=['Y'], op_version=12)
onnx_model32 = onnx_fct.to_onnx({'X': X_test.astype(numpy.float32)})
# add -l 1 if nothing shows up
%onnxview onnx_model32
###Output
_____no_output_____
###Markdown
The next line uses a python runtime to compute the prediction.
###Code
from mlprodict.onnxrt import OnnxInference
oinf = OnnxInference(onnx_model32, inplace=False)
ort_pred = oinf.run({'X': X_test.astype(numpy.float32)})['Y']
ort_pred[:5]
###Output
_____no_output_____
###Markdown
And here is the same with [onnxruntime](https://github.com/microsoft/onnxruntime)...
###Code
from mlprodict.tools.asv_options_helper import get_ir_version_from_onnx
# line needed when onnx is more recent than onnxruntime
onnx_model32.ir_version = get_ir_version_from_onnx()
oinf = OnnxInference(onnx_model32, runtime="onnxruntime1")
ort_pred = oinf.run({'X': X_test.astype(numpy.float32)})['Y']
ort_pred[:5]
###Output
_____no_output_____
###Markdown
With double instead of single float
[ONNX](https://onnx.ai/) was originally designed for deep learning, which usually uses single floats, but that does not mean doubles cannot be used. Every number is converted into double floats.
###Code
onnx_fct = OnnxAdd(OnnxMatMul('X', clr.coef_.astype(numpy.float64), op_version=12),
numpy.array([clr.intercept_], dtype=numpy.float64),
output_names=['Y'], op_version=12)
onnx_model64 = onnx_fct.to_onnx({'X': X_test.astype(numpy.float64)})
###Output
_____no_output_____
###Markdown
And now the *python* runtime...
###Code
oinf = OnnxInference(onnx_model64)
ort_pred = oinf.run({'X': X_test})['Y']
ort_pred[:5]
###Output
_____no_output_____
###Markdown
And the *onnxruntime* version of it.
###Code
oinf = OnnxInference(onnx_model64, runtime="onnxruntime1")
ort_pred = oinf.run({'X': X_test.astype(numpy.float64)})['Y']
ort_pred[:5]
###Output
_____no_output_____
###Markdown
And now the GaussianProcessRegressor
This shows a case where the choice between single and double floats has a visible impact on the predictions.
###Code
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct
gau = GaussianProcessRegressor(alpha=10, kernel=DotProduct())
gau.fit(X_train, y_train)
from mlprodict.onnx_conv import to_onnx
onnxgau32 = to_onnx(gau, X_train.astype(numpy.float32))
oinf32 = OnnxInference(onnxgau32, runtime="python", inplace=False)
ort_pred32 = oinf32.run({'X': X_test.astype(numpy.float32)})['GPmean']
numpy.squeeze(ort_pred32)[:25]
onnxgau64 = to_onnx(gau, X_train.astype(numpy.float64))
oinf64 = OnnxInference(onnxgau64, runtime="python", inplace=False)
ort_pred64 = oinf64.run({'X': X_test.astype(numpy.float64)})['GPmean']
numpy.squeeze(ort_pred64)[:25]
###Output
_____no_output_____
###Markdown
The differences between the predictions for single floats and double floats...
###Code
numpy.sort(numpy.sort(numpy.squeeze(ort_pred32 - ort_pred64)))[-5:]
###Output
_____no_output_____
###Markdown
Who's right or wrong... The differences between the predictions with the original model...
###Code
pred = gau.predict(X_test.astype(numpy.float64))
numpy.sort(numpy.sort(numpy.squeeze(ort_pred32 - pred)))[-5:]
numpy.sort(numpy.sort(numpy.squeeze(ort_pred64 - pred)))[-5:]
###Output
_____no_output_____
###Markdown
Double predictions clearly win.
###Code
# add -l 1 if nothing shows up
%onnxview onnxgau64
###Output
_____no_output_____
###Markdown
Saves...
Let's keep track of it.
###Code
with open("gpr_dot_product_boston_32.onnx", "wb") as f:
f.write(onnxgau32.SerializePartialToString())
from IPython.display import FileLink
FileLink('gpr_dot_product_boston_32.onnx')
with open("gpr_dot_product_boston_64.onnx", "wb") as f:
f.write(onnxgau64.SerializePartialToString())
FileLink('gpr_dot_product_boston_64.onnx')
###Output
_____no_output_____
###Markdown
ONNX graph, single or double floats
The notebook shows the discrepancies obtained by using double floats instead of single floats in two cases. The second one involves [GaussianProcessRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.GaussianProcessRegressor.html).
###Code
from jyquickhelper import add_notebook_menu
add_notebook_menu()
###Output
_____no_output_____
###Markdown
Simple case of a linear regression
A linear regression is simply a matrix multiplication followed by an addition: $Y=AX+B$. Let's train one with [scikit-learn](https://scikit-learn.org/stable/).
###Code
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
data = load_boston()
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
clr = LinearRegression()
clr.fit(X_train, y_train)
clr.score(X_test, y_test)
clr.coef_
clr.intercept_
###Output
_____no_output_____
###Markdown
Let's predict with *scikit-learn* and *python*.
###Code
ypred = clr.predict(X_test)
ypred[:5]
py_pred = X_test @ clr.coef_ + clr.intercept_
py_pred[:5]
clr.coef_.dtype, clr.intercept_.dtype
###Output
_____no_output_____
###Markdown
With ONNX
With *ONNX*, we would write this operation as follows... We still need to convert everything into single floats (float32).
###Code
%load_ext mlprodict
from skl2onnx.algebra.onnx_ops import OnnxMatMul, OnnxAdd
import numpy
onnx_fct = OnnxAdd(OnnxMatMul('X', clr.coef_.astype(numpy.float32), op_version=12),
numpy.array([clr.intercept_], dtype=numpy.float32),
output_names=['Y'], op_version=12)
onnx_model32 = onnx_fct.to_onnx({'X': X_test.astype(numpy.float32)})
# add -l 1 if nothing shows up
%onnxview onnx_model32
###Output
_____no_output_____
###Markdown
The next line uses a python runtime to compute the prediction.
###Code
from mlprodict.onnxrt import OnnxInference
oinf = OnnxInference(onnx_model32)
ort_pred = oinf.run({'X': X_test.astype(numpy.float32)})['Y']
ort_pred[:5]
###Output
_____no_output_____
###Markdown
And here is the same with [onnxruntime](https://github.com/microsoft/onnxruntime)...
###Code
from mlprodict.tools.asv_options_helper import get_ir_version_from_onnx
# line needed when onnx is more recent than onnxruntime
onnx_model32.ir_version = get_ir_version_from_onnx()
oinf = OnnxInference(onnx_model32, runtime="onnxruntime1")
ort_pred = oinf.run({'X': X_test.astype(numpy.float32)})['Y']
ort_pred[:5]
###Output
_____no_output_____
###Markdown
With double instead of single float
[ONNX](https://onnx.ai/) was originally designed for deep learning, which usually uses single floats, but that does not mean doubles cannot be used. Every number is converted into double floats.
###Code
onnx_fct = OnnxAdd(OnnxMatMul('X', clr.coef_.astype(numpy.float64), op_version=12),
numpy.array([clr.intercept_], dtype=numpy.float64),
output_names=['Y'], op_version=12)
onnx_model64 = onnx_fct.to_onnx({'X': X_test.astype(numpy.float64)})
###Output
_____no_output_____
###Markdown
And now the *python* runtime...
###Code
oinf = OnnxInference(onnx_model64)
ort_pred = oinf.run({'X': X_test})['Y']
ort_pred[:5]
###Output
_____no_output_____
###Markdown
And the *onnxruntime* version of it.
###Code
oinf = OnnxInference(onnx_model64, runtime="onnxruntime1")
ort_pred = oinf.run({'X': X_test.astype(numpy.float64)})['Y']
ort_pred[:5]
###Output
_____no_output_____
###Markdown
And now the GaussianProcessRegressor
This shows a case where the choice between single and double floats has a visible impact on the predictions.
###Code
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct
gau = GaussianProcessRegressor(alpha=10, kernel=DotProduct())
gau.fit(X_train, y_train)
from mlprodict.onnx_conv import to_onnx
onnxgau32 = to_onnx(gau, X_train.astype(numpy.float32))
oinf32 = OnnxInference(onnxgau32, runtime="python")
ort_pred32 = oinf32.run({'X': X_test.astype(numpy.float32)})['GPmean']
numpy.squeeze(ort_pred32)[:25]
onnxgau64 = to_onnx(gau, X_train.astype(numpy.float64))
oinf64 = OnnxInference(onnxgau64, runtime="python")
ort_pred64 = oinf64.run({'X': X_test.astype(numpy.float64)})['GPmean']
numpy.squeeze(ort_pred64)[:25]
###Output
_____no_output_____
###Markdown
The differences between the predictions for single floats and double floats...
###Code
numpy.sort(numpy.sort(numpy.squeeze(ort_pred32 - ort_pred64)))[-5:]
###Output
_____no_output_____
###Markdown
Who's right or wrong... The differences between the predictions with the original model...
###Code
pred = gau.predict(X_test.astype(numpy.float64))
numpy.sort(numpy.sort(numpy.squeeze(ort_pred32 - pred)))[-5:]
numpy.sort(numpy.sort(numpy.squeeze(ort_pred64 - pred)))[-5:]
###Output
_____no_output_____
###Markdown
Double predictions clearly win.
###Code
# add -l 1 if nothing shows up
%onnxview onnxgau64
###Output
_____no_output_____
###Markdown
Saves...
Let's keep track of it.
###Code
with open("gpr_dot_product_boston_32.onnx", "wb") as f:
f.write(onnxgau32.SerializePartialToString())
from IPython.display import FileLink
FileLink('gpr_dot_product_boston_32.onnx')
with open("gpr_dot_product_boston_64.onnx", "wb") as f:
f.write(onnxgau64.SerializePartialToString())
FileLink('gpr_dot_product_boston_64.onnx')
###Output
_____no_output_____
###Markdown
Side by side
We may wonder where the discrepancies start. But for that, we need to do a side-by-side comparison.
###Code
from mlprodict.onnxrt.validate.side_by_side import side_by_side_by_values
sbs = side_by_side_by_values([(oinf32, {'X': X_test.astype(numpy.float32)}),
(oinf64, {'X': X_test.astype(numpy.float64)})])
from pandas import DataFrame
df = DataFrame(sbs)
# dfd = df.drop(['value[0]', 'value[1]', 'value[2]'], axis=1).copy()
df
###Output
_____no_output_____
###Markdown
The differences really start for output ``'O0'``, after the matrix multiplication. This multiplication mixes numbers with very different orders of magnitude, and that alone explains the discrepancies between doubles and floats on that particular model.
###Code
%matplotlib inline
ax = df[['name', 'v[1]']].iloc[1:].set_index('name').plot(kind='bar', figsize=(14,4), logy=True)
ax.set_title("Relative differences for each output between float32 and "
"float64\nfor a GaussianProcessRegressor");
###Output
_____no_output_____
###Markdown
Before going further, let's check how sensitive the trained model is to converting doubles into floats.
###Code
pg1 = gau.predict(X_test)
pg2 = gau.predict(X_test.astype(numpy.float32).astype(numpy.float64))
numpy.sort(numpy.sort(numpy.squeeze(pg1 - pg2)))[-5:]
###Output
_____no_output_____
###Markdown
Having float or double inputs should not matter. We confirm that with the model converted into ONNX.
###Code
p1 = oinf64.run({'X': X_test})['GPmean']
p2 = oinf64.run({'X': X_test.astype(numpy.float32).astype(numpy.float64)})['GPmean']
numpy.sort(numpy.sort(numpy.squeeze(p1 - p2)))[-5:]
###Output
_____no_output_____
###Markdown
Last verification.
###Code
sbs = side_by_side_by_values([(oinf64, {'X': X_test.astype(numpy.float32).astype(numpy.float64)}),
(oinf64, {'X': X_test.astype(numpy.float64)})])
df = DataFrame(sbs)
ax = df[['name', 'v[1]']].iloc[1:].set_index('name').plot(kind='bar', figsize=(14,4), logy=True)
ax.set_title("Relative differences for each output between float64 and float64 rounded to float32"
"\nfor a GaussianProcessRegressor");
###Output
_____no_output_____
###Markdown
ONNX graph, single or double floats
The notebook shows the discrepancies obtained by using double floats instead of single floats in two cases. The second one involves [GaussianProcessRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.GaussianProcessRegressor.html).
###Code
from jyquickhelper import add_notebook_menu
add_notebook_menu()
###Output
_____no_output_____
###Markdown
Simple case of a linear regression
A linear regression is simply a matrix multiplication followed by an addition: $Y=AX+B$. Let's train one with [scikit-learn](https://scikit-learn.org/stable/).
###Code
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
data = load_boston()
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
clr = LinearRegression()
clr.fit(X_train, y_train)
clr.score(X_test, y_test)
clr.coef_
clr.intercept_
###Output
_____no_output_____
###Markdown
Let's predict with *scikit-learn* and *python*.
###Code
ypred = clr.predict(X_test)
ypred[:5]
py_pred = X_test @ clr.coef_ + clr.intercept_
py_pred[:5]
clr.coef_.dtype, clr.intercept_.dtype
###Output
_____no_output_____
###Markdown
With ONNX
With *ONNX*, we would write this operation as follows... We still need to convert everything into single floats (float32).
###Code
%load_ext mlprodict
from skl2onnx.algebra.onnx_ops import OnnxMatMul, OnnxAdd
import numpy
onnx_fct = OnnxAdd(OnnxMatMul('X', clr.coef_.astype(numpy.float32)),
numpy.array([clr.intercept_]),
output_names=['Y'])
onnx_model32 = onnx_fct.to_onnx({'X': X_test.astype(numpy.float32)},
dtype=numpy.float32)
# add -l 1 if nothing shows up
%onnxview onnx_model32
###Output
_____no_output_____
###Markdown
The next line uses a python runtime to compute the prediction.
###Code
from mlprodict.onnxrt import OnnxInference
oinf = OnnxInference(onnx_model32)
ort_pred = oinf.run({'X': X_test.astype(numpy.float32)})['Y']
ort_pred[:5]
###Output
_____no_output_____
###Markdown
And here is the same with [onnxruntime](https://github.com/microsoft/onnxruntime)...
###Code
oinf = OnnxInference(onnx_model32, runtime="onnxruntime1")
ort_pred = oinf.run({'X': X_test.astype(numpy.float32)})['Y']
ort_pred[:5]
###Output
_____no_output_____
###Markdown
With double instead of single float
[ONNX](https://onnx.ai/) was originally designed for deep learning, which usually uses single floats, but that does not mean doubles cannot be used. Every number is converted into double floats.
###Code
onnx_fct = OnnxAdd(OnnxMatMul('X', clr.coef_.astype(numpy.float64)),
numpy.array([clr.intercept_]),
output_names=['Y'])
onnx_model64 = onnx_fct.to_onnx({'X': X_test.astype(numpy.float64)},
dtype=numpy.float64)
###Output
_____no_output_____
###Markdown
And now the *python* runtime...
###Code
oinf = OnnxInference(onnx_model64)
ort_pred = oinf.run({'X': X_test})['Y']
ort_pred[:5]
###Output
_____no_output_____
###Markdown
And the *onnxruntime* version of it, not fully supportive of double yet...
###Code
try:
oinf = OnnxInference(onnx_model64, runtime="onnxruntime1")
ort_pred = oinf.run({'X': X_test.astype(numpy.float64)})['Y']
ort_pred[:5]
except RuntimeError as e:
print(e)
###Output
[ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for the node Ad_Add:Add(7)
###Markdown
And now the GaussianProcessRegressor
This shows a case where the choice between single and double floats has a visible impact on the predictions.
###Code
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct
gau = GaussianProcessRegressor(alpha=10, kernel=DotProduct())
gau.fit(X_train, y_train)
from mlprodict.onnxrt import to_onnx
onnxgau32 = to_onnx(gau, X_train.astype(numpy.float32), dtype=numpy.float32)
oinf32 = OnnxInference(onnxgau32, runtime="python")
ort_pred32 = oinf32.run({'X': X_test.astype(numpy.float32)})['GPmean']
numpy.squeeze(ort_pred32)[:25]
onnxgau64 = to_onnx(gau, X_train.astype(numpy.float64), dtype=numpy.float64)
oinf64 = OnnxInference(onnxgau64, runtime="python")
ort_pred64 = oinf64.run({'X': X_test.astype(numpy.float64)})['GPmean']
numpy.squeeze(ort_pred64)[:25]
###Output
_____no_output_____
###Markdown
The differences between the predictions for single floats and double floats...
###Code
numpy.sort(numpy.sort(numpy.squeeze(ort_pred32 - ort_pred64)))[-5:]
###Output
_____no_output_____
###Markdown
Who's right or wrong... The differences between the predictions with the original model...
###Code
pred = gau.predict(X_test.astype(numpy.float64))
numpy.sort(numpy.sort(numpy.squeeze(ort_pred32 - pred)))[-5:]
numpy.sort(numpy.sort(numpy.squeeze(ort_pred64 - pred)))[-5:]
###Output
_____no_output_____
###Markdown
Double predictions clearly win.
###Code
# add -l 1 if nothing shows up
%onnxview onnxgau64
###Output
_____no_output_____
###Markdown
Saves...
Let's keep track of it.
###Code
with open("gpr_dot_product_boston_32.onnx", "wb") as f:
f.write(onnxgau32.SerializePartialToString())
from IPython.display import FileLink
FileLink('gpr_dot_product_boston_32.onnx')
with open("gpr_dot_product_boston_64.onnx", "wb") as f:
f.write(onnxgau64.SerializePartialToString())
FileLink('gpr_dot_product_boston_64.onnx')
###Output
_____no_output_____
###Markdown
Side by sideWe may wonder where the discrepencies start. But for that, we need to do a side by side.
###Code
from mlprodict.onnxrt.side_by_side import side_by_side_by_values
sbs = side_by_side_by_values([(oinf32, {'X': X_test.astype(numpy.float32)}),
(oinf64, {'X': X_test.astype(numpy.float64)})])
from pandas import DataFrame
df = DataFrame(sbs)
# dfd = df.drop(['value[0]', 'value[1]', 'value[2]'], axis=1).copy()
df
###Output
_____no_output_____
###Markdown
The differences really start for output ``'O0'``, after the matrix multiplication. This multiplication mixes numbers with very different orders of magnitude, and that alone explains the discrepancies between doubles and floats on that particular model.
###Code
%matplotlib inline
ax = df[['name', 'v[1]']].iloc[1:].set_index('name').plot(kind='bar', figsize=(14,4), logy=True)
ax.set_title("Relative differences for each output between float32 and float64\nfor a GaussianProcessRegressor");
###Output
_____no_output_____
###Markdown
Before going further, let's check how sensitive the trained model is to converting doubles into floats.
###Code
pg1 = gau.predict(X_test)
pg2 = gau.predict(X_test.astype(numpy.float32).astype(numpy.float64))
numpy.sort(numpy.sort(numpy.squeeze(pg1 - pg2)))[-5:]
###Output
_____no_output_____
###Markdown
Having float or double inputs should not matter. We confirm that with the model converted into ONNX.
###Code
p1 = oinf64.run({'X': X_test})['GPmean']
p2 = oinf64.run({'X': X_test.astype(numpy.float32).astype(numpy.float64)})['GPmean']
numpy.sort(numpy.sort(numpy.squeeze(p1 - p2)))[-5:]
###Output
_____no_output_____
###Markdown
Last verification.
###Code
sbs = side_by_side_by_values([(oinf64, {'X': X_test.astype(numpy.float32).astype(numpy.float64)}),
(oinf64, {'X': X_test.astype(numpy.float64)})])
df = DataFrame(sbs)
ax = df[['name', 'v[1]']].iloc[1:].set_index('name').plot(kind='bar', figsize=(14,4), logy=True)
ax.set_title("Relative differences for each output between float64 and float64 rounded to float32"
"\nfor a GaussianProcessRegressor");
###Output
_____no_output_____
###Markdown
Partial use of float64
###Code
onnxgau48 = to_onnx(gau, X_train.astype(numpy.float32), dtype=numpy.float32,
options={GaussianProcessRegressor: {'float64': True}})
%onnxview onnxgau48
oinf48 = OnnxInference(onnxgau48, runtime="python")
ort_pred48 = oinf48.run({'X': X_test.astype(numpy.float32)})['GPmean']
numpy.sort(numpy.sort(numpy.squeeze(ort_pred48 - ort_pred64)))[-5:]
sbs = side_by_side_by_values([(oinf48, {'X': X_test.astype(numpy.float32)}),
(oinf64, {'X': X_test.astype(numpy.float64)})])
df = DataFrame(sbs)
ax = df[['name', 'v[1]']].iloc[1:].set_index('name').plot(kind='bar', figsize=(14,4), logy=True)
ax.set_title("Relative differences for each output between float64 and float64 rounded to float32"
"\nfor a GaussianProcessRegressor");
df
###Output
_____no_output_____ |
CookieTTS/scripts/WaveGlow from Ground Truth.ipynb | ###Markdown
1 - Initialize WaveGlow and Load Checkpoint/Weights
###Code
# Load WaveGlow
def load_waveglow(
waveglow_path = r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_HDN\best_val_model",
config_fpath = r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_HDN\config.json",):
import json
def is_ax(config):
"""Quickly check if a model uses the Ax WaveGlow core by what's available in the config file."""
return True if 'upsample_first' in config.keys() else False
# Load config file
with open(config_fpath) as f:
data = f.read()
config = json.loads(data)
train_config = config["train_config"]
data_config = config["data_config"]
if 'preempthasis' not in data_config.keys():
data_config['preempthasis'] = 0.0
if 'use_logvar_channels' not in data_config.keys():
data_config['use_logvar_channels'] = False
if 'load_hidden_from_disk' not in data_config.keys():
data_config['load_hidden_from_disk'] = False
if not 'iso226_empthasis' in data_config.keys():
data_config["iso226_empthasis"] = False
dist_config = config["dist_config"]
data_config['n_mel_channels'] = config["waveglow_config"]['n_mel_channels'] if 'n_mel_channels' in config["waveglow_config"].keys() else 160
waveglow_config = {
**config["waveglow_config"],
'win_length': data_config['win_length'],
'hop_length': data_config['hop_length'],
'preempthasis': data_config['preempthasis'],
'n_mel_channels': data_config["n_mel_channels"],
'use_logvar_channels': data_config["use_logvar_channels"],
'load_hidden_from_disk': data_config["load_hidden_from_disk"],
'iso226_empthasis': data_config["iso226_empthasis"]
}
print(waveglow_config)
print(f"Config File from '{config_fpath}' successfully loaded.")
# import the correct model core
if is_ax(waveglow_config):
from CookieTTS._4_mtw.waveglow.efficient_model_ax import WaveGlow
else:
if waveglow_config["yoyo"]:
from CookieTTS._4_mtw.waveglow.efficient_model import WaveGlow
else:
from CookieTTS._4_mtw.waveglow.glow import WaveGlow
from CookieTTS._4_mtw.waveglow.denoiser import Denoiser
# initialize model
print(f"intializing WaveGlow model... ", end="")
waveglow = WaveGlow(**waveglow_config).cuda()
print(f"Done!")
# load checkpoint from file
print(f"loading WaveGlow checkpoint... ", end="")
checkpoint = torch.load(waveglow_path)
waveglow.load_state_dict(checkpoint['model']) # and overwrite initialized weights with checkpointed weights
    waveglow.cuda().eval() # move to GPU and switch to eval mode (half-precision conversion is commented out below)
#waveglow.half()
#waveglow.remove_weightnorm()
print(f"Done!")
print(f"initializing Denoiser... ", end="")
cond_channels = waveglow_config['n_mel_channels']*(waveglow_config['use_logvar_channels']+1)
denoiser = Denoiser(waveglow, n_mel_channels=cond_channels, mu=0.0, var=1.0, stft_device='cpu', speaker_dependant=False)
print(f"Done!")
waveglow_iters = checkpoint['iteration']
print(f"WaveGlow trained for {waveglow_iters} iterations")
speaker_lookup = checkpoint['speaker_lookup'] # ids lookup
training_sigma = train_config['sigma']
return waveglow, denoiser, speaker_lookup, training_sigma, waveglow_iters, waveglow_config, data_config
waveglow, denoiser, speaker_lookup, training_sigma, waveglow_iters, waveglow_config, data_config = load_waveglow(
waveglow_path = r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_DTW2\best_val_model",
config_fpath = r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_DTW2\config.json")
###Output
{'shift_spect': 0.0, 'scale_spect': 1.0, 'preceived_vol_scaling': False, 'waveflow': True, 'channel_mixing': 'permute', 'mix_first': False, 'n_flows': 8, 'n_group': 20, 'n_early_every': 16, 'n_early_size': 2, 'memory_efficient': 0.0, 'spect_scaling': False, 'upsample_mode': 'normal', 'WN_config': {'gated_unit': 'GTU', 'n_layers': 8, 'n_channels': 128, 'kernel_size_w': 7, 'kernel_size_h': 7, 'n_layers_dilations_w': None, 'n_layers_dilations_h': 1, 'speaker_embed_dim': 96, 'rezero': False, 'cond_layers': 3, 'cond_activation_func': 'lrelu', 'cond_out_activation_func': True, 'negative_slope': 0.5, 'cond_hidden_channels': 256, 'cond_kernel_size': 1, 'cond_padding_mode': 'zeros', 'seperable_conv': True, 'res_skip': True, 'merge_res_skip': False, 'upsample_mode': 'linear'}, 'n_mel_channels': 160, 'speaker_embed': 96, 'cond_layers': 5, 'cond_activation_func': 'lrelu', 'negative_slope': 0.25, 'cond_hidden_channels': 512, 'cond_output_channels': 256, 'cond_residual': True, 'cond_res_rezero': True, 'cond_padding_mode': 'zeros', 'upsample_first': False, 'cond_kernel_size': 5, 'win_length': 2400, 'hop_length': 600, 'preempthasis': 0.9, 'use_logvar_channels': True, 'load_hidden_from_disk': False, 'iso226_empthasis': False}
Config File from 'H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_DTW2\config.json' successfully loaded.
intializing WaveGlow model... Flow 0 using Normal Backprop
Flow 1 using Normal Backprop
Flow 2 using Normal Backprop
Flow 3 using Normal Backprop
Flow 4 using Normal Backprop
Flow 5 using Normal Backprop
Flow 6 using Normal Backprop
Flow 7 using Normal Backprop
Done!
loading WaveGlow checkpoint... Done!
initializing Denoiser... Done!
WaveGlow trained for 56000 iterations
###Markdown
2 - Setup STFT to generate mel spectrograms from audio files
###Code
# Setup for generating Spectrograms from Audio files
def load_mel(path):
if path.endswith('.wav') or path.endswith('.flac'):
audio, sampling_rate, max_audio_value = load_wav_to_torch(path)
if sampling_rate != stft.sampling_rate:
raise ValueError("{} {} SR doesn't match target {} SR".format(
sampling_rate, stft.sampling_rate))
audio_norm = audio / max_audio_value
audio_norm = audio_norm.unsqueeze(0)
audio_norm = torch.autograd.Variable(audio_norm, requires_grad=False)
melspec = stft.mel_spectrogram(audio_norm)
elif path.endswith('.npy'):
melspec = torch.from_numpy(np.load(path)).float()
else:
        raise ValueError("Unsupported file extension for '{}'".format(path))
return melspec
print('Initializing STFT...')
stft = TacotronSTFT(data_config['filter_length'], data_config['hop_length'], data_config['win_length'],
data_config['n_mel_channels'], data_config['sampling_rate'], data_config['mel_fmin'],
data_config['mel_fmax'])
print('Done!')
###Output
Initializing STFT...
Done!
###Markdown
3 - Reconstruct Audio from Audio Spectrogram using WaveGlow/Flow
###Code
waveglow_paths = [
# r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_AEF4.2_iso226\best_model",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_AEF4.1\best_model",
# r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_AEF\best_val_model",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_DTW2\best_val_model",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_6_Flow_512C_ssvae2_2\best_val_model",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_6_Flow_512C_ssvae2\best_model",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_DTW\best_model",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow\best_val_model_gt3",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_HDN\best_val_model",
]
config_fpaths = [
# r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_AEF4.2_iso226\config.json",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_AEF4.1\config.json",
# r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_AEF\config.json",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_DTW2\config.json",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_6_Flow_512C_ssvae2_2\config.json",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_6_Flow_512C_ssvae2\config.json",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_DTW\config.json",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow\config_original.json",
r"H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_HDN\config.json",
]
output_dirnames = [
# "AR_8_Flow_AEF4.2_iso226",
"AR_8_Flow_AEF4.1_gt",
# "AR_8_Flow_AEF_gt",
"AR_8_Flow_DTW2",
"AR_6_Flow_512C_ssvae2_2",
"AR_6_Flow_512C_ssvae2",
"AR_8_Flow_DTW",
"AR_8_Flow_gt3",
"AR_8_Flow_HDN",
]
exts = [
# '__*.npy',
'__*.npy',
# '__*.npy',
'__*.mel.npy',
'__*.mel.npy',
'__*.mel.npy',
'__*.mel.npy',
'__*.mel.npy',
'__*.hdn.npy',
]
folder_paths = [
r"H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1",
r"H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e2",
r"H:\ClipperDatasetV2\SlicedDialogue\FiM\S4\s4e12",
r"H:\ClipperDatasetV2\SlicedDialogue\FiM\S5\s5e18",
r"H:\ClipperDatasetV2\SlicedDialogue\FiM\S9\s9e8",
]
gt_ext = '.npy'
sigmas = [0.0, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0,]
denoise_strengths = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0]
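# Note (added comment): `sigmas` sweeps the standard deviation of the Gaussian prior that
# WaveGlow samples from at inference time (lower values mean less sampling noise), and
# `denoise_strengths` sweeps the strength of the Denoiser applied to each generated
# waveform further below.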
speaker_ids = [0,]
speaker_ids = [speaker_lookup[x] for x in speaker_ids] # map speaker ids to internel
speaker_ids = torch.tensor(speaker_ids).cuda().long()
use_DTW = False
display_audio = False
display_denoised_audio = False
save_outputs = True
for waveglow_path, config_fpath, output_dirname, ext in list(zip(waveglow_paths, config_fpaths, output_dirnames, exts))[:1]:
waveglow, denoiser, speaker_lookup, training_sigma, waveglow_iters, waveglow_config, data_config = load_waveglow(
waveglow_path=waveglow_path, config_fpath=config_fpath)
output_folder = r"D:\Downloads\infer\WaveFlow\\" + f"{output_dirname}" + f"_{waveglow_iters}"
audio_paths = [glob(os.path.join(folder_path, '**', f'*{ext}'), recursive=True) for folder_path in folder_paths]
audio_paths = [item for sublist in audio_paths for item in sublist]
    if ext in ('.npy', '__*.npy'):
audio_paths = [x for x in audio_paths if not (x.endswith('.hdn.npy') or x.endswith('.mel.npy') or x.endswith('0.npy') or x.endswith('gdur.npy') or x.endswith('genc_out.npy') or x.endswith('pdur.npy') or x.endswith('penc_out.npy'))]
print(f'Generating Audio from {len(audio_paths)} Files...')
for audio_path in audio_paths:
print(f"Audio Path:\n'{audio_path}'")
mel_outputs_postnet = load_mel(audio_path).cuda()
if not waveglow_config['use_logvar_channels'] and (mel_outputs_postnet.shape[0] == waveglow.n_mel_channels*2):
mel_outputs_postnet = mel_outputs_postnet.chunk(2, dim=0)[0]
mel_logvars_postnet = None
elif not waveglow_config['use_logvar_channels'] and (mel_outputs_postnet.shape[0] == waveglow.n_mel_channels):
mel_logvars_postnet = None
elif waveglow_config['use_logvar_channels'] and (mel_outputs_postnet.shape[0] == waveglow.n_mel_channels*2):
mel_outputs_postnet, mel_logvars_postnet = mel_outputs_postnet.chunk(2, dim=0)
        elif waveglow_config['use_logvar_channels'] and (mel_outputs_postnet.shape[0] == waveglow.n_mel_channels): # file only contains means; synthesize log-variances below
mel_logvars_postnet = mel_outputs_postnet.new_ones(mel_outputs_postnet.shape) * -4.9
else:
print(f"Saved file has Wrong Shape!\nPath: '{audio_path}'")
continue
if use_DTW and not waveglow_config['load_hidden_from_disk']:
gt_mel_outputs_postnet = load_mel(audio_path.replace(ext, gt_ext)).cuda()
mel_outputs_postnet = DTW(mel_outputs_postnet.unsqueeze(0), gt_mel_outputs_postnet.unsqueeze(0), 8, 7).squeeze(0)
if mel_logvars_postnet is not None:
mel_outputs_postnet = torch.cat((mel_outputs_postnet, mel_logvars_postnet), dim=0)
output_path = os.path.join(output_folder, os.path.splitext(os.path.split(audio_path)[-1])[0])
os.makedirs(output_path, exist_ok=True)
audios = []
save_path = os.path.join(output_path, 'Ground Truth.wav')
wav_path = audio_path.replace('.hdn.npy','.wav').replace('.mel.npy','.wav').replace('.npy','.wav')
shutil.copy(wav_path, save_path)
with torch.no_grad():
for i, sigma in enumerate(sigmas):
with torch.random.fork_rng(devices=[0,]):
torch.random.manual_seed(0)# use same Z / random seed during validation so results are more consistent and comparable.
audio = waveglow.infer(mel_outputs_postnet.unsqueeze(0), sigma=sigma, speaker_ids=speaker_ids, return_CPU=True).float().clamp(min=-0.999, max=0.999)
if (torch.isnan(audio) | torch.isinf(audio)).any():
print('inf or nan found in audio')
audio[torch.isnan(audio) | torch.isinf(audio)] = 0.0
#audio[:,-1]=1.0
audios.append(audio)
if display_audio:
ipd.display(ipd.Audio(audio[0].data.cpu().numpy(), rate=data_config['sampling_rate']))
if save_outputs:
save_path = os.path.join(output_path, f'denoise_{0.00:0.2f}_sigma_{sigma:0.2f}.wav')
write(save_path, data_config['sampling_rate'], (audio[0]* 2**15).data.cpu().numpy().astype('int16'))
for i, (audio, sigma) in enumerate(zip(audios, sigmas)):
for denoise_strength in denoise_strengths:
audio_denoised = denoiser(audio, speaker_ids=speaker_ids, strength=denoise_strength)[:, 0]
if (torch.isnan(audio) | torch.isinf(audio)).any():
print('inf or nan found in audio')
                assert not (torch.isinf(audio_denoised).any() or torch.isnan(audio_denoised).any())
#print(f"[Denoised Strength {denoise_strength}] [sigma {sigma}]")
if display_denoised_audio:
ipd.display(ipd.Audio(audio_denoised.cpu().numpy(), rate=data_config['sampling_rate']))
if save_outputs:
save_path = os.path.join(output_path, f'denoise_{denoise_strength:0.2f}_sigma_{sigma:0.2f}.wav')
write(save_path, data_config['sampling_rate'], (audio_denoised[0]* 2**15).data.cpu().numpy().astype('int16'))
print('')
###Output
{'shift_spect': 0.0, 'scale_spect': 1.0, 'preceived_vol_scaling': False, 'waveflow': True, 'channel_mixing': 'permute', 'mix_first': False, 'n_flows': 8, 'n_group': 20, 'n_early_every': 16, 'n_early_size': 2, 'memory_efficient': 0.0, 'spect_scaling': False, 'upsample_mode': 'normal', 'WN_config': {'gated_unit': 'GTU', 'n_layers': 8, 'n_channels': 256, 'kernel_size_w': 7, 'kernel_size_h': 7, 'n_layers_dilations_w': None, 'n_layers_dilations_h': 1, 'speaker_embed_dim': 0, 'rezero': False, 'transposed_conv_hidden_dim': 256, 'transposed_conv_kernel_size': [2, 3, 5], 'transposed_conv_scales': None, 'cond_layers': 0, 'cond_activation_func': 'lrelu', 'cond_out_activation_func': False, 'negative_slope': 0.5, 'cond_hidden_channels': 256, 'cond_kernel_size': 1, 'cond_padding_mode': 'zeros', 'seperable_conv': True, 'res_skip': True, 'merge_res_skip': False, 'upsample_mode': 'linear'}, 'n_mel_channels': 160, 'speaker_embed': 32, 'cond_layers': 4, 'cond_activation_func': 'lrelu', 'negative_slope': 0.1, 'cond_hidden_channels': 1024, 'cond_output_channels': 1024, 'cond_residual': '1x1conv', 'cond_res_rezero': True, 'cond_kernel_size': 1, 'cond_padding_mode': 'zeros', 'upsample_first': True, 'transposed_conv_hidden_dim': 1024, 'transposed_conv_kernel_size': [4, 9, 5], 'transposed_conv_scales': [2, 3, 5], 'transposed_conv_output_dim': 1024, 'transposed_conv_residual': True, 'transposed_conv_residual_linear': True, 'transposed_conv_res_rezero': True, 'group_conv_output_dim': 4096, 'win_length': 2400, 'hop_length': 600, 'preempthasis': 0.9, 'use_logvar_channels': False, 'load_hidden_from_disk': False, 'iso226_empthasis': False}
Config File from 'H:\TTCheckpoints\waveflow\4thLargeKernels\AR_8_Flow_AEF4.1\config.json' successfully loaded.
intializing WaveGlow model... Flow 0 using Normal Backprop
Flow 1 using Normal Backprop
Flow 2 using Normal Backprop
Flow 3 using Normal Backprop
Flow 4 using Normal Backprop
Flow 5 using Normal Backprop
Flow 6 using Normal Backprop
Flow 7 using Normal Backprop
Done!
loading WaveGlow checkpoint... Done!
initializing Denoiser... Done!
WaveGlow trained for 296076 iterations
Generating Audio from 571 Files...
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_09_35_Celestia_Annoyed__What have you done with the elements of harmony!_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_00_34_Cheerilee_Neutral__What do you notice about it__.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_00_34_Cheerilee_Neutral__What do you notice about it__.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_15_50_Twilight_Anxious Confused__What do you suppose has her so upset_ It's not like her_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_15_50_Twilight_Anxious Confused__What do you suppose has her so upset_ It's not like her_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_07_07_Twilight_Neutral__Don't listen to her, princess.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_14_11_Twilight_Confused__What_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_00_57_Scootaloo_Annoyed__And it is too chaos.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_15_58_Twilight_Anxious Whispering__Better pick up the pace Before the stress of this gets the better of all of us.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_13_42_Applejack_Anxious Sad__That just can't be the truth.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_05_28_Twilight_Anxious__Is this about the weather, and the animals' weird behavior_ What's happening out there_ why isn't my magic working_ is there.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_06_40_Celestia_Neutral__You six showed the full potential of the elements, By harnessing the magic of your friendship, to beat a mighty foe_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_06_40_Celestia_Neutral__You six showed the full potential of the elements, By harnessing the magic of your friendship, to beat a mighty foe_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_06_23_Celestia_Neutral__Where the elements are kept inside.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_14_03_Twilight_Neutral__Who were you talking to__.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_14_03_Twilight_Neutral__Who were you talking to__.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_14_17_Twilight_Anxious__Did applejack just.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_10_08_Twilight_Neutral__twists and turns! that's it!_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_10_08_Twilight_Neutral__twists and turns! that's it!_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_06_58_Twilight_Neutral__Princess celestia, you can count on_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_06_58_Twilight_Neutral__Princess celestia, you can count on_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_06_21_Celestia_Neutral__This, is canterlot tower.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_06_46_Celestia_Neutral__Although luna and i once wielded the elements, It is you who now control their power, and it is you who must defeat discord_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_06_46_Celestia_Neutral__Although luna and i once wielded the elements, It is you who now control their power, and it is you who must defeat discord_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_14_08_Applejack_Neutral__Nopony whatsoever.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_00_47_Cheerilee_Neutral__What do you suppose that represents_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_00_36_Apple Bloom_Neutral__It's got an evil claw!_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_00_36_Apple Bloom_Neutral__It's got an evil claw!_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_08_59_Rarity_Annoyed__I can't believe we're wasting our time Talking to a tacky window_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_08_59_Rarity_Annoyed__I can't believe we're wasting our time Talking to a tacky window_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_05_28_Twilight_Anxious__Is this about the weather, and the animals' weird behavior_ What's happening out there_ why isn't my magic working_ is there_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_05_28_Twilight_Anxious__Is this about the weather, and the animals' weird behavior_ What's happening out there_ why isn't my magic working_ is there_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_00_57_Scootaloo_Annoyed__And it is too chaos_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_00_57_Scootaloo_Annoyed__And it is too chaos_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_00_40_Cheerilee_Neutral__This creature is called a draconequus.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_10_19_Twilight_Neutral__Thanks, princess. we won't let you down.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_00_55_Scootaloo_Annoyed__Don't call me things i don't know the meaning of_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_00_55_Scootaloo_Annoyed__Don't call me things i don't know the meaning of_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_10_08_Twilight_Neutral__twists and turns! that's it!.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_18_28_Fluttershy_Happy__Not really, In fact, i think i'm awfully lucky To have friends who want me to be the best i can be_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_18_28_Fluttershy_Happy__Not really, In fact, i think i'm awfully lucky To have friends who want me to be the best i can be_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_13_42_Applejack_Anxious Sad__That just can't be the truth_.npy'
Saved file has Wrong Shape!
Path: 'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_13_42_Applejack_Anxious Sad__That just can't be the truth_.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_07_09_Twilight_Neutral__We'd be honored to use the elements of harmony again.npy'
Audio Path:
'H:\ClipperDatasetV2\SlicedDialogue\FiM\S2\s2e1\00_18_28_Fluttershy_Happy__Not really, In fact, i think i'm awfully lucky To have friends who want me to be the best i can be.npy'
###Markdown
(Testing) Blending GT and Pred Spectrograms
###Code
import torch
min_ = 110
max_ = 120
n_mel_channels = 160
gt_perc = ((torch.arange(1, n_mel_channels+1).float()-min_).clamp(0)/(max_-min_)).clamp(max=1.0)
print(gt_perc)
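# gt_perc is a per-channel weight that stays at 0 below mel channel `min_`, ramps linearly to 1
# between `min_` and `max_`, and stays at 1 above `max_`. A blended spectrogram could then be
# formed as (illustrative sketch, not from the original notebook; gt_spect / pred_spect are
# hypothetical [B, n_mel, T] tensors):
#   blended = gt_perc[None, :, None] * gt_spect + (1 - gt_perc[None, :, None]) * pred_spect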
###Output
_____no_output_____
###Markdown
(Testing) Dynamic Time Warping for GTA Alignment
###Code
import torch
import numpy as np
target = torch.rand(1, 2, 700)
pred = torch.rand(1, 2, 700)
@torch.jit.script
def DTW(batch_pred, batch_target, scale_factor: int, range_: int):
"""
    Calculates the ideal time-warp for each frame to minimize the L1 error from the target.
Params:
scale_factor: Scale factor for linear interpolation.
        Values greater than 1 allow blends of neighbouring frames to be used.
range_: Range around the target frame that predicted frames should be tested as possible candidates to output.
If range is set to 1, then predicted frames with more than 0.5 distance cannot be used. (where 0.5 distance means blending the 2 frames together).
"""
assert range_ % 2 == 1, 'range_ must be an odd integer.'
assert batch_pred.shape == batch_target.shape, 'pred and target shapes do not match.'
batch_pred_dtw = batch_pred * 0.
for i, (pred, target) in enumerate(zip(batch_pred, batch_target)):
pred = pred.unsqueeze(0)
target = target.unsqueeze(0)
# shift pred into all aligned forms that might produce improved L1
pred_pad = torch.nn.functional.pad(pred, (range_//2, range_//2))
pred_expanded = torch.nn.functional.interpolate(pred_pad, scale_factor=float(scale_factor), mode='linear', align_corners=False)# [B, C, T] -> [B, C, T*s]
p_shape = pred.shape
pred_list = []
for j in range(scale_factor*range_):
pred_list.append(pred_expanded[:,:,j::scale_factor][:,:,:p_shape[2]])
pred_dtw = pred.clone()
for pred_interpolated in pred_list:
new_l1 = torch.nn.functional.l1_loss(pred_interpolated, target, reduction='none').sum(dim=1, keepdim=True)
old_l1 = torch.nn.functional.l1_loss(pred_dtw, target, reduction='none').sum(dim=1, keepdim=True)
pred_dtw = torch.where(new_l1 < old_l1, pred_interpolated, pred_dtw)
batch_pred_dtw[i:i+1] = pred_dtw
return batch_pred_dtw
pred_dtw = DTW(pred, target, 4, 3)
print(torch.nn.functional.l1_loss(pred, target))
print(torch.nn.functional.l1_loss(pred_dtw, target))
import matplotlib
%matplotlib inline
import matplotlib.pylab as plt
import IPython.display as ipd
def plot_data(data, title=None, figsize=(20, 5)):
%matplotlib inline
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
plt.imshow(data, cmap='inferno', origin='lower',
interpolation='none')
ax.set_aspect('equal')
cax = fig.add_axes([0.12, 0.1, 0.78, 0.8])
cax.get_xaxis().set_visible(False)
cax.get_yaxis().set_visible(False)
cax.patch.set_alpha(0)
cax.set_frame_on(False)
plt.colorbar(orientation='vertical')
plt.show()
import torch
torch.tensor(-0.4).exp()
import random
filetext = open(r"G:\TwiBot\CookiePPPTTS\CookieTTS\_2_ttm\tacotron2\GTA_flist2\map_val.txt", "r").read().split("\n")
filter_str = [".mel100",".mel200",".mel300",".mel400",".mel500"]
filetext = [x for x in filetext if not any(str_ in x for str_ in filter_str)]
file_count = 20
rand_start = int(random.random()*len(filetext)) - file_count  # file_count must be defined before it is used here
rand_start = 10
for line in filetext[rand_start:rand_start+file_count]:
pred_mel_path = line.split("|")[1].replace("\n","").replace("/media/cookie/Samsung 860 QVO/", "H:\\")
mel_pred = torch.from_numpy(np.load(pred_mel_path)).float().unsqueeze(0)
mel_pred[:, 120:, :] = 0.0
mel_target = torch.from_numpy(np.load(pred_mel_path.replace('.mel.npy','.npy'))).float().unsqueeze(0)
mel_target[:, 120:, :] = 0.0
mel_pred_dtw = DTW(mel_pred, mel_target, scale_factor = 8, range_= 7)
print(mel_pred.shape)
print(
torch.nn.functional.mse_loss(mel_pred, mel_target),
torch.nn.functional.mse_loss(mel_pred_dtw, mel_target),
sep='\n')
start_frame = 0
end_frame = 999
plot_data(mel_pred[0][:,start_frame:end_frame].numpy())
plot_data(mel_target[0][:,start_frame:end_frame].numpy())
plot_data(mel_pred_dtw[0][:,start_frame:end_frame].numpy())
print("\n\n\n")
###Output
_____no_output_____
###Markdown
(Testing) Timestamps for Model inputs
###Code
alignments = torch.rand(1, 80, 12)
sequence = torch.rand(1, 12)
dur_frames = torch.histc(torch.argmax(alignments[0], dim=1).float(), min=0, max=sequence.shape[1]-1, bins=sequence.shape[1])# number of frames each letter taken the maximum focus of the model.
dur_seconds = dur_frames * (275.625/22050)# convert from frames to seconds
end_times = dur_seconds * 0.0# new empty list
for i, dur_second in enumerate(dur_seconds): # calculate the end times for each letter.
end_times[i] = end_times[i-1] + dur_second# by adding up the durations of all the letters that go before it
start_times = torch.nn.functional.pad(end_times, (1,0))[:-1]# calculate the start times by assuming the next letter starts the moment the last one ends.
for i, (dur, start, end) in enumerate(zip(dur_seconds, start_times, end_times)):
print(f"[Letter {i:02}]\nDuration:\t{dur:.3f}\nStart Time:\t{start:.3f}\nEnd Time:\t{end:.3f}\n")
###Output
_____no_output_____ |
notebooks/Monte_Carlo_COVID.ipynb | ###Markdown
This notebook demonstrates Monte Carlo simulations using 20 days of observed data on trends of Covid-19 cases in Singapore.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
df1 = pd.read_csv('corvid2019.csv')
df1.head()
df1.dtypes
df1['Date'] = pd.to_datetime(df1['Date'])
df1.head()
df1.shape[0]
df1['Count'].describe()
plt.hist(df1['Count'])
plt.show()
np.arange(8)
# assuming daily count comes from distribution following current observed data
np.random.choice(8, 5, p=[3/20, 3/20, 6/20, 5/20, 1/20, 0, 1/20, 1/20]).round()
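# (illustrative) the hard-coded probabilities above are just the relative frequencies of each
# daily count over the 20 observed days; they could also be derived directly from the data, e.g.:
#   probs = df1['Count'].value_counts(normalize=True).sort_index()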
# assuming the daily count follows a normal distribution fitted to the observed data (mean 2.35, std 1.81)
np.random.normal(2.35, 1.81, 5).round() # note that values can be less than 0
df1['Cumulative'].max()
max(df1['Date'].dt.date)
# create list of running dates
pd.DataFrame({'Date': pd.date_range(start='2012-02-12', periods=5, freq='D')})
###Output
_____no_output_____
###Markdown
How are we going to generate cumulative counts for the next few days?
###Code
num_days = 5
count_sim = np.random.choice(8, num_days, p=[3/20, 3/20, 6/20, 5/20, 1/20, 0, 1/20, 1/20]).round()
cuml_count_df = []
cuml_count = df1['Cumulative'].max()
for j in range(0,num_days):
cuml_count = cuml_count + count_sim[j]
cuml_count_df.append((cuml_count))
pd.DataFrame({'cum_count' : cuml_count_df})
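# An equivalent, more concise way to build the cumulative counts with NumPy (sketch):
pd.DataFrame({'cum_count': df1['Cumulative'].max() + np.cumsum(count_sim)})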
###Output
_____no_output_____
###Markdown
Generating the dataset with dates and daily counts and cumulative count
###Code
num_days = 5
count_sim_df = []
cuml_count_df = []
date_list = []
# generate count for next num_days
count_sim = np.random.choice(8, num_days, p=[3/20, 3/20, 6/20, 5/20, 1/20, 0, 1/20, 1/20]).round()
count_sim_df = pd.DataFrame({'Count' : count_sim})
# cumulative count
cuml_count = df1['Cumulative'].max()
for j in range(0,num_days):
cuml_count = cuml_count + count_sim[j]
cuml_count_df.append((cuml_count))
cuml_sim_df = pd.DataFrame({'Cumulative' : cuml_count_df})
# create running dates
date_list = pd.DataFrame({'Date': pd.date_range(start='2020-02-12', periods=num_days, freq='D')})
# combine all dataframes
df_sim = pd.concat([date_list, count_sim_df, cuml_sim_df], axis=1)
df_sim
df_sim_final = pd.DataFrame(columns=['Date', 'Count', 'Cumulative'])
df_sim_final = df_sim_final.append((df_sim))
df_sim_final
###Output
_____no_output_____
###Markdown
Running N simulations using current data distribution to simulate daily count
###Code
num_sim = 100
num_days = 5
# Define a list to keep all the results from each simulation that we want to analyze
df_sim_final = pd.DataFrame(columns=['Date', 'Count', 'Cumulative'])
df_sim = []
# Loop through many simulations
for i in range(num_sim):
count_sim_df = []
cuml_count_df = []
date_list = []
# generate count for next num_days
count_sim = np.random.choice(8, num_days, p=[3/20, 3/20, 6/20, 5/20, 1/20, 0, 1/20, 1/20]).round()
count_sim_df = pd.DataFrame({'Count' : count_sim})
# cumulative count
cuml_count = df1['Cumulative'].max()
for j in range(0,num_days):
cuml_count = cuml_count + count_sim[j]
cuml_count_df.append((cuml_count))
cuml_sim_df = pd.DataFrame({'Cumulative' : cuml_count_df})
# create running dates
date_list = pd.DataFrame({'Date': pd.date_range(start='2020-02-12', periods=num_days, freq='D')})
# combine all dataframes
df_sim = pd.concat([date_list, count_sim_df, cuml_sim_df], axis=1)
df_sim_final = df_sim_final.append((df_sim))
df_sim_final.tail()
df_sim_final.shape[0]
df = df1.copy()
list(df)
df = df.append(df_sim_final)
df.tail()
plt.plot('Date', 'Cumulative',data=df[df['Date']<='2020-02-11'])
plt.plot('Date', 'Cumulative',data=df[df['Date']>'2020-02-11'],color="orange")
plt.xticks(rotation=45)
plt.axvline(dt.datetime(2020, 2, 11),color = 'black',linestyle='--')
plt.axhline(y=47, color = 'black',linestyle='--')
###Output
C:\Users\chua1\Anaconda3\lib\site-packages\pandas\plotting\_converter.py:129: FutureWarning: Using an implicitly registered datetime converter for a matplotlib plotting method. The converter was registered by pandas on import. Future versions of pandas will require you to explicitly register matplotlib converters.
To register the converters:
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
warnings.warn(msg, FutureWarning)
###Markdown
Running N simulations using normal distribution to simulate daily count
###Code
num_sim = 100
num_days = 5
# Define a list to keep all the results from each simulation that we want to analyze
df_sim_final = pd.DataFrame(columns=['Date', 'Count', 'Cumulative'])
df_sim = []
# Loop through many simulations
for i in range(num_sim):
count_sim_df = []
cuml_count_df = []
date_list = []
# generate count for next num_days
    count_sim = np.random.normal(2.35, 1.81, num_days).round()
count_sim_df = pd.DataFrame({'Count' : count_sim})
# cumulative count
cuml_count = df1['Cumulative'].max()
for j in range(0,num_days):
cuml_count = cuml_count + max(count_sim[j],0)
cuml_count_df.append((cuml_count))
cuml_sim_df = pd.DataFrame({'Cumulative' : cuml_count_df})
# create running dates
date_list = pd.DataFrame({'Date': pd.date_range(start='2020-02-12', periods=num_days, freq='D')})
# combine all dataframes
df_sim = pd.concat([date_list, count_sim_df, cuml_sim_df], axis=1)
df_sim_final = df_sim_final.append((df_sim))
df = df1.copy()
df = df.append(df_sim_final)
plt.plot('Date', 'Cumulative',data=df[df['Date']<='2020-02-11'])
plt.plot('Date', 'Cumulative',data=df[df['Date']>'2020-02-11'],color="orange")
plt.xticks(rotation=45)
plt.axvline(dt.datetime(2020, 2, 11),color = 'black',linestyle='--')
plt.axhline(y=47, color = 'black',linestyle='--')
###Output
_____no_output_____ |
Week-07/4_Challenge_Clickbait_Recognition.ipynb | ###Markdown
Challenge - Clickbait Title Detection Background informationClickbait titles and thumbnails are plaguing the internet and lead to lower user satisfaction with services like YouTube or news servers. Due to the amount of new content on these sites, it is impossible to moderate content manually. That is why giants like Facebook (Meta), Twitter, Amazon or Google (Alphabet) are investing huge resources towards creating NLP systems that are able to curate the internet environment autonomously.To make our Clickbait Detection model we will use Bag of Words encoding and a sequential model. DataWe will use clickbait data, which you can download from our GitHub.It has 2 columns ("headline" - containing the titles & "clickbait" - containing the labels). As the separator, we use ";" because a comma can be problematic on some systems since commas are also used in the text.
###Code
#Importing required libraries and download NLTK resources
import numpy as np
from numpy import array
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Embedding
import nltk
nltk.download('all')
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.stem.porter import *
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer
#Load data into dataframe
DATA_PATH = './data/train.csv'
df = pd.read_csv(DATA_PATH, sep=';')  # the data description above states that ';' is used as the separator
df.dropna(subset = ["clickbait"], inplace=True)
# np.random.shuffle(df.values) can silently shuffle a copy for mixed-dtype frames; sample() shuffles reliably
df = df.sample(frac=1, random_state=1).reset_index(drop=True)
#Load corpus and labels
stop_words = set(stopwords.words('english'))
lemmatizer = WordNetLemmatizer()
###Output
_____no_output_____
###Markdown
PreprocessingIn NLP there are multiple ways to approach preprocessing. It is more or less up to us what kinds of preprocessing we want to do, and not all of them are always helpful.The most common preprocessing techniques are:- Removing stopwords- Lemmatization- Stemming
###Code
#Get all unique words
all_words = []
for index in range(len(df.get('clickbait'))):
is_clickbait = int(df.get('clickbait')[index])
if not is_clickbait:
continue
sentence = df.get('headline')[index]
words = [lemmatizer.lemmatize(i) for i in word_tokenize(sentence)]
for j in words:
if not j in stop_words:
all_words.append(j)
unique_words = set(all_words)
unique_words
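# The preprocessing list above also mentions stemming; a minimal illustration using NLTK's
# PorterStemmer (available via the nltk.stem.porter import above, not used further in this notebook):
stemmer = PorterStemmer()
print([stemmer.stem(w) for w in ["classification", "classified", "classifier"]])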
###Output
_____no_output_____
###Markdown
EmbeddingsCreating embeddings could also be seen as a form of preprocessing, and it is maybe the most important choice you make when building an NLP model. We are using the Bag of Words approach, which is very simplistic. There are better embeddings for this task, but there are situations when BoW is the best option.
###Code
#Create embeddings
countVectorizer = CountVectorizer(stop_words='english')
text_vector = countVectorizer.fit_transform(all_words).toarray()
#Split data into train/test sets (features = BoW vectors of the headlines, labels = clickbait flags)
from sklearn.model_selection import train_test_split
# NOTE: the original cell passed `text_vector` (per-word vectors) to fit(); here each row is one headline
X = countVectorizer.transform(df['headline'].astype(str)).toarray()
y = df['clickbait'].astype(int).values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
#Create model - a BoW vector has no time dimension, so a small dense network replaces the original LSTM sketch
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(X.shape[1],)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
#Compile model with a binary classification loss
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
#Fit model
model.fit(X_train, y_train, epochs=5, validation_split=0.1)
#Count accuracy on the held-out test set
model.evaluate(X_test, y_test)
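#Quick check (illustrative): score a new, made-up headline with the trained model
sample = countVectorizer.transform(["10 things you won't believe about cats"]).toarray()
print(model.predict(sample))  # close to 1 suggests clickbait, close to 0 suggests a normal headline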
###Output
_____no_output_____
###Markdown
Challenge - Clickbait Title Detection Background informationClickbait titles and thumbnails are plaguing the internet and lead to lower user satisfaction with services like YouTube or news servers. Due to the amount of new content on these sites, it is impossible to moderate content manually. That is why giants like Facebook (Meta), Twitter, Amazon or Google (Alphabet) are investing huge resources towards creating NLP systems that are able to curate the internet environment autonomously.To make our Clickbait Detection model we will use Bag of Words encoding and a sequential model. DataWe will use clickbait data, which you can download from our GitHub.It has 2 columns ("headline" - containing the titles & "clickbait" - containing the labels). As the separator, we use ";" because a comma can be problematic on some systems since commas are also used in the text.
###Code
#Importing required libraries and download NLTK resources
import numpy as np
from numpy import array
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Embedding
import nltk
nltk.download('all')
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.stem.porter import *
from nltk.corpus import stopwords
#Load data into dataframe
DATA_PATH = ''
df = pd.read_csv(DATA_PATH)
df.dropna(subset = ["clickbait"], inplace=True)
np.random.shuffle(df.values)
#Load corpus and labels
###Output
_____no_output_____
###Markdown
PreprocessingIn NLP there are multiple ways to approach preprocessing. It is more or less up to us what kinds of preprocessing we want to do, and not all of them are always helpful.The most common preprocessing techniques are:- Removing stopwords- Lemmatization- Stemming
###Code
#Get all unique words
all_words = []
unique_words = set(all_words)
###Output
_____no_output_____
###Markdown
EmbeddingsCreating embeddings could also be seen as a form of preprocessing, and it is maybe the most important choice you make when building an NLP model. We are using the Bag of Words approach, which is very simplistic. There are better embeddings for this task, but there are situations when BoW is the best option.
###Code
#Create embeddings
#Split data
from sklearn.model_selection import train_test_split
#Create model
#Compile model
#Fit model
#Count accuracy
###Output
_____no_output_____ |
sample_image_converter/readme.ipynb | ###Markdown
Image converterThis is a sample project that shows how to use `tornado_instant_webapi`.Firstly, run `image_convert_server.py` like this:
```bash
pip install git+https://github.com/Hiroshiba/tornado_instant_webapi
pip install requests
pip install Pillow  # image library
python image_convert_server.py
```
Then, you can send requests.
###Code
from io import BytesIO
from PIL import Image
import requests
url = 'http://localhost:8000/'
Image.open(open('sample.jpg', 'rb'))
###Output
_____no_output_____
###Markdown
In `image_converter/image_converter.py`, some static methods are written.
```py
class ImageConverter(object):
    ...
    @staticmethod
    def convert(image: Image, mode: str = None, matrix=None, dither: int = None, palette: int = 0, colors: int = 256):
        return image.convert(mode, matrix, dither, palette, colors)
    ...
```
If you want to use the `convert` method, you should post a query to `/convert`.
###Code
r = requests.post(url + 'convert', files=dict(image=open('sample.jpg', 'rb')), data=dict(mode='L'))
Image.open(BytesIO(r.content))
###Output
_____no_output_____
###Markdown
If you want to post a `list` or `dict` query, you can use JSON strings.
```py
    @staticmethod
    def resize(image: Image, size: tuple, resample: int = 0):
        return image.resize(size, resample)
```
###Code
r = requests.post(url + 'resize', files=dict(image=open('sample.jpg', 'rb')), data=dict(size='[64, 128]'))
Image.open(BytesIO(r.content))
###Output
_____no_output_____
###Markdown
If you want to add a new web API, you only need to add a method with type hints.
```py
    @staticmethod
    def effect_spread(image: Image, distance: int):
        return image.effect_spread(distance)
```
###Code
r = requests.post(url + 'effect_spread', files=dict(image=open('sample.jpg', 'rb')), data=dict(distance=10))
Image.open(BytesIO(r.content))
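# A roughly equivalent request from the command line (assuming the server accepts the same
# multipart form fields that `requests` sends here):
#   curl -F "image=@sample.jpg" -F "distance=10" http://localhost:8000/effect_spread --output out.png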
###Output
_____no_output_____ |
examples/seglearn_integration.ipynb | ###Markdown
Get the data
###Code
from tsflex.utils.data import load_empatica_data
df_tmp, df_acc, df_gsr, df_ibi = load_empatica_data(["tmp", "acc", "gsr", "ibi"])
from pandas.tseries.frequencies import to_offset
data = [df_tmp, df_acc, df_gsr, df_ibi]
for df in data:
print("Time-series:", df.columns.values)
print(df.shape)
try:
print("Sampling rate:", 1 / pd.to_timedelta(to_offset(pd.infer_freq(df.index))).total_seconds(), "Hz")
except:
print("Irregular sampling rate")
print()
###Output
Time-series: ['TMP']
(30200, 1)
Irregular sampling rate
Time-series: ['ACC_x' 'ACC_y' 'ACC_z']
(241620, 3)
Irregular sampling rate
Time-series: ['EDA']
(30204, 1)
Irregular sampling rate
Time-series: ['IBI']
(1230, 1)
Irregular sampling rate
###Markdown
Look at the data
###Code
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(
rows=len(data), cols=1, shared_xaxes=True,
subplot_titles=[df.columns.values[0].split('_')[0] for df in data],
vertical_spacing=0.1,
)
for plot_idx, df in enumerate(data, 1):
# Select first minute of data
sub_df = df.first('1min')
for col in df.columns:
fig.add_trace(
go.Scattergl(x=sub_df.index, y=sub_df[col].values, name=col, mode='markers'),
row=plot_idx, col=1
)
fig.update_layout(height=len(data)*200)
fig.show(renderer='iframe')
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(16,4))
for plot_idx, df in enumerate(data):
df.plot(kind='box', ax=axes[plot_idx])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
These visualizations indicate that some preprocessing might be necessary for the signals (some sort of clipping) tsflex processing This is roughly identical to the processing of [this paper notebook](https://github.com/predict-idlab/tsflex/blob/main/examples/tsflex_paper.ipynb)
###Code
import pandas as pd; import numpy as np; from scipy.signal import savgol_filter
from tsflex.processing import SeriesProcessor, SeriesPipeline
# Create the processing functions
def clip_data(sig: pd.Series, min_val=None, max_val=None) -> np.ndarray:
return np.clip(sig, a_min=min_val, a_max=max_val)
def smv(*sigs) -> pd.Series:
sig_prefixes = set(sig.name.split('_')[0] for sig in sigs)
result = np.sqrt(np.sum([np.square(sig) for sig in sigs], axis=0))
return pd.Series(result, index=sigs[0].index, name='|'.join(sig_prefixes)+'_'+'SMV')
# Create the series processors (with their keyword arguments)
tmp_clippper = SeriesProcessor(clip_data, series_names="TMP", max_val=35)
acc_savgol = SeriesProcessor(
savgol_filter, ["ACC_x", "ACC_y", "ACC_z"], window_length=33, polyorder=2
)
acc_smv = SeriesProcessor(smv, ("ACC_x", "ACC_y", "ACC_z"))
# Create the series pipeline & process the data
series_pipe = SeriesPipeline([tmp_clippper, acc_savgol, acc_smv])
series_pipe
out_data = series_pipe.process(data, drop_keys=["ACC_x", "ACC_y", "ACC_z"])
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(16,4))
for plot_idx, df in enumerate(out_data):
df.plot(kind='box', ax=axes[plot_idx])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
tsflex feature extraction with [seglearn](https://github.com/dmbee/seglearn) integration
###Code
# !pip install seglearn
###Output
_____no_output_____
###Markdown
> Useful link; > [Documentation of all of the seglearn features](https://dmbee.github.io/seglearn/feature_functions.html)[seglearn feature dictionaries](https://github.com/dmbee/seglearn/blob/master/seglearn/feature_functions.py) is how seglearn represents a collection of features. **=> requires wrapping this dictionary in a `seglearn_feature_dict_wrapper` for interoperability with tsflex.**As [seglearn feature-functions](https://github.com/dmbee/seglearn/blob/master/seglearn/feature_functions.py) are vectorized along the first axis (axis=0), we need to expand our window-data. => convert `1D np.array` to a `2D np.array` with all the window-data in `axis=1`
###Code
# This wrapper handles seglearn's feature dictionaries
from tsflex.features.integrations import seglearn_feature_dict_wrapper
# This wrapper does exactly that conversion
from tsflex.features.integrations import seglearn_wrapper
from tsflex.features import MultipleFeatureDescriptors, FeatureCollection
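# Minimal sketch (illustrative, not part of the original notebook) of why the window-data must
# sit in axis=1: seglearn feature functions reduce over axis 1 of a [n_windows, n_samples] array,
# which is exactly the conversion that `seglearn_wrapper` takes care of for tsflex.
import numpy as np
from seglearn.feature_functions import base_features
name, func = next(iter(base_features().items()))   # grab an arbitrary seglearn feature function
window = np.random.randn(300)                      # a single window of samples (1D)
print(name, func(np.expand_dims(window, axis=0)))  # expand to [1, 300] -> one value per window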
###Output
_____no_output_____
###Markdown
Using seglearn feature dictionaries
###Code
# Import base feature & all feature functions from seg-learn
from seglearn.feature_functions import base_features, all_features
###Output
_____no_output_____
###Markdown
Calculate the features for a seglearn feature dictionary. Note that:* `seglearn_feature_dict_wrapper` transforms this feature extraction settings object into a list of features that you can directly pass as the `function` argument of tsflex `MultipleFeatureDescriptors`.
###Code
basic_feats = MultipleFeatureDescriptors(
functions=seglearn_feature_dict_wrapper(base_features()),
series_names=["ACC_SMV", "EDA", "TMP"],
windows=["5min", "2.5min"],
strides="2min",
)
feature_collection = FeatureCollection(basic_feats)
feature_collection
features_df = feature_collection.calculate(out_data, return_df=True, show_progress=True)
features_df
###Output
_____no_output_____
###Markdown
Extract all seglearn features.
###Code
all_feats = MultipleFeatureDescriptors(
functions=seglearn_feature_dict_wrapper(all_features()),
series_names=["ACC_SMV", "EDA", "TMP"],
windows=["5min", "2.5min"],
strides="2min",
)
feature_collection = FeatureCollection(all_feats)
feature_collection
features_df = feature_collection.calculate(out_data, return_df=True, show_progress=True)
features_df
###Output
_____no_output_____
###Markdown
Plot the EDA features
###Code
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(
rows=2, cols=1, shared_xaxes=True,
subplot_titles=['Raw EDA data', 'EDA features'],
vertical_spacing=0.1
)
fig.add_trace(
go.Scattergl(x=df_gsr.index[::4*5], y=df_gsr['EDA'].values[::4*5], name='EDA', mode='markers'),
row=1, col=1
)
ibi_feats = [c for c in features_df.columns if 'EDA_' in c and 'w=2m30s_' in c]
for col in ibi_feats:
sub_df = features_df[[col]].dropna()
if not np.issubdtype(sub_df.values.dtype, np.number):
continue
fig.add_trace(
go.Scattergl(x=sub_df.index, y=sub_df[col].values, name=col, mode='markers'),
row=2, col=1
)
fig.update_layout(height=2*350)
fig.show(renderer='iframe')
###Output
_____no_output_____
###Markdown
Using basic seglearn features Wrapping seglearn features individually.
###Code
# Import base feature functions from seg-learn
from seglearn.feature_functions import base_features
basic_feats = MultipleFeatureDescriptors(
functions=[seglearn_wrapper(f, f_name) for f_name, f in base_features().items()],
series_names=["ACC_SMV", "EDA", "TMP"],
windows=["5min", "2.5min"],
strides="2min",
)
feature_collection = FeatureCollection(basic_feats)
feature_collection
features_df = feature_collection.calculate(out_data, return_df=True)
features_df
###Output
_____no_output_____
###Markdown
Plot the EDA features
###Code
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(
rows=2, cols=1, shared_xaxes=True,
subplot_titles=['Raw EDA data', 'EDA features'],
vertical_spacing=0.1,
)
fig.add_trace(
go.Scattergl(x=df_gsr.index[::4*5], y=df_gsr['EDA'].values[::4*5], name='EDA', mode='markers'),
row=1, col=1
)
ibi_feats = [c for c in features_df.columns if 'EDA_' in c and 'w=2m30s_' in c]
for col in ibi_feats:
sub_df = features_df[[col]].dropna()
fig.add_trace(
go.Scattergl(x=sub_df.index, y=sub_df[col].values, name=col, mode='markers'),
row=2, col=1
)
fig.update_layout(height=2*350)
fig.show(renderer='iframe')
###Output
_____no_output_____
###Markdown
Get the data
###Code
import pandas as pd
url = "https://github.com/predict-idlab/tsflex/raw/main/examples/data/empatica/"
df_tmp = pd.read_parquet(url+"tmp.parquet").set_index("timestamp")
df_acc = pd.read_parquet(url+"acc.parquet").set_index("timestamp")
df_gsr = pd.read_parquet(url+"gsr.parquet").set_index("timestamp")
df_ibi = pd.read_parquet(url+"ibi.parquet").set_index("timestamp")
from pandas.tseries.frequencies import to_offset
data = [df_tmp, df_acc, df_gsr, df_ibi]
for df in data:
print("Time-series:", df.columns.values)
print(df.shape)
try:
print("Sampling rate:", 1 / pd.to_timedelta(to_offset(pd.infer_freq(df.index))).total_seconds(), "Hz")
except:
print("Irregular sampling rate")
print()
###Output
Time-series: ['TMP']
(30200, 1)
Sampling rate: 4.0 Hz
Time-series: ['ACC_x' 'ACC_y' 'ACC_z']
(241620, 3)
Sampling rate: 32.0 Hz
Time-series: ['EDA']
(30204, 1)
Sampling rate: 4.0 Hz
Time-series: ['IBI']
(1230, 1)
Irregular sampling rate
###Markdown
Look at the data
###Code
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(
rows=len(data), cols=1, shared_xaxes=True,
subplot_titles=[df.columns.values[0].split('_')[0] for df in data]
)
for plot_idx, df in enumerate(data, 1):
# Select first minute of data
sub_df = df.first('1min')
for col in df.columns:
fig.add_trace(
go.Scattergl(x=sub_df.index, y=sub_df[col].values, name=col, mode='markers'),
row=plot_idx, col=1
)
fig.update_layout(height=len(data)*200)
fig.show(renderer='iframe')
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(16,4))
for plot_idx, df in enumerate(data):
df.plot(kind='box', ax=axes[plot_idx])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
These visualizations indicate that some preprocessing might be necessary for the signals (some sort of clipping) tsflex processing This is roughly identical to the processing of [this paper notebook](https://github.com/predict-idlab/tsflex/blob/main/examples/tsflex_paper.ipynb)
###Code
from tsflex.processing import SeriesProcessor, SeriesPipeline
# Import / create the processing functions
import numpy as np
from scipy.signal import savgol_filter
def clip_quantiles(sig: pd.Series, lower_q=0.01, upper_q=0.99) -> np.ndarray:
# Note that this function induces a data leakage
quantile_vals = np.quantile(sig, q=[lower_q, upper_q])
return np.clip(sig, *quantile_vals)
def smv(*sigs) -> pd.Series:
sig_prefixes = set(sig.name.split('_')[0] for sig in sigs)
result = np.sqrt(np.sum([np.square(sig) for sig in sigs], axis=0))
return pd.Series(result, index=sigs[0].index, name='|'.join(sig_prefixes)+'_'+'SMV')
# Create the series processors (with their keyword arguments)
clipper_tmp = SeriesProcessor(clip_quantiles, series_names="TMP", lower_q=0, upper_q=0.999)
savgol_eda = SeriesProcessor(savgol_filter, "EDA", window_length=5, polyorder=2)
savgol_acc = SeriesProcessor(savgol_filter, ["ACC_x", "ACC_y", "ACC_z"], window_length=33, polyorder=2)
smv_processor = SeriesProcessor(smv, ("ACC_x", "ACC_y", "ACC_z"))
# Create the series pipeline
series_pipe = SeriesPipeline(
processors=[clipper_tmp, savgol_eda, savgol_acc, smv_processor]
)
series_pipe
out_data = series_pipe.process(data, drop_keys=["ACC_x", "ACC_y", "ACC_z"])
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(16,4))
for plot_idx, df in enumerate(out_data):
df.plot(kind='box', ax=axes[plot_idx])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
tsflex feature extraction with [seglearn](https://github.com/dmbee/seglearn) integration
###Code
!pip install seglearn
###Output
_____no_output_____
###Markdown
> Useful link; > [Documentation of all of the seglearn features](https://dmbee.github.io/seglearn/feature_functions.html)As [seglearn feature-functions](https://github.com/dmbee/seglearn/blob/master/seglearn/feature_functions.py) are vectorized along the first axis (axis=0), we need to expand our window-data. => convert `1D np.array` to a `2D np.array` with all the window-data in `axis=1`
###Code
# This wrapper does exactly that conversion
from tsflex.features.integrations import seglearn_wrapper
###Output
_____no_output_____
###Markdown
Using basic seglearn features
###Code
from tsflex.features import MultipleFeatureDescriptors, FeatureCollection
# Import base feature functions from seg-learn
from seglearn.feature_functions import base_features
basic_feats = MultipleFeatureDescriptors(
functions=[seglearn_wrapper(f, f_name) for f_name, f in base_features().items()],
series_names=["ACC_SMV", "EDA", "TMP"],
windows=["5min", "2.5min"],
strides="2min",
)
feature_collection = FeatureCollection(basic_feats)
feature_collection
features_df = feature_collection.calculate(out_data, return_df=True)
features_df
###Output
_____no_output_____
###Markdown
Plot the EDA features
###Code
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(
rows=2, cols=1, shared_xaxes=True,
subplot_titles=['Raw EDA data', 'EDA features']
)
fig.add_trace(
go.Scattergl(x=df_gsr.index[::4*5], y=df_gsr['EDA'].values[::4*5], name='EDA', mode='markers'),
row=1, col=1
)
ibi_feats = [c for c in features_df.columns if 'EDA_' in c and 'w=2m30s_' in c]
for col in ibi_feats:
sub_df = features_df[[col]].dropna()
fig.add_trace(
go.Scattergl(x=sub_df.index, y=sub_df[col].values, name=col, mode='markers'),
row=2, col=1
)
fig.update_layout(height=2*350)
fig.show(renderer='iframe')
###Output
_____no_output_____
###Markdown
Using all seglearn features
###Code
# Import all feature functions from seg-learn
from seglearn.feature_functions import all_features
from tsflex.features import FeatureCollection, MultipleFeatureDescriptors
all_feats = MultipleFeatureDescriptors(
functions=[
seglearn_wrapper(f, k) for k, f in all_features().items() if k != "hist4"
]
# As hist returns `bins` number of outputs => `bins` number of output_names should be passed
+ [
seglearn_wrapper(
all_features()["hist4"], output_names=[f"hist{i}" for i in range(1, 5)]
)
],
series_names=["ACC_SMV", "EDA", "TMP"],
windows=["5min", "2.5min"],
strides=["2.5min"],
)
feature_collection = FeatureCollection(all_feats)
feature_collection
len(all_features())
features_df = feature_collection.calculate(out_data, return_df=True)
features_df
###Output
_____no_output_____
###Markdown
Plot the EDA features
###Code
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(
rows=2, cols=1, shared_xaxes=True,
subplot_titles=['Raw EDA data', 'EDA features']
)
fig.add_trace(
go.Scattergl(x=df_gsr.index[::4*5], y=df_gsr['EDA'].values[::4*5], name='EDA', mode='markers'),
row=1, col=1
)
ibi_feats = [c for c in features_df.columns if 'EDA_' in c and 'w=2m30s_' in c]
for col in ibi_feats:
sub_df = features_df[[col]].dropna()
fig.add_trace(
go.Scattergl(x=sub_df.index, y=sub_df[col].values, name=col, mode='markers'),
row=2, col=1
)
fig.update_layout(height=2*350)
fig.show(renderer='iframe')
###Output
_____no_output_____ |
E-箱线图/基础箱线图MA_E_01/MA_E_01.ipynb | ###Markdown
Matplotlib Gallery: Basic Boxplots (WeChat official account: 可视化图鉴)
###Code
import matplotlib
print(matplotlib.__version__) # check the Matplotlib version
import pandas as pd
print(pd.__version__) # check the pandas version
import numpy as np
print(np.__version__) # check the numpy version
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['STHeiti'] # set a font that can display Chinese characters
###Output
3.3.3
1.2.0
1.19.4
###Markdown
Note: the code has passed testing in the following environment:- Python 3.7.1- Matplotlib == 3.3.3- pandas == 1.2.0- numpy == 1.19.4. Because versions differ, there may be some syntax differences; if you get an error, first check the spelling and that the versions match! Basic boxplot
###Code
#simulate four groups of data
y1 = [1,2,3,4,5,6]
y2 = [2,3,4,4,5,7]
y3 = [4,4,3,2,2,3]
y4 = [5,6,6,3,2,2]
data = pd.DataFrame({"A":y1, "B":y2, "C":y3, "D":y4})
fig1, ax1 = plt.subplots(figsize=(9,8))
plt.xlabel("我是x轴", fontsize = 20)
plt.ylabel("我是y轴", fontsize = 20)
plt.title('我是标题', fontsize = 20)
plt.tick_params(labelsize=13)
ax1.boxplot(data)
plt.savefig("O_01.png")
plt.show()
###Output
_____no_output_____
###Markdown
The figure above is the most basic Matplotlib boxplot. The other boxplot parameters are described below:
>matplotlib.pyplot.boxplot(x, notch=None, sym=None, vert=None, whis=None, positions=None, widths=None, patch_artist=None, bootstrap=None, usermedians=None, conf_intervals=None, meanline=None, showmeans=None, showcaps=None, showbox=None, showfliers=None, boxprops=None, labels=None, flierprops=None, medianprops=None, meanprops=None, capprops=None, whiskerprops=None, manage_xticks=True, autorange=False, zorder=None, hold=None, data=None)
Some of the parameters are explained below:
- x: the array of data
- notch: if True, a notched boxplot is produced; otherwise a rectangular boxplot is drawn. The notch represents a confidence interval (CI) around the median.
- sym: the symbol used for flier (outlier) points
- vert: if True, the boxes are drawn vertically; if False, everything is drawn horizontally
- whis: float, determines how far the whiskers reach beyond the first and third quartiles
- bootstrap: int, specifies whether to bootstrap the confidence interval around the median for notched boxplots
- usermedians: array-like whose first dimension (or length) is compatible with x. Every element that is not None overrides the median computed by matplotlib; when an element is None, the median is computed normally
- conf_intervals: array-like whose first dimension (or length) is compatible with x and whose second dimension is 2. When an element is not None, it overrides the notch locations computed by matplotlib (if notch is True); when an element is None, the notches are computed by the method specified by the other kwargs (e.g. bootstrap)
- positions: sets the positions of the boxes. Ticks and limits are set automatically to match. Defaults to range(1, N + 1), where N is the number of boxes to draw
- widths: sets the width of each box with a scalar or sequence. The default is 0.5, or 0.15 * (distance between the extreme positions) if that is smaller
- patch_artist: bool; if False the boxes are drawn with Line2D artists, otherwise they are drawn with Patch artists
- labels: labels for each dataset. The length must match the dimension of x
- manage_xticks: bool, optional (True); adjust the tick locations and labels
- autorange: bool, optional (False); when True and the data are distributed such that the 25th and 75th percentiles are equal, whis is set to "range" so that the whisker ends are at the minimum and maximum of the data
- meanline: bool, optional (False); if True (and showmeans is True), will try to render the mean as a line spanning the full width of the box according to meanprops. Not recommended if shownotches is also True. Otherwise, the mean is shown as a point
- zorder: sets the zorder of the boxplot
Basic boxplot - changing orientation, changing colors, adding labels
###Code
y1 = [1,2,3,4,5,6]
y2 = [2,3,4,4,5,7]
y3 = [4,4,3,2,2,3]
y4 = [5,6,6,3,2,2]
data = pd.DataFrame({"A":y1, "B":y2, "C":y3, "D":y4})
green_diamond = dict(markerfacecolor='g', marker='D')  # style for the flier (outlier) markers
labels = list(data.columns)  # box labels; these were referenced below but never defined in the original
fig3, ax3 = plt.subplots(figsize=(9,8))
ax3.set_title('我是标题', fontsize = 20)
plt.xlabel("我是x轴", fontsize = 20)
plt.ylabel("我是y轴", fontsize = 20)
plt.tick_params(labelsize=13)
ax3.boxplot(data,labels = labels ,flierprops=green_diamond, vert=False, patch_artist = True,boxprops = {'color':'black','facecolor':'yellow'})
plt.savefig("O_02.png")
plt.show()
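# Extra illustration (not in the original notebook): a few more options from the parameter list
# above - notched boxes, mean markers and narrower boxes.
fig4, ax4 = plt.subplots(figsize=(9, 8))
ax4.boxplot(data, labels=labels, notch=True, showmeans=True, widths=0.3)
plt.show()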
###Output
_____no_output_____
###Markdown
Basic boxplot - showing outliers
###Code
import numpy as np
import matplotlib.pyplot as plt
fig3, ax3 = plt.subplots(figsize=(9,8))
ax3.set_title('我是标题', fontsize = 20)
plt.xlabel("我是x轴", fontsize = 20)
plt.ylabel("我是y轴", fontsize = 20)
plt.tick_params(labelsize=13)
blue_diamond = dict(markerfacecolor='b', marker='D')
np.random.seed(100)
data=np.random.normal(size=(1500,4), loc=0, scale=1) # four groups (columns) of data, 1500 samples each
labels=['A','B','C','D']
plt.boxplot(data,labels=labels,flierprops=blue_diamond,patch_artist = True,boxprops = {'color':'black','facecolor':'grey'})
plt.savefig("O_03.png")
plt.show()
###Output
_____no_output_____ |
SMS_text_classification.ipynb | ###Markdown
*Note: You are currently reading this using Google Colaboratory which is a cloud-hosted version of Jupyter Notebook. This is a document containing both text cells for documentation and runnable code cells. If you are unfamiliar with Jupyter Notebook, watch this 3-minute introduction before starting this challenge: https://www.youtube.com/watch?v=inN8seMm7UI*---In this challenge, you need to create a machine learning model that will classify SMS messages as either "ham" or "spam". A "ham" message is a normal message sent by a friend. A "spam" message is an advertisement or a message sent by a company.You should create a function called `predict_message` that takes a message string as an argument and returns a list. The first element in the list should be a number between zero and one that indicates the likeliness of "ham" (0) or "spam" (1). The second element in the list should be the word "ham" or "spam", depending on which is most likely.For this challenge, you will use the [SMS Spam Collection dataset](http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/). The dataset has already been grouped into train data and test data.The first two cells import the libraries and data. The final cell tests your model and function. Add your code in between these cells.
###Code
# import libraries
# try:
# # %tensorflow_version only exists in Colab.
# !pip install tf-nightly
# except Exception:
# pass
# !pip install tensorflow
# !pip install numpy --upgrade --ignore-installed
!pip install -q tensorflow-text
import requests
import tensorflow_text as tf_text
import tensorflow as tf
import pandas as pd
from tensorflow import keras
# !pip install tensorflow-datasets
import tensorflow_datasets as tfds
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
print(tf.__version__)
# get data files
!wget https://cdn.freecodecamp.org/project-data/sms/train-data.tsv
!wget https://cdn.freecodecamp.org/project-data/sms/valid-data.tsv
train_file_path = "train-data.tsv"
test_file_path = "valid-data.tsv"
train_file_path = "https://cdn.freecodecamp.org/project-data/sms/train-data.tsv"
test_file_path = "https://cdn.freecodecamp.org/project-data/sms/valid-data.tsv"
# train_files = pd.read_csv('https://cdn.freecodecamp.org/project-data/sms/train-data.tsv', header=0, names=['label', 'message'], sep='\t')
train_files = pd.read_csv('train-data.tsv', sep='\t', header=None, names=['Classification', 'Message'])
valid_files = pd.read_csv('valid-data.tsv', sep='\t', header=None, names=['Classification', 'Message'])
# Configure parameters
batch_size = 32
epochs = 15
BUFFER_SIZE = 10000
!mkdir trainData
!mv train-data.tsv trainData
!mkdir valData
!mv valid-data.tsv valData
# this shows there are duplicates that probably need to be removed from the datasets
train_files.describe()
duplicatedRow = train_files[train_files.duplicated()]
print(duplicatedRow[:10])
# Remove duplicate entries
train_files = train_files.drop_duplicates()
valid_files = valid_files.drop_duplicates()
train_files.describe()
train_files['Classification'].value_counts()
valid_files['Classification'].value_counts()
# ham messages are much more frequent than spam, so we should balance the data before training the model with it
lenTrainSpam = len(train_files[train_files['Classification'] == 'spam'])
# gather lenTrainSpam amount of random samples where message is considered ham
trainHam = train_files[train_files['Classification'] == 'ham'].sample(n=lenTrainSpam)
trainSpam = train_files[train_files['Classification'] == 'spam'].sample(n=lenTrainSpam)
# Merge lenTrainSpam amount of ham and spam samples into one dataframe
trainSet = trainHam.merge(trainSpam,how='outer', on=None)
# Shuffle dataset
trainSet = trainSet.sample(len(trainSet), random_state=1)
trainSet
###Output
_____no_output_____
###Code
# Encode hams as 0, spams as 1 to allow for a binary model
trainSet['Classification'] = trainSet['Classification'].map({'ham':0, 'spam':1})
valid_files['Classification'] = valid_files['Classification'].map({'ham':0, 'spam':1})
trainSet.head()
vocab_size = 1000
embedding_dim = 16
max_length = 100
trunc_type='post'
padding_type='post'
oov_tok = "<OOV>"
tokenizer = Tokenizer(num_words = vocab_size, char_level=False, oov_token=oov_tok)
tokenizer.fit_on_texts(trainSet['Message'])
X_train = tokenizer.texts_to_sequences(trainSet['Message'])
X_test = tokenizer.texts_to_sequences(valid_files['Message'])
X_train_padded = pad_sequences(X_train, maxlen=max_length, padding=padding_type, truncating=trunc_type)
X_test_padded = pad_sequences(X_test, maxlen=max_length, padding=padding_type, truncating=trunc_type)
Y_test = valid_files['Classification']
Y_train = trainSet['Classification']
Y_train_array = np.array(Y_train)
Y_test_array = np.array(Y_test)
word_index = tokenizer.word_index
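# Quick sanity check (illustrative): see how a toy message is tokenized and padded.
# Words outside the fitted vocabulary are mapped to the <OOV> token index.
example = tokenizer.texts_to_sequences(["free entry win cash now"])
print(example, pad_sequences(example, maxlen=max_length, padding=padding_type).shape)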
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(8, activation='relu'), tf.keras.layers.Dropout(0.25),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
num_epochs = 55
history = model.fit(X_train_padded, Y_train_array, epochs=num_epochs, validation_data=(X_test_padded, Y_test_array))
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(num_epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# function to predict messages based on model
# (should return list containing prediction and label, ex. [0.008318834938108921, 'ham'])
def predict_message(pred_text):
predictions = []
pred_text = pd.Series(pred_text)
tokenized_string = tokenizer.texts_to_sequences(pred_text)
pad_text_sequence = pad_sequences(tokenized_string, maxlen=max_length)
result = model.predict(pad_text_sequence)[0]
if result[0] <= 0.5:
predictions = [result[0], "ham"]
else:
predictions = [result[0], "spam"]
print(predictions)
return predictions
pred_text = "how are you doing today?"
prediction = predict_message(pred_text)
print(prediction)
# Run this cell to test your function and model. Do not modify contents.
def test_predictions():
test_messages = ["how are you doing today",
"sale today! to stop texts call 98912460324",
"i dont want to go. can we try it a different day? available sat",
"our new mobile video service is live. just install on your phone to start watching.",
"you have won £1000 cash! call to claim your prize.",
"i'll bring it tomorrow. don't forget the milk.",
"wow, is your arm alright. that happened to me one time too"
]
test_answers = ["ham", "spam", "ham", "spam", "spam", "ham", "ham"]
passed = True
for msg, ans in zip(test_messages, test_answers):
prediction = predict_message(msg)
if prediction[1] != ans:
passed = False
if passed:
print("You passed the challenge. Great job!")
else:
print("You haven't passed yet. Keep trying.")
test_predictions()
###Output
[0.01832077, 'ham']
[0.58677477, 'spam']
[0.018805355, 'ham']
[0.9959966, 'spam']
[0.99592066, 'spam']
[0.010900795, 'ham']
[0.017705321, 'ham']
You passed the challenge. Great job!
|
dev/examples/ulmfit.ipynb | ###Markdown
ULMFiT Finetune a pretrained Language Model First we get our data and tokenize it.
###Code
path = untar_data(URLs.IMDB)
tokenize_folder(path, folders=['train', 'test', 'unsup'])
path = untar_data(URLs.IMDB).parent/'imdb_tok'
count = pickle.load(open(path/'counter.pkl', 'rb'))
vocab = make_vocab(count)
texts = get_files(path, extensions=['.txt'])
len(texts)
###Output
_____no_output_____
###Markdown
Then we put it in a `DataSource`. For a language model, we don't have targets, so there is only one transform to numericalize the texts. Note that `tokenize_df` returns the count of the words in the corpus to make it easy to create a vocabulary.
###Code
def read_file(f): return L(f.read().split(' '))
splits = RandomSplitter(valid_pct=0.1)(texts)
vocab = make_vocab(count)
dsrc = DataSource(texts, [[read_file, Numericalize(vocab)]], splits=splits, dl_type=LMDataLoader)
###Output
_____no_output_____
###Markdown
Then we use that `DataSource` to create a `DataBunch`. Here the class of `TfmdDL` we need to use is `LMDataLoader` which will concatenate all the texts in a source (with a shuffle at each epoch for the training set), split it in `bs` chunks then read continuously through it.
###Code
bs,sl=256,80
dbunch_lm = dsrc.databunch(bs=bs, seq_len=sl, val_bs=bs, after_batch=Cuda)
dbunch_lm.show_batch()
###Output
_____no_output_____
###Markdown
Then we have a convenience method to directly grab a `Learner` from it, using the `AWD_LSTM` architecture.
###Code
opt_func = partial(Adam, wd=0.1)
learn = language_model_learner(dbunch_lm, AWD_LSTM, vocab, opt_func=opt_func, metrics=[accuracy, Perplexity()], path=path)
learn = learn.to_fp16(clip=0.1)
learn.fit_one_cycle(1, 2e-2, moms=(0.8,0.7,0.8))
learn.save('stage1')
learn.load('stage1');
learn.unfreeze()
learn.fit_one_cycle(10, 2e-3, moms=(0.8,0.7,0.8))
###Output
_____no_output_____
###Markdown
Once we have fine-tuned the pretrained language model to this corpus, we save the encoder since we will use it for the classifier.
###Code
learn.save_encoder('finetuned1')
###Output
_____no_output_____
###Markdown
Use it to train a classifier
###Code
texts = get_files(path, extensions=['.txt'], folders=['train', 'test'])
splits = GrandparentSplitter(valid_name='test')(texts)
###Output
_____no_output_____
###Markdown
For classification, we need to use two sets of transforms: one to numericalize the texts and the other to encode the labels as categories.
###Code
dsrc = DataSource(texts, [[read_file, Numericalize(vocab)], [parent_label, Categorize()]], splits=splits, dl_type=SortedDL)
bs = 64
dbunch = dsrc.databunch(before_batch=pad_input, after_batch=Cuda, bs=bs)
dbunch.show_batch(max_n=2)
###Output
_____no_output_____
###Markdown
Then we once again have a convenience function to create a classifier from this `DataBunch` with the `AWD_LSTM` architecture.
###Code
opt_func = partial(Adam, wd=0.1)
learn = text_classifier_learner(dbunch, AWD_LSTM, vocab, metrics=[accuracy], path=path, drop_mult=0.5, opt_func=opt_func)
###Output
_____no_output_____
###Markdown
We load our pretrained encoder.
###Code
learn = learn.load_encoder('finetuned1')
learn = learn.to_fp16(clip=0.1)
###Output
_____no_output_____
###Markdown
Then we can train with gradual unfreezing and differential learning rates.
###Code
lr = 1e-1 * bs/128
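# the base learning rate is scaled linearly with the batch size, using bs=128 as the reference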
learn.fit_one_cycle(1, lr, moms=(0.8,0.7,0.8), wd=0.1)
learn.freeze_to(-2)
lr /= 2
learn.fit_one_cycle(1, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
learn.freeze_to(-3)
lr /= 2
learn.fit_one_cycle(1, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
learn.unfreeze()
lr /= 5
learn.fit_one_cycle(2, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
###Output
_____no_output_____
###Markdown
ULMFiT Finetune a pretrained Language Model First we get our data and tokenize it.
###Code
path = untar_data(URLs.IMDB)
tokenize_folder(path, folders=['train', 'test', 'unsup'])
path = untar_data(URLs.IMDB).parent/'imdb_tok'
count = pickle.load(open(path/'counter.pkl', 'rb'))
vocab = make_vocab(count)
texts = get_files(path, extensions=['.txt'])
len(texts)
###Output
_____no_output_____
###Markdown
Then we put it in a `DataSource`. For a language model, we don't have targets, so there is only one transform to numericalize the texts. Note that `tokenize_df` returns the count of the words in the corpus to make it easy to create a vocabulary.
###Code
def read_file(f): return L(f.read().split(' '))
splits = RandomSplitter()(texts)
vocab = make_vocab(count)
dsrc = DataSource(texts, [[read_file, Numericalize(vocab)]], filts=splits, dl_type=LMDataLoader)
###Output
_____no_output_____
###Markdown
Then we use that `DataSource` to create a `DataBunch`. Here the class of `TfmdDL` we need to use is `LMDataLoader` which will concatenate all the texts in a source (with a shuffle at each epoch for the training set), split it in `bs` chunks then read continuously through it.
###Code
bs,sl=256,80
dbunch_lm = dsrc.databunch(bs=bs, seq_len=sl, val_bs=bs, after_batch=Cuda)  # `lens` was never defined in this notebook, so it is omitted here
dbunch_lm.show_batch()
###Output
_____no_output_____
###Markdown
Then we have a convenience method to directly grab a `Learner` from it, using the `AWD_LSTM` architecture.
###Code
opt_func = partial(Adam, wd=0.1)
learn = language_model_learner(dbunch_lm, AWD_LSTM, vocab, opt_func=opt_func, metrics=[accuracy, Perplexity()], path=path)
learn.to_fp16(clip=0.1)
learn.fit_one_cycle(1, 2e-2, moms=(0.8,0.7,0.8))
learn.save('stage1')
learn.load('stage1')
learn.opt = learn.create_opt() #Need this because of FP16 for now
learn.unfreeze()
learn.fit_one_cycle(10, 2e-3, moms=(0.8,0.7,0.8))
###Output
_____no_output_____
###Markdown
Once we have fine-tuned the pretrained language model to this corpus, we save the encoder since we will use it for the classifier.
###Code
learn.save_encoder('finetuned')
###Output
_____no_output_____
###Markdown
Use it to train a classifier
###Code
texts = get_files(path, extensions=['.txt'], folders=['train', 'test'])
splits = GrandparentSplitter(valid_name='test')(texts)
###Output
_____no_output_____
###Markdown
For classification, we need to use two sets of transforms: one to numericalize the texts and the other to encode the labels as categories.
###Code
dsrc = DataSource(texts, [[read_file, Numericalize(vocab)], [parent_label, Categorize()]], filts=splits, dl_type=SortedDL)
bs = 64
dbunch = dsrc.databunch(before_batch=pad_input, after_batch=Cuda, bs=bs)
dbunch.show_batch(max_n=2)
###Output
_____no_output_____
###Markdown
Then we once again have a convenience function to create a classifier from this `DataBunch` with the `AWD_LSTM` architecture.
###Code
opt_func = partial(Adam, wd=0.1)
learn = text_classifier_learner(dbunch, AWD_LSTM, vocab, metrics=[accuracy], path=path, drop_mult=0.5, opt_func=opt_func)
###Output
_____no_output_____
###Markdown
We load our pretrained encoder.
###Code
learn = learn.load_encoder('finetuned')
learn.to_fp16(clip=0.1)
###Output
_____no_output_____
###Markdown
Then we can train with gradual unfreezing and differential learning rates.
###Code
lr = 1e-1 * bs/128
learn.fit_one_cycle(1, lr, moms=(0.8,0.7,0.8), wd=0.1)
learn.opt = learn.create_opt()
learn.freeze_to(-2)
lr /= 2
learn.fit_one_cycle(1, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
learn.opt = learn.create_opt()
learn.freeze_to(-3)
lr /= 2
learn.fit_one_cycle(1, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
learn.opt = learn.create_opt()
learn.unfreeze()
lr /= 5
learn.fit_one_cycle(2, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
###Output
_____no_output_____
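###Markdown
For reference, the learning-rate bookkeeping above works out to top rates of lr, lr/2, lr/4 and lr/20 for the four stages, each paired with a lower bound 2.6**4 (about 45.7) times smaller, assuming the two endpoints are spread across the layer groups as in the ULMFiT recipe. The tabulation below is plain arithmetic (with bs = 64 as above), independent of the fastai internals:
###Code
lr0 = 1e-1 * 64/128                      # same starting rate as above for bs = 64
for stage, top in enumerate([lr0, lr0/2, lr0/4, lr0/20], 1):
    print(f"stage {stage}: top lr = {top:.5f}, bottom lr = {top/2.6**4:.7f}")
###Output
_____no_output_____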
###Markdown
ULMFiT Finetune a pretrained Language Model First we get our data and tokenize it.
###Code
path = untar_data(URLs.IMDB)
tokenize_folder(path, folders=['train', 'test', 'unsup'])
path = untar_data(URLs.IMDB).parent/'imdb_tok'
count = pickle.load(open(path/'counter.pkl', 'rb'))
vocab = make_vocab(count)
texts = get_files(path, extensions=['.txt'])
len(texts)
###Output
_____no_output_____
###Markdown
Then we put it in a `DataSource`. For a language model, we don't have targets, so there is only one transform to numericalize the texts. Note that the tokenization step also saved the word counts of the corpus (the `counter.pkl` loaded above), which makes it easy to create a vocabulary.
###Code
def read_file(f): return L(f.read().split(' '))
splits = RandomSplitter()(texts)
vocab = make_vocab(count)
dsrc = DataSource(texts, [[read_file, Numericalize(vocab)]], filts=splits, dl_type=LMDataLoader)
###Output
_____no_output_____
###Markdown
Then we use that `DataSource` to create a `DataBunch`. Here the class of `TfmdDL` we need to use is `LMDataLoader`, which concatenates all the texts in the source (with a shuffle at each epoch for the training set), splits the resulting stream into `bs` contiguous chunks, and then reads through them continuously.
###Code
bs,sl=256,80
lens = [int(f.with_suffix('.len').read()) for f in texts]  # per-text lengths stored alongside the tokenized files
dbunch_lm = dsrc.databunch(lens=lens, bs=bs, seq_len=sl, val_bs=bs, after_batch=Cuda)
dbunch_lm.show_batch()
###Output
_____no_output_____
###Markdown
Then we have a convenience method to directly grab a `Learner` from it, using the `AWD_LSTM` architecture.
###Code
opt_func = partial(Adam, wd=0.1)
learn = language_model_learner(dbunch_lm, AWD_LSTM, vocab, opt_func=opt_func, metrics=[accuracy, Perplexity()], path=path)
learn.to_fp16(clip=0.1)
learn.fit_one_cycle(1, 2e-2, moms=(0.8,0.7,0.8))
learn.save('stage1')
learn.load('stage1')
learn.opt = learn.create_opt() #Need this because of FP16 for now
learn.unfreeze()
learn.fit_one_cycle(10, 2e-3, moms=(0.8,0.7,0.8))
###Output
_____no_output_____
###Markdown
Once we have fine-tuned the pretrained language model to this corpus, we save the encoder since we will use it for the classifier.
###Code
learn.save_encoder('finetuned')
###Output
_____no_output_____
###Markdown
Use it to train a classifier
###Code
texts = get_files(path, extensions=['.txt'], folders=['train', 'test'])
splits = GrandparentSplitter(valid_name='test')(texts)
###Output
_____no_output_____
###Markdown
For classification, we need to use two sets of transforms: one to numericalize the texts and the other to encode the labels as categories.
###Code
dsrc = DataSource(texts, [[read_file, Numericalize(vocab)], [parent_label, Categorize()]], filts=splits, dl_type=SortedDL)
bs = 64
dbunch = dsrc.databunch(create_batch=pad_collate, after_batch=Cuda, bs=bs)
dbunch.show_batch(max_n=2)
###Output
_____no_output_____
###Markdown
Then we once again have a convenience function to create a classifier from this `DataBunch` with the `AWD_LSTM` architecture.
###Code
opt_func = partial(Adam, wd=0.1)
learn = text_classifier_learner(dbunch, AWD_LSTM, vocab, metrics=[accuracy], path=path, drop_mult=0.5, opt_func=opt_func)
###Output
_____no_output_____
###Markdown
We load our pretrained encoder.
###Code
learn = learn.load_encoder('finetuned')
learn.to_fp16(clip=0.1)
###Output
_____no_output_____
###Markdown
Then we can train with gradual unfreezing and differential learning rates.
###Code
lr = 1e-1 * bs/128
learn.fit_one_cycle(1, lr, moms=(0.8,0.7,0.8), wd=0.1)
learn.opt = learn.create_opt()
learn.freeze_to(-2)
lr /= 2
learn.fit_one_cycle(1, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
learn.opt = learn.create_opt()
learn.freeze_to(-3)
lr /= 2
learn.fit_one_cycle(1, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
learn.opt = learn.create_opt()
learn.unfreeze()
lr /= 5
learn.fit_one_cycle(2, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
###Output
_____no_output_____
###Markdown
ULMFiT Finetune a pretrained Language Model First we get our data and tokenize it.
###Code
path = untar_data(URLs.IMDB)
tokenize_folder(path, folders=['train', 'test', 'unsup'])
path = untar_data(URLs.IMDB).parent/'imdb_tok'
count = pickle.load(open(path/'counter.pkl', 'rb'))
vocab = make_vocab(count)
texts = get_files(path, extensions=['.txt'])
len(texts)
###Output
_____no_output_____
###Markdown
Then we put it in a `DataSource`. For a language model, we don't have targets, so there is only one transform to numericalize the texts. Note that the tokenization step also saved the word counts of the corpus (the `counter.pkl` loaded above), which makes it easy to create a vocabulary.
###Code
def read_file(f): return L(f.read().split(' '))
splits = RandomSplitter(valid_pct=0.1)(texts)
vocab = make_vocab(count)
dsrc = DataSource(texts, [[read_file, Numericalize(vocab)]], splits=splits, dl_type=LMDataLoader)
###Output
_____no_output_____
###Markdown
Then we use that `DataSource` to create a `DataBunch`. Here the class of `TfmdDL` we need to use is `LMDataLoader`, which concatenates all the texts in the source (with a shuffle at each epoch for the training set), splits the resulting stream into `bs` contiguous chunks, and then reads through them continuously.
###Code
bs,sl=256,80
dbunch_lm = dsrc.databunch(bs=bs, seq_len=sl, val_bs=bs, after_batch=Cuda)
dbunch_lm.show_batch()
###Output
_____no_output_____
###Markdown
Then we have a convenience method to directly grab a `Learner` from it, using the `AWD_LSTM` architecture.
###Code
opt_func = partial(Adam, wd=0.1)
learn = language_model_learner(dbunch_lm, AWD_LSTM, vocab, opt_func=opt_func, metrics=[accuracy, Perplexity()], path=path)
learn = learn.to_fp16(clip=0.1)
learn.fit_one_cycle(1, 2e-2, moms=(0.8,0.7,0.8))
learn.save('stage1')
learn.load('stage1');
learn.unfreeze()
learn.fit_one_cycle(10, 2e-3, moms=(0.8,0.7,0.8))
###Output
_____no_output_____
###Markdown
Once we have fine-tuned the pretrained language model to this corpus, we save the encoder since we will use it for the classifier.
###Code
learn.save_encoder('finetuned1')
###Output
_____no_output_____
###Markdown
Use it to train a classifier
###Code
texts = get_files(path, extensions=['.txt'], folders=['train', 'test'])
splits = GrandparentSplitter(valid_name='test')(texts)
###Output
_____no_output_____
###Markdown
For classification, we need to use two sets of transforms: one to numericalize the texts and the other to encode the labels as categories.
###Code
dsrc = DataSource(texts, [[read_file, Numericalize(vocab)], [parent_label, Categorize()]], splits=splits, dl_type=SortedDL)
bs = 64
dbunch = dsrc.databunch(before_batch=pad_input, after_batch=Cuda, bs=bs)
dbunch.show_batch(max_n=2)
###Output
_____no_output_____
###Markdown
Then we once again have a convenience function to create a classifier from this `DataBunch` with the `AWD_LSTM` architecture.
###Code
opt_func = partial(Adam, wd=0.1)
learn = text_classifier_learner(dbunch, AWD_LSTM, vocab, metrics=[accuracy], path=path, drop_mult=0.5, opt_func=opt_func)
###Output
_____no_output_____
###Markdown
We load our pretrained encoder.
###Code
learn = learn.load_encoder('finetuned1')
learn = learn.to_fp16(clip=0.1)
###Output
_____no_output_____
###Markdown
Then we can train with gradual unfreezing and differential learning rates.
###Code
lr = 1e-1 * bs/128
learn.fit_one_cycle(1, lr, moms=(0.8,0.7,0.8), wd=0.1)
learn.freeze_to(-2)
lr /= 2
learn.fit_one_cycle(1, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
learn.freeze_to(-3)
lr /= 2
learn.fit_one_cycle(1, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
learn.unfreeze()
lr /= 5
learn.fit_one_cycle(2, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
###Output
_____no_output_____
###Markdown
ULMFiT Finetune a pretrained Language Model First we get our data and tokenize it.
###Code
path = untar_data(URLs.IMDB)
tokenize_folder(path, folders=['train', 'test', 'unsup'])
path = untar_data(URLs.IMDB).parent/'imdb_tok'
count = pickle.load(open(path/'counter.pkl', 'rb'))
vocab = make_vocab(count)
texts = get_files(path, extensions=['.txt'])
lens = [int(f.with_suffix('.len').read()) for f in texts]
len(texts),len(lens)
###Output
_____no_output_____
###Markdown
Then we put it in a `DataSource`. For a language model, we don't have targets, so there is only one transform to numericalize the texts. Note that the tokenization step also saved the word counts of the corpus (the `counter.pkl` loaded above), which makes it easy to create a vocabulary.
###Code
def read_file(f): return L(f.read().split(' '))
splits = RandomSplitter()(texts)
vocab = make_vocab(count)
dsrc = DataSource(texts, [[read_file, Numericalize(vocab)]], filts=splits)
###Output
_____no_output_____
###Markdown
Then we use that `DataSource` to create a `DataBunch`. Here the class of `TfmdDL` we need to use is `LMDataLoader`, which concatenates all the texts in the source (with a shuffle at each epoch for the training set), splits the resulting stream into `bs` contiguous chunks, and then reads through them continuously.
###Code
bs,sl=256,80
dbunch_lm = LMDataLoader.dbunchify(dsrc, lens=lens, bs=bs, seq_len=sl, val_bs=bs)
dbunch_lm.show_batch()
###Output
_____no_output_____
###Markdown
Then we have a convenience method to directly grab a `Learner` from it, using the `AWD_LSTM` architecture.
###Code
opt_func = partial(Adam, wd=0.1)
learn = language_model_learner(dbunch_lm, AWD_LSTM, vocab, opt_func=opt_func, metrics=[accuracy, Perplexity()], path=path)
learn.to_fp16(clip=0.1)
learn.fit_one_cycle(1, 2e-2, moms=(0.8,0.7,0.8))
learn.save('stage1')
learn.load('stage1')
learn.opt = learn.create_opt() #Need this because of FP16 for now
learn.unfreeze()
learn.fit_one_cycle(10, 2e-3, moms=(0.8,0.7,0.8))
###Output
_____no_output_____
###Markdown
Once we have fine-tuned the pretrained language model to this corpus, we save the encoder since we will use it for the classifier.
###Code
learn.save_encoder('finetuned')
###Output
_____no_output_____
###Markdown
Use it to train a classifier
###Code
texts = get_files(path, extensions=['.txt'], folders=['train', 'test'])
splits = GrandparentSplitter(valid_name='test')(texts)
###Output
_____no_output_____
###Markdown
For classification, we need to use two sets of transforms: one to numericalize the texts and the other to encode the labels as categories.
###Code
dsrc = DataSource(texts, [[read_file, Numericalize(vocab)], [parent_label, Categorize()]], filts=splits)
lens = [int(f.with_suffix('.len').read()) for f in texts]
res = [L(lens, use_list=None)[f] for f in dsrc.filts]
bs = 64
trn_dl = SortedDL(dsrc.train, res=res[0], create_batch=pad_collate, after_batch=[Cuda], shuffle=True, drop_last=True, bs=bs)
val_dl = SortedDL(dsrc.valid, res=res[1], create_batch=pad_collate, after_batch=[Cuda], bs=bs)
dbunch = DataBunch(trn_dl, val_dl)
dbunch.show_batch(max_n=2)
###Output
_____no_output_____
###Markdown
Then we once again have a convenience function to create a classifier from this `DataBunch` with the `AWD_LSTM` architecture.
###Code
opt_func = partial(Adam, wd=0.1)
learn = text_classifier_learner(dbunch, AWD_LSTM, vocab, metrics=[accuracy], path=path, drop_mult=0.5, opt_func=opt_func)
###Output
_____no_output_____
###Markdown
We load our pretrained encoder.
###Code
learn = learn.load_encoder('finetuned')
learn.to_fp16(clip=0.1)
###Output
_____no_output_____
###Markdown
Then we can train with gradual unfreezing and differential learning rates.
###Code
lr = 1e-1 * bs/128
learn.fit_one_cycle(1, lr, moms=(0.8,0.7,0.8), wd=0.1)
learn.opt = learn.create_opt()
learn.freeze_to(-2)
lr /= 2
learn.fit_one_cycle(1, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
learn.opt = learn.create_opt()
learn.freeze_to(-3)
lr /= 2
learn.fit_one_cycle(1, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
learn.opt = learn.create_opt()
learn.unfreeze()
lr /= 5
learn.fit_one_cycle(2, slice(lr/(2.6**4),lr), moms=(0.8,0.7,0.8), wd=0.1)
###Output
_____no_output_____ |
examples/gkt/compute_test_function_errors.ipynb | ###Markdown
Notebook for evaluating various test function errors ONLY AFTER the coresets are already generated
###Code
import numpy as np
import numpy.random as npr
import numpy.linalg as npl
from scipy.spatial.distance import pdist
from argparse import ArgumentParser
import pickle as pkl
import pathlib
import os
import os.path
# import kernel thinning
from goodpoints import kt # kt.thin is the main thinning function; kt.split and kt.swap are other important functions
from goodpoints.util import isnotebook # Check whether this file is being executed as a script or as a notebook
from goodpoints.util import fprint # for printing while flushing buffer
from goodpoints.tictoc import tic, toc # for timing blocks of code
# utils for generating samples, evaluating kernels, and mmds
from util_sample import sample, compute_mcmc_params_p, compute_diag_mog_params, sample_string, compute_params_p
from util_k_mmd import kernel_eval, p_kernel, ppn_kernel, pp_kernel, pnpn_kernel, squared_mmd, get_combined_results_filename, compute_params_k
from util_parse import init_parser, convert_arg_flags
# for partial functions, to use kernel_eval for kernel
from functools import partial
# set things a bit when running the notebook
if isnotebook():
# Autoreload packages that are modified
%load_ext autoreload
%autoreload 2
%matplotlib inline
%load_ext line_profiler
# https://jakevdp.github.io/PythonDataScienceHandbook/01.07-timing-and-profiling.html
###Output
_____no_output_____
###Markdown
1. Function for loading input and coresets
###Code
def load_input_and_coreset(m, params_p, params_k_split, params_k_swap, rep_id, thin_str="", delta=0.5,
sample_seed=1234567, thin_seed=9876543, results_dir="results_new", verbose=False):
"""Return exisiting KT coresets by loading from disk, and the associated MC points used for finding the coresets
along with ST coresets
Gives error if the coreset does not exist
Args:
m: Number of halving rounds (number of sample points n = 2^{2m})
params_p: Dictionary of distribution parameters recognized by sample()
params_k_split: Dictionary of kernel parameters recognized by kernel() # used for kt split
params_k_swap: Dictionary of kernel parameters recognized by kernel() # used for kt swap; and computing mmd
rep_id: A single rep id for which coreset to be returned
thin_str: (Optional), str to be appended to filenames when loading coresets other than KT and iid/ST, e.g., kt.split + rand
      delta: delta/(4^m) is the failure probability for the
        adaptive threshold sequence
sample_seed: (Optional) random seed is set to sample_seed + rep
prior to generating input sample for replication rep
thin_seed: (Optional) random seed is set to thin_seed + rep
prior to running thinning for replication rep
results_dir: (Optional) Directory in which results is to be loaded from
verbose: (Optional) If True, print intermediate updates
"""
d = params_p["d"]
assert(d == params_k_split["d"])
assert(d == params_k_swap["d"])
sample_str = sample_string(params_p, sample_seed)
split_kernel_str = "{}_var{:.3f}_seed{}".format(params_k_split["name"], params_k_split["var"], thin_seed)
swap_kernel_str = "{}_var{:.3f}".format(params_k_swap["name"], params_k_swap["var"])
thresh_str = f"delta{delta}"
file_template = os.path.join(results_dir, f"kt{thin_str}-coresets-{sample_str}-split{split_kernel_str}-swap{swap_kernel_str}-d{d}-m{m}-{thresh_str}-rep{{}}.pkl")
filename = file_template.format(rep_id)
n = int(2**(2*m))
ncoreset = int(2**m)
X = sample(n, params_p, seed=sample_seed+rep_id)
if os.path.exists(filename):
with open(filename, 'rb') as file:
if verbose:
print(f"Loading KT coreset indices from {filename}")
coresets = pkl.load(file)
else:
raise ValueError(f"File {filename} not found")
if verbose:
print(f"Returning all {n} input MC points and {ncoreset} KT points")
return(X, coresets[:ncoreset])
###Output
_____no_output_____
###Markdown
2. Function for evaluating integration errors
###Code
def evaluate_fun_approx_quality(fun_str, ms, params_p, params_k_split, params_k_swap, rep_ids,
thin_str="", delta=0.5, sample_seed=1234567, thin_seed=9876543,
compute_fun_diff = True, rerun=False, results_dir="results_new", return_val=False):
"""Returns |Pinf-Pout f|, |Pf - Poutf| for KT/KTrt/KT+ and ST
Args:
fun_str: the test function to be evaluated must be in the list
{k0, x, x^2, pk, kmean, x1x2, cov, l1_x, linf_x, cif, gfun, cos, cosg, kernel}
- some functions might have constraints on what settings allowed
- add another function by adding an "if block" in this code
ms: range of output coreset sizes (2^m for m in ms)
params_p: Dictionary of distribution parameters recognized by sample()
params_k_split: Dictionary of kernel parameters recognized by kernel_eval()
params_k_swap: Dictionary of kernel parameters recognized by kernel_eval()
rep_ids: Which replication numbers of experiment to run; the replication
number determines the seeds set for reproducibility
thin_str: (Optional), str to be appended to filenames when loading coresets other than KT/KT power and iid/ST, e.g., "-plus" for KT+
delta: delta/(4^m) is the failure probability for
adaptive threshold sequence;
sample_seed: (Optional) random seed is set to sample_seed + rep
prior to generating input sample for replication rep
thin_seed: (Optional) random seed is set to thin_seed + rep
prior to running thinning for replication rep
      compute_fun_diff: (Optional) currently unused by this function
rerun: (Optional) If False and results have been previously saved to
disk, load results from disk instead of recomputing the errors
results_dir: (Optional) Directory in which results should be saved
return_val:(Optional) Whether to return the |Pinf-Pout f|, |Pf - Poutf| for KT, and ST (4 quantities in total) OR not
"""
# Create results directory if necessary
pathlib.Path(results_dir).mkdir(parents=True, exist_ok=True)
d = params_p["d"]
assert(d==params_k_split["d"])
assert(d==params_k_swap["d"])
# create split and swap kernel functions from the parameters
split_kernel = partial(kernel_eval, params_k=params_k_split)
swap_kernel = partial(kernel_eval, params_k=params_k_swap)
# Construct results filename template with placeholder for rep value
sample_str = sample_string(params_p, sample_seed)
split_kernel_str = "{}_var{:.3f}_seed{}".format(params_k_split["name"], params_k_split["var"], thin_seed)
swap_kernel_str = "{}_var{:.3f}".format(params_k_swap["name"], params_k_swap["var"])
thresh_str = f"delta{delta}"
    orig_fun_str = fun_str # keep the original name for convenience, as we change fun_str below to reuse previous results
if fun_str == 'k0': fun_str = "" # changed to remain consistent with previous computations
########### DEFINE FUNCTIONS AND COMPUTE Pf WHENEVER COMPUTABLE ##########
# If Pf is not available, then we later set Pf = Pinf (which is always COMPUTABLE)
if fun_str == "": # f(x) = k(0, x)
yloc = np.zeros((1, d))
fun = partial(swap_kernel, y=yloc)
p_fun = p_kernel(yloc, params_k=params_k_swap, params_p=params_p)[0] # fun is fixed to be k(yloc, .)
if fun_str == 'pk':
fun = partial(p_kernel, params_k=params_k_swap, params_p=params_p)
p_fun = pp_kernel(params_k_swap, params_p)
if fun_str == 'kmean': # f(x)=Pk(x), enabled only for MCMC experiments where P is fixed to Phat/Pnmax
assert("Pnmax" in params_p)
yloc = params_p["Pnmax"].mean(0).reshape(1, -1) # mean over samples
fun = partial(swap_kernel, y=yloc)
p_fun = p_kernel(yloc, params_k=params_k_swap, params_p=params_p)[0] # fun is fixed to be k(yloc, .)
if fun_str == 'x': # first coordinate
def fun(x): return(x[:,0])
if params_p["name"] == "gauss":
p_fun = 0.
if "Pnmax" in params_p:
p_fun = np.mean(fun(params_p["Pnmax"]))
if params_p["name"] == "diag_mog" and (len(params_p["weights"])==4 or len(params_p["weights"])==8):
p_fun = 0.
if fun_str == 'x1x2': # product of first two coordinates
def fun(x): return(x[:,0]*x[:,1])
if params_p["name"] == "gauss":
p_fun = 0.
if "Pnmax" in params_p:
p_fun = np.mean(fun(params_p["Pnmax"]))
if params_p["name"] == "diag_mog" and (len(params_p["weights"])==4 or len(params_p["weights"])==8):
p_fun = 0.
if fun_str == 'cov': # Covariance function E[(X-mu)(X-mu)^T]
assert(params_p["name"]=="gauss")
def fun(X): return(X.T.dot(X)/X.shape[0]-np.outer(X.mean(0),X.mean(0)))
if params_p["name"] == "gauss":
p_fun = params_p["var"]*np.eye(d)
if fun_str == 'l1_x' or fun_str == 'linf_x': # want to compute |P X-Pout X|_1 and |P X-Pout X|_inf, so here we compute PX
def fun(x): return(x)
if params_p["name"] == "gauss":
p_fun = np.zeros(d)
if "Pnmax" in params_p:
p_fun = np.mean(fun(params_p["Pnmax"]), 0)
if params_p["name"] == "diag_mog" and (len(params_p["weights"])==4 or len(params_p["weights"])==8):
p_fun = np.zeros(d)
if fun_str == 'x^2': # first coordinate squared
def fun(x): return(x[:, 0]**2)
if params_p["name"] == "gauss":
p_fun = params_p["var"]
if "Pnmax" in params_p:
p_fun = np.mean(fun(params_p["Pnmax"]), 0)
if params_p["name"] == "diag_mog" and (len(params_p["weights"])==4 or len(params_p["weights"])==8):
p_fun = params_p["mean_sqdist"]/4.
# for these functions Pf is not directly available; and we only compute Pinf (And set Pf = Pinf)
if fun_str == 'cif': # https://www.sfu.ca/~ssurjano/cont.html
def fun(x):
# function from here
d = x.shape[1]
u = npr.default_rng(0).uniform(size=(1, d))
a = 1. / d * np.ones(d)
return(np.exp(np.sum(-np.abs(x-u.reshape(1, -1)) * a, axis=1 ) ))
if fun_str == "gfun": # https://www.sfu.ca/~ssurjano/gfunc.html
def fun(x):
# function from here
d = x.shape[1]
a = 0.5*np.arange(1, d+1) - 1
return(np.prod((np.abs(4*x - 2) + a.reshape(1, -1) ) / (1+a), axis=1))
if fun_str == "cos": # cosine function; https://www.sfu.ca/~ssurjano/oscil.html
def fun(x):
d = x.shape[1]
u = npr.default_rng(0).uniform()
return(np.cos(2*np.pi*u+ 5./d * np.sum(x , axis=1 ) ))
if fun_str == "cosg": # cosine * gaussian function; p
def fun(x):
d = x.shape[1]
u = npr.default_rng(0).uniform()
return(np.exp(-5./d*np.sum(x**2, axis=1)) * np.cos(2*np.pi*u+ 5./d * np.sum(x , axis=1 ) ))
if fun_str == "kernel": # f(x) = k(X', x) for X' ~ P
if params_p["name"] != "gauss" and "mog" not in params_p["name"]:
u = npr.default_rng(100).choice(len(params_p["Pnmax"]))
u = 2 * params_p["Pnmax"][u] - np.mean(params_p["Pnmax"], 0)
else:
            # generate a random variable
u = sample(1, params_p, npr.default_rng(0)) # 2 * np.sqrt(params_p["var"]) * npr.default_rng(0).standard_normal(size=(1, params_p["d"]))
def fun(x):
d = x.shape[1]
return(kernel_eval(x, u, params_k=params_k_swap))
if params_p["name"] == "gauss" and params_k["name"] == "gauss":
p_fun = p_kernel(y=u, params_k=params_k_swap, params_p=params_p)
########### COMPUTE Pinf and Poutf ##########
# initialize matrices of entries
fun_diff_p = np.zeros((len(ms), len(rep_ids))) # for Pf - Pout f
fun_diff_p_sin = np.zeros((len(ms), len(rep_ids))) # Pinf - Pout f
fun_diff_p_st = np.zeros((len(ms), len(rep_ids))) # Pf - Pout f for standard thinning
fun_diff_p_sin_st = np.zeros((len(ms), len(rep_ids))) # Pinf - Pout f for standard thinning
fprint(f"Evaluating coresets for function {orig_fun_str} for setting \
{get_combined_results_filename('', ms, params_p, params_k_split=params_k_split, params_k_swap=params_k_swap, rep_ids=rep_ids, delta=delta)}.....")
generic_prefixes = [f"-combinedfundiff{fun_str}-", f"-sin-combinedfundiff{fun_str}-"]
compute_st = False
compute_kt = False
# check if things are already stored then don't compute the respective items
if not rerun:
prefixes = ["mc" + prefix for prefix in generic_prefixes]
for prefix in prefixes:
filename = get_combined_results_filename(prefix, ms, params_p, params_k_split=params_k_split, params_k_swap=params_k_swap, rep_ids=rep_ids, delta=delta)
if not os.path.exists(filename):
compute_st = True
prefixes = [f"kt{thin_str}" + prefix for prefix in generic_prefixes]
for prefix in prefixes:
filename = get_combined_results_filename(prefix, ms, params_p, params_k_split=params_k_split, params_k_swap=params_k_swap, rep_ids=rep_ids, delta=delta)
if not os.path.exists(filename):
compute_kt = True
else:
compute_st = True
compute_kt = True
if compute_st or compute_kt:
for m in ms:
# print(m)
for r_i, rep in enumerate(rep_ids):
# load coresets
Xin, kt_coresets = load_input_and_coreset(m, params_p, params_k_split, params_k_swap, rep_id=rep, thin_str=thin_str, delta=delta,
sample_seed=sample_seed, thin_seed=thin_seed, results_dir=results_dir, verbose=False)
# compute Pinf for various functions
if fun_str == 'cov':
pin_fun = fun(Xin)
elif fun_str not in ['cif', 'gfun', 'cos' ,'cosg', 'kernel']:
pin_fun = np.mean(fun(Xin), 0) if not params_p["saved_samples"] else p_fun # to save time, ignore pk setting with pin for mcmc cases
else:
pin_fun = np.mean(fun(Xin), 0)
if 'p_fun' not in locals(): p_fun = pin_fun
# compute Pout f for KT
if compute_kt:
if fun_str == 'cov':
pout_fun_kt = fun(Xin[kt_coresets])
else:
pout_fun_kt = np.mean(fun(Xin[kt_coresets]), 0)
if fun_str == 'l1_x' or fun_str == 'cov':
multiply_factor = 1. if fun_str == 'l1_x' else 1./d**2 # normalize by d^2 for covariance
fun_diff_p[m, r_i] = multiply_factor * np.sum(np.abs(p_fun-pout_fun_kt))
fun_diff_p_sin[m, r_i] = multiply_factor * np.sum(np.abs(pin_fun-pout_fun_kt))
elif fun_str == 'linf_x':
fun_diff_p[m, r_i] = max(np.abs(p_fun-pout_fun_kt))
fun_diff_p_sin[m, r_i] = max(np.abs(pin_fun-pout_fun_kt))
else:
fun_diff_p[m, r_i] = np.abs(p_fun-pout_fun_kt)
fun_diff_p_sin[m, r_i] = np.abs(pin_fun-pout_fun_kt)
# compute Poutf for ST
if compute_st:
step = int(2**m)
if fun_str == 'cov':
pout_fun_st = fun(Xin[step-1:int(2**(2*m)):step])
else:
pout_fun_st = np.mean(fun(Xin[step-1:int(2**(2*m)):step]))
if fun_str == 'l1_x' or fun_str == 'cov':
multiply_factor = 1. if fun_str == 'l1_x' else 1./d**2
fun_diff_p_st[m, r_i] = multiply_factor * np.sum(np.abs(p_fun-pout_fun_st))
fun_diff_p_sin_st[m, r_i] = multiply_factor * np.sum(np.abs(pin_fun-pout_fun_st))
elif fun_str == 'linf_x':
fun_diff_p_st[m, r_i] = max(np.abs(p_fun-pout_fun_st))
fun_diff_p_sin_st[m, r_i] = max(np.abs(pin_fun-pout_fun_st))
else:
fun_diff_p_st[m, r_i] = np.abs(p_fun-pout_fun_st)
fun_diff_p_sin_st[m, r_i] = np.abs(pin_fun-pout_fun_st)
########### SAVE RESULTS ##########
    # standard thinning results are saved with the "mc" prefix
prefixes = ["mc" + prefix for prefix in generic_prefixes]
if compute_st:
for prefix, data_array in zip(prefixes, [fun_diff_p_st, fun_diff_p_sin_st]):
filename = get_combined_results_filename(prefix, ms, params_p, params_k_split=params_k_split, params_k_swap=params_k_swap, rep_ids=rep_ids, delta=delta)
with open(filename, 'wb') as file:
print(f"Saving {prefix} to {filename}")
pkl.dump(data_array, file, protocol=pkl.HIGHEST_PROTOCOL)
else:
if return_val:
prefix = prefixes[0]
filename = get_combined_results_filename(prefix, ms, params_p, params_k_split=params_k_split, params_k_swap=params_k_swap, rep_ids=rep_ids, delta=delta)
with open(filename, 'rb') as file:
print(f"Loading {prefix} from {filename}")
fun_diff_p_st = pkl.load(file)
prefix = prefixes[1]
filename = get_combined_results_filename(prefix, ms, params_p, params_k_split=params_k_split, params_k_swap=params_k_swap, rep_ids=rep_ids, delta=delta)
with open(filename, 'rb') as file:
print(f"Loading {prefix} from {filename}")
fun_diff_p_sin_st = pkl.load(file)
prefixes = [f"kt{thin_str}" + prefix for prefix in generic_prefixes]
if compute_kt:
for prefix, data_array in zip(prefixes, [fun_diff_p, fun_diff_p_sin]):
filename = get_combined_results_filename(prefix, ms, params_p, params_k_split=params_k_split, params_k_swap=params_k_swap, rep_ids=rep_ids, delta=delta)
with open(filename, 'wb') as file:
print(f"Saving {prefix} to {filename}")
pkl.dump(data_array, file, protocol=pkl.HIGHEST_PROTOCOL)
else:
if return_val:
prefix = prefixes[0]
filename = get_combined_results_filename(prefix, ms, params_p, params_k_split=params_k_split, params_k_swap=params_k_swap, rep_ids=rep_ids, delta=delta)
with open(filename, 'rb') as file:
print(f"Loading {prefix} from {filename}")
fun_diff_p = pkl.load(file)
prefix = prefixes[1]
filename = get_combined_results_filename(prefix, ms, params_p, params_k_split=params_k_split, params_k_swap=params_k_swap, rep_ids=rep_ids, delta=delta)
with open(filename, 'rb') as file:
print(f"Loading {prefix} from {filename}")
fun_diff_p_sin = pkl.load(file)
if return_val:
return(fun_diff_p, fun_diff_p_sin, fun_diff_p_st, fun_diff_p_sin_st)
###Output
_____no_output_____
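###Markdown
To make the standard-thinning (ST) baseline above concrete: the slice `Xin[2**m - 1 : 2**(2*m) : 2**m]` keeps every 2^m-th of the 2^(2m) input points, leaving a coreset of size 2^m. A toy check of that index pattern (illustrative only):
###Code
m_toy = 2
idx = np.arange(int(2**(2*m_toy)))                                  # 16 input indices
st_idx = idx[int(2**m_toy)-1 : int(2**(2*m_toy)) : int(2**m_toy)]   # every 4th point
print(st_idx, len(st_idx))
###Output
_____no_output_____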
###Markdown
Initialize arguments
###Code
# if isnotebook():
parser = init_parser()
args, opt = parser.parse_known_args()
args = convert_arg_flags(args)
###Output
_____no_output_____
###Markdown
Gauss Experiments Results
###Code
rerun = True # whether to rerun the integration error computations
args.d = 2 # d
args.kernel = "gauss" # kernel
args.P = "gauss" # target P setting; allowed values depend on the feasible arguments in compute_params_p; currently {gauss, mog, mcmc}
args.computepower = True # whether to compute results for power KT (same as root KT when power = 0.5)
args.power = 0.5 # power of the kernel to be used for power KT and KT+
args.ktplus = False # whether to run results for KT+
args.targetkt = True # whether to run results for target KT
args.powerkt = True # whether to run results for power KT
fun_strs = ['kernel', 'x', 'cif'] # which functions to evaluate
# for gauss P the only free parameter is dimension d; everything else is computed in compute_params_p/compute_params_k
ds = [2, 10, 20, 50, 100]
for d in ds:
args.d = d
d, params_p, var_k = compute_params_p(args)
params_k, params_k_power = compute_params_k(args, var_k, args.computepower, args.power)
if args.ktplus: # if running KT+, need to define the KT+ kernel called as params_k_combo
assert(args.power is not None)
params_k_combo = dict()
params_k_combo["name"] = "combo_" + params_k["name"] + f"_{args.power}"
params_k_combo["k"] = params_k.copy()
params_k_combo["kpower"] = params_k_power.copy()
params_k_combo["var"] = params_k["var"]
params_k_combo["d"] = args.d
params_k_split_list = []
thin_str_list = []
if args.targetkt:
params_k_split_list.append(params_k)
thin_str_list.append("")
if args.powerkt:
params_k_split_list.append(params_k_power)
thin_str_list.append("")
if args.ktplus:
params_k_split_list.append(params_k_combo)
thin_str_list.append("-plus")
for fun_str in fun_strs:
for params_k_split, thin_str in zip(params_k_split_list, thin_str_list):
fun_diff_p, fun_diff_p_sin, fun_diff_p_st, fun_diff_p_sin_st = evaluate_fun_approx_quality(fun_str=fun_str,
ms=range(7+1), params_p=params_p, params_k_split=params_k_split, params_k_swap=params_k, rep_ids=range(10),
thin_str=thin_str,
delta=0.5,
sample_seed=1234567, thin_seed=9876543,
rerun=rerun, results_dir="results_new",return_val = True)
###Output
_____no_output_____
###Markdown
MCMC results
###Code
all_mcmc_filenames = np.array(['Hinch_P_seed_1_temp_1_scaled', 'Hinch_P_seed_2_temp_1_scaled',
'Hinch_TP_seed_1_temp_8_scaled', 'Hinch_TP_seed_2_temp_8_scaled',
'Goodwin_RW_float_step', 'Goodwin_ADA-RW_float_step',
'Goodwin_MALA_float_step', 'Goodwin_PRECOND-MALA_float_step',
'Lotka_RW_float_step', 'Lotka_ADA-RW_float_step',
'Lotka_MALA_float_step', 'Lotka_PRECOND-MALA_float_step'])
###Output
_____no_output_____
###Markdown
Lotka and Goodwin
###Code
rerun = True
args.d = int(4)
args.kernel, args.power = "laplace", 0.81 # we used different power for different kernels
# args.kernel, args.power = "imq", 0.5
# args.kernel, args.power = "gauss", 0.5
args.P = "mcmc" # target P setting; allowed values depend on the feasible arguments in compute_params_p; currently {gauss, mog, mcmc}
args.computepower = True # whether to compute results for power KT (same as root KT when power = 0.5)
args.ktplus = True # whether to run results for KT+
args.targetkt = False # whether to run results for Target KT
args.powerkt = False # whether to run results for power KT
file_idx = range(4, 12) # indices of the Goodwin and Lotka chains; this block runs KT+ only for those
fun_strs = ['kernel', 'x', 'x^2', 'cif'] #, 'x', 'x^2', 'cif']
for filename in all_mcmc_filenames[file_idx]:
args.filename = filename
d, params_p, var_k = compute_params_p(args)
args.d = d
params_k, params_k_power = compute_params_k(args, var_k, args.computepower, args.power)
if args.ktplus:
assert(args.power is not None)
params_k_combo = dict()
params_k_combo["name"] = "combo_" + params_k["name"] + f"_{args.power}"
params_k_combo["k"] = params_k.copy()
params_k_combo["kpower"] = params_k_power.copy()
params_k_combo["var"] = params_k["var"]
params_k_combo["d"] = args.d
params_k_split_list = []
thin_str_list = []
if args.targetkt:
params_k_split_list.append(params_k)
thin_str_list.append("")
if args.powerkt:
params_k_split_list.append(params_k_power)
thin_str_list.append("")
if args.ktplus:
params_k_split_list.append(params_k_combo)
thin_str_list.append("-plus")
for fun_str in fun_strs:
for params_k_split, thin_str in zip(params_k_split_list, thin_str_list):
evaluate_fun_approx_quality(fun_str=fun_str,
ms=range(7+1), params_p=params_p, params_k_split=params_k_split, params_k_swap=params_k, rep_ids=range(10),
thin_str=thin_str,
delta=0.5,
sample_seed=1234567, thin_seed=9876543,
rerun=rerun, results_dir="results_new",return_val = False)
###Output
_____no_output_____
###Markdown
Hinch experiments
###Code
rerun = True
args.d = int(38)
args.kernel, args.power = "imq", 0.5
args.P = "mcmc" # target P setting; allowed values depend on the feasible arguments in compute_params_p; currently {gauss, mog, mcmc}
args.computepower = True
args.ktplus = True # whether to run results for KT+
args.targetkt = False # whether to run results for Target KT
args.powerkt = False # whether to run results for power KT
file_idx = range(4) # since this code block runs only for Hinch
fun_strs = ['kernel', 'x', 'x^2', 'cif'] #, 'x', 'x^2', 'cif']
for filename in all_mcmc_filenames[file_idx]:
args.filename = filename
d, params_p, var_k = compute_params_p(args)
args.d = d
params_k, params_k_power = compute_params_k(args, var_k, args.computepower, args.power)
if args.ktplus:
assert(args.power is not None)
params_k_combo = dict()
params_k_combo["name"] = "combo_" + params_k["name"] + f"_{args.power}"
params_k_combo["k"] = params_k.copy()
params_k_combo["kpower"] = params_k_power.copy()
params_k_combo["var"] = params_k["var"]
params_k_combo["d"] = args.d
params_k_split_list = []
thin_str_list = []
if args.targetkt:
params_k_split_list.append(params_k)
thin_str_list.append("")
if args.powerkt:
params_k_split_list.append(params_k_power)
thin_str_list.append("")
if args.ktplus:
params_k_split_list.append(params_k_combo)
thin_str_list.append("-plus")
for fun_str in fun_strs:
for params_k_split, thin_str in zip(params_k_split_list, thin_str_list):
evaluate_fun_approx_quality(fun_str=fun_str,
ms=range(7+1), params_p=params_p, params_k_split=params_k_split, params_k_swap=params_k, rep_ids=range(10),
thin_str=thin_str,
delta=0.5,
sample_seed=1234567, thin_seed=9876543,
rerun=rerun, results_dir="results_new",return_val = False)
###Output
_____no_output_____
###Markdown
MOG results with 4 and 8 components
###Code
rerun = False # whether to rerun the integration error computations
args.d = 2 # d
args.kernel = "gauss" # kernel
args.P = "mog" # target P setting; allowed values depend on the feasible arguments in compute_params_p; currently {gauss, mog, mcmc}
args.computepower = True # whether to compute results for power KT (same as root KT when power = 0.5)
args.power = 0.5 # power of the kernel to be used for power KT and KT+
args.ktplus = True # whether to run results for KT+
fun_strs = ['kernel'] # which functions to evaluate
# for mog P the only free parameter is number of components M; everything else is computed in compute_params_p/compute_params_k
Ms = [4, 8]
for M in Ms:
args.M = M
d, params_p, var_k = compute_params_p(args)
params_k, params_k_power = compute_params_k(args, var_k, args.computepower, args.power)
if args.ktplus: # if running KT+, need to define the KT+ kernel called as params_k_combo
assert(args.power is not None)
params_k_combo = dict()
params_k_combo["name"] = "combo_" + params_k["name"] + f"_{args.power}"
params_k_combo["k"] = params_k.copy()
params_k_combo["kpower"] = params_k_power.copy()
params_k_combo["var"] = params_k["var"]
params_k_combo["d"] = args.d
for fun_str in fun_strs: # compute results for each function;
        # if args.ktplus is True then results for KT/power KT/KT+ are all computed, else only for KT
# in either case, results for ST are returned
for params_k_split, thin_str in zip([params_k, params_k_power, params_k_combo], ["", "", "-plus"]) if args.ktplus else zip([params_k], [""]):
fun_diff_p, fun_diff_p_sin, fun_diff_p_st, fun_diff_p_sin_st = evaluate_fun_approx_quality(fun_str=fun_str,
ms=range(7+1), params_p=params_p, params_k_split=params_k_split, params_k_swap=params_k, rep_ids=range(10),
thin_str=thin_str,
delta=0.5,
sample_seed=1234567, thin_seed=9876543,
rerun=rerun, results_dir="results_new",return_val = True)
###Output
_____no_output_____ |
sphinx/_static/Osher_Solution.ipynb | ###Markdown
Osher solution to a scalar Riemann problemImplementation of the general solution to the scalar Riemann problem that is valid also for non-convex fluxes.$$Q(\xi) = \begin{cases} \text{argmin}_{q_l \leq q \leq q_r} [f(q) - \xi q]& \text{if} ~q_l\leq q_r,\\ \text{argmax}_{q_r \leq q \leq q_l} [f(q) - \xi q]& \text{if} ~q_r\leq q_l.\\\end{cases}$$From: S.J. Osher, *Riemann Solvers, the Entropy Condition, and Difference Approximations*, SIAM J. Numer. Anal. 21(1984), pp. 217-235. [doi:10.1137/0721016](http://dx.doi.org/10.1137/0721016)See also Section 16.1.2. of [FVMHP](http://depts.washington.edu/clawpack/book.html).
###Code
%pylab inline
from __future__ import print_function
###Output
_____no_output_____
###Markdown
Select an animation style:
###Code
import animation_tools # local version, rather than from Clawpack
#animation_style = 'ipywidgets'
animation_style = 'JSAnimation'
def osher_solution(f, q_left, q_right, xi_left=None, xi_right=None):
"""
Compute the Riemann solution to a scalar conservation law.
Compute the similarity solution Q(x/t) and also the
(possibly multi-valued) solution determined by tracing
characteristics.
Input:
f = flux function (possibly nonconvex)
q_left, q_right = Riemann data
xi_left, xi_right = optional left and right limits for xi = x/t
in similarity solution.
If not specified, chosen based on the characteristic speeds.
Returns:
xi = array of values between xi_left and xi_right
q = array of corresponding q(xi) values (xi = x/t)
q_char = array of values of q between q_left and q_right
xi_char = xi value for each q_char for use in plotting the
(possibly multi-valued) solution where each q value
propagates at speed f'(q).
"""
q_min = min(q_left, q_right)
q_max = max(q_left, q_right)
qv = linspace(q_min, q_max, 1000)
# define the function qtilde as in (16.7)
if q_left <= q_right:
def qtilde(xi):
Q = empty(xi.shape, dtype=float)
for j,xij in enumerate(xi):
i = argmin(f(qv) - xij*qv)
Q[j] = qv[i]
return Q
else:
def qtilde(xi):
Q = empty(xi.shape, dtype=float)
for j,xij in enumerate(xi):
i = argmax(f(qv) - xij*qv)
Q[j] = qv[i]
return Q
# The rest is just for plotting purposes:
fv = f(qv)
dfdq = diff(fv) / (qv[1] - qv[0])
dfdq_min = min(dfdq)
dfdq_max = max(dfdq)
#print("Mininum characteristic velocity: %g" % dfdq_min)
#print("Maximum characteristic velocity: %g" % dfdq_max)
dfdq_range = dfdq_max - dfdq_min
if xi_left is None:
xi_left = min(0,dfdq_min) - 0.1*dfdq_range
if xi_right is None:
xi_right = max(0,dfdq_max) + 0.1*dfdq_range
q_char = hstack((q_min, 0.5*(qv[:-1] + qv[1:]), q_max))
if q_left <= q_right:
xi_min = xi_left
xi_max = xi_right
else:
xi_min = xi_right
xi_max = xi_left
xi_char = hstack((xi_min, dfdq, xi_max))
xi = linspace(xi_left, xi_right, 1000)
q = qtilde(xi)
return xi, q, q_char, xi_char
###Output
_____no_output_____
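###Markdown
As a quick sanity check of the argmin construction (added for illustration, not one of the cases below), Burgers' flux $f(q) = q^2/2$ with $q_l = -1 < q_r = 1$ should give the rarefaction $Q(\xi) = \xi$ on $[-1,1]$, and the function above reproduces it up to the resolution of the `qv` grid:
###Code
f_burgers = lambda q: 0.5*q**2
xi_b, q_b, _, _ = osher_solution(f_burgers, -1., 1., -2., 2.)
fan = (xi_b > -1) & (xi_b < 1)
print('max |Q(xi) - xi| inside the fan:', abs(q_b[fan] - xi_b[fan]).max())
###Output
_____no_output_____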
###Markdown
Traffic flow. First try a flux with no inflection points, such as the (concave) flux $f(q) = q(1-q)$ from traffic flow (with $u_{max}=1$ in the notation of Chapter 11):
###Code
f = lambda q: q*(1-q)
figure(figsize=(12,5))
subplot(121)
q_left = 0.6
q_right = 0.1
xi, qxi, q_char, xi_char = osher_solution(f, q_left, q_right, -2, 2)
plot(xi_char, q_char,'r')
plot(xi, qxi, 'b', linewidth=2)
ylim(-0.1,1.1)
title('Rarefaction solution')
subplot(122)
q_left = 0.1
q_right = 0.6
xi, qxi, q_char, xi_char = osher_solution(f, q_left, q_right, -2, 2)
plot(xi_char, q_char,'r')
plot(xi, qxi, 'b', linewidth=2)
ylim(-0.1,1.1)
title('Shock solution')
###Output
_____no_output_____
###Markdown
Buckley-Leverett Equation. The Buckley-Leverett equation for two-phase flow is described in Section 16.1.1. It has the non-convex flux function$$ f(q) = \frac{q^2}{q^2 + a(1-q)^2}$$where $a$ is some constant.
###Code
a = 0.5
f_buckley_leverett = lambda q: q**2 / (q**2 + a*(1-q)**2)
q_left = 1.
q_right = 0.
###Output
_____no_output_____
###Markdown
Plot the flux and its derivative
###Code
qvals = linspace(q_right, q_left, 200)
fvals = f_buckley_leverett(qvals)
dfdq = diff(fvals) / (qvals[1]-qvals[0]) # approximate df/dq
qmid = 0.5*(qvals[:-1] + qvals[1:]) # midpoints for plotting dfdq
figure(figsize=(10,4))
subplot(131)
plot(qvals,fvals)
xlabel('q')
ylabel('f(q)')
title('flux function f(q)')
subplot(132)
plot(qmid, dfdq)
xlabel('q')
ylabel('df/dq')
title('characteristic speed df/dq')
subplot(133)
plot(dfdq, qmid, 'r')
xlabel('df/dq')
ylabel('q')
title('q vs. df/dq')
subplots_adjust(left=0.)
###Output
_____no_output_____
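###Markdown
For reference, the finite-difference slope above can be compared with the closed-form derivative $f'(q) = 2aq(1-q)/\left(q^2 + a(1-q)^2\right)^2$ (a small check added for illustration, using the same $a$ as above); the discrepancy is just the finite-difference error:
###Code
dfdq_exact = lambda q: 2*a*q*(1-q) / (q**2 + a*(1-q)**2)**2
print('max |finite difference - exact|:', abs(dfdq - dfdq_exact(qmid)).max())
###Output
_____no_output_____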
###Markdown
Note that the third plot above shows $q$ on the vertical axis and $df/dq$ on the horizontal axis (it's the middle figure turned sideways). You can think of this as showing the characteristic velocity for each point on a jump discontinuity from $q=0$ to $q=1$, and hence a triple valued solution of the Riemann problem at $t=1$ when each $q$ value has propagated this far. Below we show this together with the correct solution to the Riemann problem, with a shock wave inserted (as computed using the Osher solution defined above). Note that for this non-convex flux function the Riemann solution consists partly of a rarefaction wave together with a shock wave.
###Code
xi, qxi, q_char, xi_char = osher_solution(f_buckley_leverett, q_left, q_right, -2, 2)
plot(xi_char, q_char,'r')
plot(xi, qxi, 'b', linewidth=2)
ylim(-0.1,1.1)
###Output
_____no_output_____
###Markdown
Create an animation:
###Code
figs = []
# adjust first and last elements in xi arrays
# so things plot nicely for t \approx 0:
xi[0] *= 1e6; xi[-1] *= 1e6
xi_char[0] *= 1e6; xi_char[-1] *= 1e6
times = linspace(0,1,11)
times[0] = 1e-3 # adjust first time to be >0
for t in times:
fig = figure(figsize=(6,3))
plot(xi_char*t,q_char,'r')
plot(xi*t, qxi, 'b', linewidth=2)
xlim(-2, 2.5)
ylim(-0.1,1.1)
figs.append(fig)
close(fig)
anim = animation_tools.animate_figs(figs, style=animation_style, figsize=(6,3))
display(anim)
###Output
_____no_output_____
###Markdown
Sinusoidal flux. As another test, the flux function $f(q) = \sin(q)$ is used in Example 16.1 to produce Figure 16.4, reproduced below.
###Code
f_sin = lambda q: sin(q)
q_left = pi/4.
q_right = 15*pi/4.
xi, qxi, q_char, xi_char = osher_solution(f_sin, q_left, q_right, -1.5, 1.5)
plot(xi_char, q_char,'r')
plot(xi, qxi, 'b', linewidth=2)
ylim(0.,14.)
###Output
_____no_output_____
###Markdown
Make an animation
###Code
figs = []
# adjust first and last elements in xi arrays
# so things plot nicely for t \approx 0:
xi[0] *= 1e6; xi[-1] *= 1e6
xi_char[0] *= 1e6; xi_char[-1] *= 1e6
times = linspace(0,1,11)
times[0] = 1e-3 # adjust first time to be >0
for t in times:
fig = figure(figsize=(6,3))
plot(xi_char*t,q_char,'r')
plot(xi*t, qxi, 'b', linewidth=2)
xlim(-1.5, 1.5)
ylim(0.,14.)
figs.append(fig)
close(fig)
anim = animation_tools.animate_figs(figs, style=animation_style, figsize=(6,3))
display(anim)
###Output
_____no_output_____
###Markdown
Yet another example
###Code
f = lambda q: q*sin(q)
q_left = 2.
q_right = 20.
xi, qxi, q_char, xi_char = osher_solution(f, q_left, q_right)
plot(xi_char,q_char,'r')
plot(xi, qxi, 'b', linewidth=2)
ylim(0,25)
###Output
_____no_output_____ |
L13-cnns-part2/code/nin-cifar10.ipynb | ###Markdown
STAT 453: Deep Learning (Spring 2020)Instructor: Sebastian Raschka ([email protected])- Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2020/ - GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss20 Network in Network CIFAR-10 Classifier based on - Lin, Min, Qiang Chen, and Shuicheng Yan. "Network in network." arXiv preprint arXiv:1312.4400 (2013). Imports
###Code
import os
import time
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torch.utils.data.dataset import Subset
from torchvision import datasets
from torchvision import transforms
import matplotlib.pyplot as plt
from PIL import Image
if torch.cuda.is_available():
torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
Model Settings
###Code
##########################
### SETTINGS
##########################
# Hyperparameters
RANDOM_SEED = 1
LEARNING_RATE = 0.001
BATCH_SIZE = 256
NUM_EPOCHS = 50
# Architecture
NUM_CLASSES = 10
# Other
DEVICE = "cuda:0"
GRAYSCALE = False
##########################
### CIFAR-10 Dataset
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_indices = torch.arange(0, 49000)
valid_indices = torch.arange(49000, 50000)
train_and_valid = datasets.CIFAR10(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_dataset = Subset(train_and_valid, train_indices)
valid_dataset = Subset(train_and_valid, valid_indices)
test_dataset = datasets.CIFAR10(root='data',
train=False,
transform=transforms.ToTensor())
#####################################################
### Data Loaders
#####################################################
train_loader = DataLoader(dataset=train_dataset,
batch_size=BATCH_SIZE,
num_workers=8,
shuffle=True)
valid_loader = DataLoader(dataset=valid_dataset,
batch_size=BATCH_SIZE,
num_workers=8,
shuffle=False)
test_loader = DataLoader(dataset=test_dataset,
batch_size=BATCH_SIZE,
num_workers=8,
shuffle=False)
#####################################################
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
for images, labels in test_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
for images, labels in valid_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
##########################
### MODEL
##########################
class NiN(nn.Module):
def __init__(self, num_classes):
super(NiN, self).__init__()
self.num_classes = num_classes
self.classifier = nn.Sequential(
nn.Conv2d(3, 192, kernel_size=5, stride=1, padding=2),
nn.ReLU(inplace=True),
nn.Conv2d(192, 160, kernel_size=1, stride=1, padding=0),
nn.ReLU(inplace=True),
nn.Conv2d(160, 96, kernel_size=1, stride=1, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
nn.Dropout(0.5),
nn.Conv2d(96, 192, kernel_size=5, stride=1, padding=2),
nn.ReLU(inplace=True),
nn.Conv2d(192, 192, kernel_size=1, stride=1, padding=0),
nn.ReLU(inplace=True),
nn.Conv2d(192, 192, kernel_size=1, stride=1, padding=0),
nn.ReLU(inplace=True),
nn.AvgPool2d(kernel_size=3, stride=2, padding=1),
nn.Dropout(0.5),
nn.Conv2d(192, 192, kernel_size=3, stride=1, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(192, 192, kernel_size=1, stride=1, padding=0),
nn.ReLU(inplace=True),
nn.Conv2d(192, 10, kernel_size=1, stride=1, padding=0),
nn.ReLU(inplace=True),
nn.AvgPool2d(kernel_size=8, stride=1, padding=0),
)
def forward(self, x):
x = self.classifier(x)
logits = x.view(x.size(0), self.num_classes)
probas = torch.softmax(logits, dim=1)
return logits, probas
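# Quick shape sanity check (illustrative addition, not part of the original training):
# the two stride-2 pools reduce the 32x32 input to 16x16 and then 8x8, so the final
# 8x8 average pool collapses the spatial grid and the network emits one value per
# class channel, i.e. logits of shape (batch_size, NUM_CLASSES).
_check_logits, _check_probas = NiN(NUM_CLASSES)(torch.randn(2, 3, 32, 32))
assert _check_logits.shape == (2, NUM_CLASSES)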
torch.manual_seed(RANDOM_SEED)
model = NiN(NUM_CLASSES)
model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
def compute_accuracy(model, data_loader, device):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
start_time = time.time()
# use random seed for reproducibility (here batch shuffling)
torch.manual_seed(RANDOM_SEED)
for epoch in range(NUM_EPOCHS):
model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
### PREPARE MINIBATCH
features = features.to(DEVICE)
targets = targets.to(DEVICE)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 120:
print (f'Epoch: {epoch+1:03d}/{NUM_EPOCHS:03d} | '
f'Batch {batch_idx:03d}/{len(train_loader):03d} |'
f' Cost: {cost:.4f}')
# no need to build the computation graph for backprop when computing accuracy
with torch.set_grad_enabled(False):
train_acc = compute_accuracy(model, train_loader, device=DEVICE)
valid_acc = compute_accuracy(model, valid_loader, device=DEVICE)
print(f'Epoch: {epoch+1:03d}/{NUM_EPOCHS:03d} Train Acc.: {train_acc:.2f}%'
f' | Validation Acc.: {valid_acc:.2f}%')
elapsed = (time.time() - start_time)/60
print(f'Time elapsed: {elapsed:.2f} min')
elapsed = (time.time() - start_time)/60
print(f'Total Training Time: {elapsed:.2f} min')
###Output
Epoch: 001/050 | Batch 000/192 | Cost: 2.3067
Epoch: 001/050 | Batch 120/192 | Cost: 2.1674
Epoch: 001/050 Train Acc.: 22.48% | Validation Acc.: 20.20%
Time elapsed: 0.34 min
Epoch: 002/050 | Batch 000/192 | Cost: 2.1179
Epoch: 002/050 | Batch 120/192 | Cost: 2.1106
Epoch: 002/050 Train Acc.: 29.32% | Validation Acc.: 30.50%
Time elapsed: 0.69 min
Epoch: 003/050 | Batch 000/192 | Cost: 2.0580
Epoch: 003/050 | Batch 120/192 | Cost: 1.8711
Epoch: 003/050 Train Acc.: 33.26% | Validation Acc.: 33.10%
Time elapsed: 1.05 min
Epoch: 004/050 | Batch 000/192 | Cost: 1.9344
Epoch: 004/050 | Batch 120/192 | Cost: 1.8512
Epoch: 004/050 Train Acc.: 34.02% | Validation Acc.: 31.60%
Time elapsed: 1.42 min
Epoch: 005/050 | Batch 000/192 | Cost: 1.8957
Epoch: 005/050 | Batch 120/192 | Cost: 1.8384
Epoch: 005/050 Train Acc.: 37.51% | Validation Acc.: 36.20%
Time elapsed: 1.78 min
Epoch: 006/050 | Batch 000/192 | Cost: 1.8078
Epoch: 006/050 | Batch 120/192 | Cost: 1.8425
Epoch: 006/050 Train Acc.: 38.72% | Validation Acc.: 38.10%
Time elapsed: 2.15 min
Epoch: 007/050 | Batch 000/192 | Cost: 1.7487
Epoch: 007/050 | Batch 120/192 | Cost: 1.6374
Epoch: 007/050 Train Acc.: 40.07% | Validation Acc.: 39.20%
Time elapsed: 2.51 min
Epoch: 008/050 | Batch 000/192 | Cost: 1.6959
Epoch: 008/050 | Batch 120/192 | Cost: 1.7574
Epoch: 008/050 Train Acc.: 42.16% | Validation Acc.: 40.00%
Time elapsed: 2.87 min
Epoch: 009/050 | Batch 000/192 | Cost: 1.7303
Epoch: 009/050 | Batch 120/192 | Cost: 1.6021
Epoch: 009/050 Train Acc.: 43.44% | Validation Acc.: 43.10%
Time elapsed: 3.24 min
Epoch: 010/050 | Batch 000/192 | Cost: 1.4960
Epoch: 010/050 | Batch 120/192 | Cost: 1.6851
Epoch: 010/050 Train Acc.: 42.98% | Validation Acc.: 40.60%
Time elapsed: 3.60 min
Epoch: 011/050 | Batch 000/192 | Cost: 1.6222
Epoch: 011/050 | Batch 120/192 | Cost: 1.6066
Epoch: 011/050 Train Acc.: 44.05% | Validation Acc.: 42.90%
Time elapsed: 3.96 min
Epoch: 012/050 | Batch 000/192 | Cost: 1.4728
Epoch: 012/050 | Batch 120/192 | Cost: 1.5312
Epoch: 012/050 Train Acc.: 45.42% | Validation Acc.: 43.80%
Time elapsed: 4.33 min
Epoch: 013/050 | Batch 000/192 | Cost: 1.5224
Epoch: 013/050 | Batch 120/192 | Cost: 1.3618
Epoch: 013/050 Train Acc.: 46.01% | Validation Acc.: 44.00%
Time elapsed: 4.69 min
Epoch: 014/050 | Batch 000/192 | Cost: 1.5263
Epoch: 014/050 | Batch 120/192 | Cost: 1.5477
Epoch: 014/050 Train Acc.: 46.58% | Validation Acc.: 44.40%
Time elapsed: 5.06 min
Epoch: 015/050 | Batch 000/192 | Cost: 1.4636
Epoch: 015/050 | Batch 120/192 | Cost: 1.3477
Epoch: 015/050 Train Acc.: 48.28% | Validation Acc.: 45.60%
Time elapsed: 5.42 min
Epoch: 016/050 | Batch 000/192 | Cost: 1.3972
Epoch: 016/050 | Batch 120/192 | Cost: 1.3104
Epoch: 016/050 Train Acc.: 48.62% | Validation Acc.: 45.40%
Time elapsed: 5.79 min
Epoch: 017/050 | Batch 000/192 | Cost: 1.3821
Epoch: 017/050 | Batch 120/192 | Cost: 1.4975
Epoch: 017/050 Train Acc.: 47.96% | Validation Acc.: 45.10%
Time elapsed: 6.15 min
Epoch: 018/050 | Batch 000/192 | Cost: 1.5004
Epoch: 018/050 | Batch 120/192 | Cost: 1.3488
Epoch: 018/050 Train Acc.: 47.51% | Validation Acc.: 43.80%
Time elapsed: 6.51 min
Epoch: 019/050 | Batch 000/192 | Cost: 1.5946
Epoch: 019/050 | Batch 120/192 | Cost: 1.5711
Epoch: 019/050 Train Acc.: 49.31% | Validation Acc.: 45.50%
Time elapsed: 6.88 min
Epoch: 020/050 | Batch 000/192 | Cost: 1.3957
Epoch: 020/050 | Batch 120/192 | Cost: 1.4255
Epoch: 020/050 Train Acc.: 50.20% | Validation Acc.: 46.70%
Time elapsed: 7.25 min
Epoch: 021/050 | Batch 000/192 | Cost: 1.3301
Epoch: 021/050 | Batch 120/192 | Cost: 1.3862
Epoch: 021/050 Train Acc.: 49.89% | Validation Acc.: 45.20%
Time elapsed: 7.61 min
Epoch: 022/050 | Batch 000/192 | Cost: 1.3356
Epoch: 022/050 | Batch 120/192 | Cost: 1.3525
Epoch: 022/050 Train Acc.: 50.92% | Validation Acc.: 46.40%
Time elapsed: 7.97 min
Epoch: 023/050 | Batch 000/192 | Cost: 1.2572
Epoch: 023/050 | Batch 120/192 | Cost: 1.4804
Epoch: 023/050 Train Acc.: 51.61% | Validation Acc.: 47.30%
Time elapsed: 8.33 min
Epoch: 024/050 | Batch 000/192 | Cost: 1.4254
Epoch: 024/050 | Batch 120/192 | Cost: 1.3272
Epoch: 024/050 Train Acc.: 50.03% | Validation Acc.: 45.40%
Time elapsed: 8.69 min
Epoch: 025/050 | Batch 000/192 | Cost: 1.3821
Epoch: 025/050 | Batch 120/192 | Cost: 1.5024
Epoch: 025/050 Train Acc.: 51.70% | Validation Acc.: 47.60%
Time elapsed: 9.05 min
Epoch: 026/050 | Batch 000/192 | Cost: 1.3543
Epoch: 026/050 | Batch 120/192 | Cost: 1.2453
Epoch: 026/050 Train Acc.: 51.06% | Validation Acc.: 46.40%
Time elapsed: 9.42 min
Epoch: 027/050 | Batch 000/192 | Cost: 1.4060
Epoch: 027/050 | Batch 120/192 | Cost: 1.2686
Epoch: 027/050 Train Acc.: 52.37% | Validation Acc.: 47.20%
Time elapsed: 9.78 min
Epoch: 028/050 | Batch 000/192 | Cost: 1.2657
Epoch: 028/050 | Batch 120/192 | Cost: 1.4804
Epoch: 028/050 Train Acc.: 52.10% | Validation Acc.: 47.10%
Time elapsed: 10.14 min
Epoch: 029/050 | Batch 000/192 | Cost: 1.4005
Epoch: 029/050 | Batch 120/192 | Cost: 1.3050
Epoch: 029/050 Train Acc.: 51.70% | Validation Acc.: 46.60%
Time elapsed: 10.51 min
Epoch: 030/050 | Batch 000/192 | Cost: 1.3317
Epoch: 030/050 | Batch 120/192 | Cost: 1.2712
Epoch: 030/050 Train Acc.: 52.43% | Validation Acc.: 45.60%
Time elapsed: 10.87 min
Epoch: 031/050 | Batch 000/192 | Cost: 1.2810
Epoch: 031/050 | Batch 120/192 | Cost: 1.2746
Epoch: 031/050 Train Acc.: 52.69% | Validation Acc.: 46.10%
Time elapsed: 11.23 min
Epoch: 032/050 | Batch 000/192 | Cost: 1.2820
Epoch: 032/050 | Batch 120/192 | Cost: 1.2992
Epoch: 032/050 Train Acc.: 52.93% | Validation Acc.: 46.20%
Time elapsed: 11.59 min
Epoch: 033/050 | Batch 000/192 | Cost: 1.3235
Epoch: 033/050 | Batch 120/192 | Cost: 1.2060
Epoch: 033/050 Train Acc.: 53.87% | Validation Acc.: 47.50%
Time elapsed: 11.96 min
Epoch: 034/050 | Batch 000/192 | Cost: 1.4052
Epoch: 034/050 | Batch 120/192 | Cost: 1.3133
Epoch: 034/050 Train Acc.: 53.10% | Validation Acc.: 47.40%
Time elapsed: 12.32 min
Epoch: 035/050 | Batch 000/192 | Cost: 1.2642
Epoch: 035/050 | Batch 120/192 | Cost: 1.2783
Epoch: 035/050 Train Acc.: 54.01% | Validation Acc.: 47.40%
Time elapsed: 12.69 min
Epoch: 036/050 | Batch 000/192 | Cost: 1.2821
Epoch: 036/050 | Batch 120/192 | Cost: 1.2368
Epoch: 036/050 Train Acc.: 53.62% | Validation Acc.: 47.60%
Time elapsed: 13.05 min
Epoch: 037/050 | Batch 000/192 | Cost: 1.2431
Epoch: 037/050 | Batch 120/192 | Cost: 1.2036
Epoch: 037/050 Train Acc.: 54.39% | Validation Acc.: 47.40%
Time elapsed: 13.41 min
Epoch: 038/050 | Batch 000/192 | Cost: 1.1775
Epoch: 038/050 | Batch 120/192 | Cost: 1.2894
Epoch: 038/050 Train Acc.: 54.19% | Validation Acc.: 47.20%
Time elapsed: 13.78 min
Epoch: 039/050 | Batch 000/192 | Cost: 1.3535
Epoch: 039/050 | Batch 120/192 | Cost: 1.2592
Epoch: 039/050 Train Acc.: 54.40% | Validation Acc.: 48.10%
Time elapsed: 14.14 min
Epoch: 040/050 | Batch 000/192 | Cost: 1.0722
Epoch: 040/050 | Batch 120/192 | Cost: 1.3128
Epoch: 040/050 Train Acc.: 54.31% | Validation Acc.: 48.00%
Time elapsed: 14.51 min
Epoch: 041/050 | Batch 000/192 | Cost: 1.2739
Epoch: 041/050 | Batch 120/192 | Cost: 1.2073
Epoch: 041/050 Train Acc.: 54.85% | Validation Acc.: 48.40%
Time elapsed: 14.87 min
Epoch: 042/050 | Batch 000/192 | Cost: 1.1646
Epoch: 042/050 | Batch 120/192 | Cost: 1.0915
Epoch: 042/050 Train Acc.: 54.34% | Validation Acc.: 46.70%
Time elapsed: 15.23 min
Epoch: 043/050 | Batch 000/192 | Cost: 1.2877
Epoch: 043/050 | Batch 120/192 | Cost: 1.2527
Epoch: 043/050 Train Acc.: 54.74% | Validation Acc.: 47.60%
Time elapsed: 15.58 min
Epoch: 044/050 | Batch 000/192 | Cost: 1.2332
Epoch: 044/050 | Batch 120/192 | Cost: 1.3329
Epoch: 044/050 Train Acc.: 54.99% | Validation Acc.: 48.50%
Time elapsed: 15.95 min
Epoch: 045/050 | Batch 000/192 | Cost: 1.1682
Epoch: 045/050 | Batch 120/192 | Cost: 1.1306
Epoch: 045/050 Train Acc.: 54.71% | Validation Acc.: 46.90%
Time elapsed: 16.31 min
Epoch: 046/050 | Batch 000/192 | Cost: 1.2754
Epoch: 046/050 | Batch 120/192 | Cost: 1.2868
Epoch: 046/050 Train Acc.: 54.07% | Validation Acc.: 48.20%
Time elapsed: 16.66 min
Epoch: 047/050 | Batch 000/192 | Cost: 1.2001
Epoch: 047/050 | Batch 120/192 | Cost: 1.0525
Epoch: 047/050 Train Acc.: 55.30% | Validation Acc.: 47.60%
Time elapsed: 17.03 min
Epoch: 048/050 | Batch 000/192 | Cost: 1.0808
Epoch: 048/050 | Batch 120/192 | Cost: 1.2474
Epoch: 048/050 Train Acc.: 55.39% | Validation Acc.: 47.80%
Time elapsed: 17.38 min
Epoch: 049/050 | Batch 000/192 | Cost: 1.0728
Epoch: 049/050 | Batch 120/192 | Cost: 1.2751
Epoch: 049/050 Train Acc.: 54.32% | Validation Acc.: 46.70%
Time elapsed: 17.75 min
Epoch: 050/050 | Batch 000/192 | Cost: 1.2246
Epoch: 050/050 | Batch 120/192 | Cost: 1.1755
Epoch: 050/050 Train Acc.: 55.22% | Validation Acc.: 48.00%
Time elapsed: 18.11 min
Total Training Time: 18.11 min
|
docs/tutorials/api.ipynb | ###Markdown
API - Quick start Python interfaceLet's take a look at the `kissim` quick start API to encode a set of structures (from the [KLIFS](https://klifs.net/) database) and perform an all-against-all comparison. 
###Code
from kissim.api import encode, compare
# Load path to test data
from kissim.dataset.test import PATH as PATH_TEST_DATA
###Output
_____no_output_____
###Markdown
Encode structures into fingerprintsThe `encode` function is a quick start API to generate fingerprints in bulk based on structure KLIFS IDs. Input parameters are:- `structure_klifs_ids`: Structure KLIFS IDs.- `fingerprints_json_filepath`: (Optionally) Path to output json file containing fingerprints.- `local_klifs_download_path` : (Optionally) Set path local KLIFS download or - if not set - fetch data from KLIFS database.- `n_cores`: (Optionally) Number of cores used to generate fingerprints.The return object is of type `FingerprintGenerator`.
###Code
# flake8-noqa-cell
encode?
###Output
_____no_output_____
###Markdown
Run `encode` function
###Code
structure_klifs_ids = [109, 118, 12347, 1641, 3833, 9122]
fingerprint_generator = encode(
structure_klifs_ids=structure_klifs_ids,
fingerprints_filepath=None,
n_cores=2,
local_klifs_download_path=PATH_TEST_DATA / "KLIFS_download",
)
###Output
_____no_output_____
###Markdown
Inspect output: `FingerprintGenerator`
###Code
print(f"Number of structures (input): {len(structure_klifs_ids)}")
print(f"Number of fingerprints (output): {len(fingerprint_generator.data.keys())}")
fingerprint_generator
fingerprint_generator.data
###Output
_____no_output_____
###Markdown
Find more information about the `FingerprintGenerator` object [here](https://kissim.readthedocs.io/en/latest/tutorials/encoding.html). Compare fingerprintsThe `compare` function is a quick start API to perform a pairwise all-against-all (bulk) comparison for a set of fingerprints. Input parameters are:- `fingerprint_generator`: Fingerprints.- `output_path`: (Optionally) Path to output folder for distances json files.- `feature_weights`: (Optionally) Feature weights used to calculate the final fingerprint distance.- `n_cores`: (Optionally) Number of cores used to generate distances.The return objects are of type `FeatureDistancesGenerator` and `FingerprintDistanceGenerator`.
###Code
# flake8-noqa-cell
compare?
###Output
_____no_output_____
###Markdown
Run `compare` function
###Code
feature_distances_generator, fingerprint_distance_generator = compare(
fingerprint_generator=fingerprint_generator,
output_path=None,
feature_weights=None,
n_cores=2,
)
###Output
_____no_output_____
###Markdown
For final fingerprint distances, please refer to the `FingerprintDistanceGenerator` object. Inspect output: `FingerprintDistanceGenerator`
###Code
print(f"Number of fingerprints (input): {len(fingerprint_generator.data)}")
print(f"Number of pairwise comparisons (output): {len(fingerprint_distance_generator.data)}")
fingerprint_distance_generator
fingerprint_distance_generator.data
###Output
_____no_output_____
###Markdown
Get the structure distance matrix.
###Code
fingerprint_distance_generator.structure_distance_matrix()
###Output
_____no_output_____
###Markdown
Map structure pairs to kinase pairs (example: here use structure pair with minimum distance as representative for kinase pair).
###Code
fingerprint_distance_generator.kinase_distance_matrix(by="minimum")
###Output
_____no_output_____
###Markdown
Inspect output: `FeatureDistancesGenerator`
###Code
print(f"Number of fingerprints (input): {len(fingerprint_generator.data.keys())}")
print(f"Number of pairwise comparisons (output): {len(feature_distances_generator.data)}")
feature_distances_generator
feature_distances_generator.data
###Output
_____no_output_____
###Markdown
API - Quick start Python interfaceLet's take a look at the `kissim` quick start API to encode a set of structures (from the [KLIFS](https://klifs.net/) database) and perform an all-against-all comparison. 
###Code
from pathlib import Path
from kissim.api import encode, compare
# Path to this notebook
HERE = Path(_dh[-1]) # noqa: F821
###Output
_____no_output_____
###Markdown
Encode structures into fingerprintsThe `encode` function is a quick start API to generate fingerprints in bulk based on structure KLIFS IDs. Input parameters are:- `structure_klifs_ids`: Structure KLIFS IDs.- `fingerprints_json_filepath`: (Optionally) Path to output json file containing fingerprints.- `local_klifs_download_path` : (Optionally) Set path local KLIFS download or - if not set - fetch data from KLIFS database.- `n_cores`: (Optionally) Number of cores used to generate fingerprints.The return object is of type `FingerprintGenerator`.
###Code
# flake8-noqa-cell
encode?
###Output
_____no_output_____
###Markdown
Run `encode` function
###Code
structure_klifs_ids = [109, 118, 12347, 1641, 3833, 9122]
fingerprint_generator = encode(
structure_klifs_ids=structure_klifs_ids,
fingerprints_filepath=None,
n_cores=2,
local_klifs_download_path=HERE / "../../kissim/tests/data/KLIFS_download/",
)
###Output
_____no_output_____
###Markdown
Inspect output: `FingerprintGenerator`
###Code
print(f"Number of structures (input): {len(structure_klifs_ids)}")
print(f"Number of fingerprints (output): {len(fingerprint_generator.data.keys())}")
fingerprint_generator
fingerprint_generator.data
###Output
_____no_output_____
###Markdown
Find more information about the `FingerprintGenerator` object [here](https://kissim.readthedocs.io/en/latest/tutorials/encoding.html). Compare fingerprintsThe `compare` function is a quick start API to perform a pairwise all-against-all (bulk) comparison for a set of fingerprints. Input parameters are:- `fingerprint_generator`: Fingerprints.- `output_path`: (Optionally) Path to output folder for distances json files.- `feature_weights`: (Optionally) Feature weights used to calculate the final fingerprint distance.- `n_cores`: (Optionally) Number of cores used to generate distances.The return objects are of type `FeatureDistancesGenerator` and `FingerprintDistanceGenerator`.
###Code
# flake8-noqa-cell
compare?
###Output
_____no_output_____
###Markdown
Run `compare` function
###Code
feature_distances_generator, fingerprint_distance_generator = compare(
fingerprint_generator=fingerprint_generator,
output_path=None,
feature_weights=None,
n_cores=2,
)
###Output
_____no_output_____
###Markdown
For final fingerprint distances, please refer to the `FingerprintDistanceGenerator` object. Inspect output: `FingerprintDistanceGenerator`
###Code
print(f"Number of fingerprints (input): {len(fingerprint_generator.data)}")
print(f"Number of pairwise comparisons (output): {len(fingerprint_distance_generator.data)}")
fingerprint_distance_generator
fingerprint_distance_generator.data
###Output
_____no_output_____
###Markdown
Get the structure distance matrix.
###Code
fingerprint_distance_generator.structure_distance_matrix()
###Output
_____no_output_____
###Markdown
Map structure pairs to kinase pairs (example: here use structure pair with minimum distance as representative for kinase pair).
###Code
fingerprint_distance_generator.kinase_distance_matrix(by="minimum")
###Output
_____no_output_____
###Markdown
Inspect output: `FeatureDistancesGenerator`
###Code
print(f"Number of fingerprints (input): {len(fingerprint_generator.data.keys())}")
print(f"Number of pairwise comparisons (output): {len(feature_distances_generator.data)}")
feature_distances_generator
feature_distances_generator.data
###Output
_____no_output_____ |
Spring_2022_DeCal_Material/Homework/Week6/.ipynb_checkpoints/HW_7_Solutions-checkpoint.ipynb | ###Markdown
Homework 6 This homework is all about useful external libraries that are most common to use in astronomy research. The two most important libraries apart from scipy, numpy, and matplotlib are **astropy** and **pandas**. We explore the basics of these super versatile libraries. Astropy (40 Points) CRAZY UNIT CONVERSION!!! (20 Points) As you take more astronomy classes, you will face more and more unit conversion problems - they are annoying. That's why astropy.units is very helpful. Let's do some practices here.The documentations for astropy.units and astropy.constants will very helpful to you.astropy.units documentation: https://docs.astropy.org/en/stable/units/astropy.constants documentation: https://docs.astropy.org/en/stable/constants/NOTE: In this problem, you MUST use astropy.constants when doing calculations involving fundamental constants. Also, you cannot look up values such as solar mass, earth mass, etc. Use the two packages solely. Problem 1) Speed of light (5 Points)What is the speed of light ($c$) in $pc/yr$?
###Code
### Write your code here
import astropy.constants as cons
import astropy.units as u
cons.c.to(u.pc / u.yr)
###Output
_____no_output_____
###Markdown
Problem 2) Newton's 2nd Law (5 Points)Recall that NII states $$F =ma\,\,.$$Say a force of $97650134N$ is exerted on an object having a mass of $0.0071$ earth mass. What is the acceleration of the object in $AU/days^2$?
###Code
### Write your code here
a = (97650134 * u.N) / (0.0071*u.kg) #a = F/m
a.to(u.AU / (u.d)**2)
###Output
_____no_output_____
###Markdown
Problem 3) Newton's Universal Law of Gravitation (10 Points)Recall that the gravitational acceleration due to an object with mass $m$ at a distance $r$ is given by $$a_g = \frac{Gm}{r^2}\,\,.$$What is the gravitational acceleration due to a planet of $3.1415926$ Jupiter-mass at a distance of $1.523AU$? Give your answer in $pc/yr^2$.
###Code
### Write your code here
a = cons.G*(3.1415926*cons.M_jup)/(1.523*u.AU)**2
a.to(u.pc / (u.yr)**2)
###Output
_____no_output_____
###Markdown
Visualising Coordinate Transformation (20 Points) We introduced coordinate transformation using astropy, but maybe that was too astract to you, so let's use this problem as a way for you to visualise this process. Each part will be worth **5 Points**There are several things you need to do:1. Open up the FITS file named 'clusters.fits' (this part of the code is written for you already)2. Read it as a table using astropy.table (you will have to import the packages you need and write your own code from hereafter)3. Plot the positions of all the objects in the table, COLOUR-CODED by their types (there is a column named 'CLASS'), with RA on the x-axis and DEC on the y-axis. You should see a curved trend with a huge dip in the middle.4. Carry out a coordinate transformation from the ICRS coordinates to the galactic coordinates - there is a column named "DISTANCE" which you will need. 5. Now plot the position of all the objects in the galactic coordinates, with $\ell$ on the x-axis and $b$ on the y-axis; again, colour-code everything by their "CLASS". If you did everything correctly, you should see that the curve in the previous plot resembles a horizontal band. 6. Answer this question: What is that curved band in the first plot and the horizontal band in the second plot? Does it make sense that the band got straightened up? Why?Note: When you make your plots, please include the axis labels with units and the legend.
###Code
from astropy.io import fits
#You will have to import other packages to complete this problem
###IMPORT YOUR OTHER PACKAGES HERE
from astropy.table import Table
from astropy.coordinates import SkyCoord
import matplotlib.pyplot as plt
import numpy as np
fits_file = fits.open('clusters.fits')
#To read the fits file as a table, simply run the line: Table.read(fits_file)
#Although you will have to write up your code to get that Table function
### YOUR CODE HERE
data = Table.read(fits_file)
CLASS = np.array(data['CLASS'])
ra_data = np.array(data['RA'])
dec_data = np.array(data['DEC'])
print(np.unique(CLASS))
RA1,DEC1 = [], []
RA2,DEC2 = [], []
RA3,DEC3 = [], []
RA4,DEC4 = [], []
RA5,DEC5 = [], []
for i in range(len(ra_data)):
if CLASS[i] == ' NEBULA\n':
RA1.append(ra_data[i])
DEC1.append(dec_data[i])
elif CLASS[i] == ' UNIDENTIFIED\n':
RA2.append(ra_data[i])
DEC2.append(dec_data[i])
elif CLASS[i] == ' OPEN STAR CLUSTER\n':
RA3.append(ra_data[i])
DEC3.append(dec_data[i])
elif CLASS[i] == ' OB ASSOCIATION/HII REGION\n':
RA4.append(ra_data[i])
DEC4.append(dec_data[i])
else:
RA5.append(ra_data[i])
DEC5.append(dec_data[i])
plt.figure(figsize=(12,8))
plt.scatter(RA1,DEC1,s = 10, c = 'red', label = 'Nebula')
plt.scatter(RA2,DEC2,s = 10, c = 'pink', label = 'Unidentified')
plt.scatter(RA3,DEC3,s = 3, c = 'lightblue', label = 'Open Star Clusters')
plt.scatter(RA4,DEC4,s = 10, c = 'orange', label = 'OB Association/Hii Region')
plt.scatter(RA5,DEC5,s = 10, c = 'green', label = 'Extragalactic')
plt.xlabel('RA in Degrees')
plt.ylabel('DEC in Degrees')
plt.legend()
plt.title('ICRS Coordinates')
plt.show()
#################################################################
#################################################################
dist = np.array(data['DISTANCE'])
icrs = SkyCoord(ra=ra_data*u.deg, dec=dec_data*u.deg)
GAL = icrs.transform_to('galactic')
L_data = np.array(GAL.l)
B_data = np.array(GAL.b)
L1,B1 = [], []
L2,B2 = [], []
L3,B3 = [], []
L4,B4 = [], []
L5,B5 = [], []
for i in range(len(ra_data)):
if CLASS[i] == ' NEBULA\n':
L1.append(L_data[i])
B1.append(B_data[i])
elif CLASS[i] == ' UNIDENTIFIED\n':
L2.append(L_data[i])
B2.append(B_data[i])
elif CLASS[i] == ' OPEN STAR CLUSTER\n':
L3.append(L_data[i])
B3.append(B_data[i])
elif CLASS[i] == ' OB ASSOCIATION/HII REGION\n':
L4.append(L_data[i])
B4.append(B_data[i])
else:
L5.append(L_data[i])
B5.append(B_data[i])
plt.figure(figsize=(12,8))
plt.scatter(L1,B1 , s = 10, c = 'red', label = 'Nebula')
plt.scatter(L2,B2 , s = 10, c = 'pink', label = 'Unidentified')
plt.scatter(L3,B3 , s = 3, c = 'lightblue', label = 'Open Star Clusters')
plt.scatter(L4,B4 , s = 10, c = 'orange', label = 'OB Association/Hii Region')
plt.scatter(L5,B5 , s = 10, c = 'green', label = 'Extragalactic')
plt.xlabel('l in Degrees')
plt.ylabel('b in Degrees')
plt.title('Galactic Coordinates')
plt.legend()
plt.show()
###Output
[' NEBULA\n'
' UNIDENTIFIED\n'
' OPEN STAR CLUSTER\n'
' OB ASSOCIATION/HII REGION\n'
'GLOBULAR CLUSTER EXTENDED GALACTIC OR EXTRAGALACTIC\n']
###Markdown
(DOUBLE CLICK HERE TO ANSWER QUESTION 6):YOUR ANSWER: Pandas (30 Points)One of the most efficient and easy to use libraries for importing data files. We will explore the basics here.Let's import some data that represents the position of a ball being thrown off the roof of Campbell Hall. Using some basic kinematics we can derive the following equation.$$y(t) = -\frac{1}{2} g t^2 + v_{0,y} t + y_0$$For this problem we need to import our position measurements from our fellow colleagues in our research group. Problem 5 (5 Points)Your job for this problem is to simply read in the file named **"projectile.csv"** using the pandas library (DONT USE `numpy`). Print out your DataFrame so we can see what the data looks like as a table.
###Code
###YOUR CODE HERE###
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as fitter
import pandas as pd
data = pd.read_csv('projectile.csv')
data
###Output
_____no_output_____
###Markdown
Problem 6 (5 Points)Now load your DataFrame columns into numpy arrays and make a plot of Position vs. Time.
###Code
###YOUR CODE HERE###
time = data['Time[s]']
position = data['Position[m]']
plt.figure(figsize=(12,8))
plt.plot(time, position, 'ro')
plt.title('Position vs. Time')
plt.xlabel("Time [s]")
plt.ylabel("Position [m]")
plt.show()
###Output
_____no_output_____
###Markdown
Problem 7 (5 Points)In the last problem set we learned how to curve fit a quadratic equation. The above equation is also a quadratic equation with respect to time. Use what we learned last week to fit a curve to the noisy data from our fellow researchers. Explicitly print out what the initial velocity $v_{0,y}$ and initial height $y_0$ are based on your curve fit along with their respective errors.
###Code
###YOUR CODE HERE###
"""This solution is from physics 77"""
#we have to define our model with our needed parameters
def model_quad(x, a, b, c):
return a*x**2 + b*x + c
par0 = np.array([-2.5, 1.5, 100.0]) # initial guess for parameters
par, cov = fitter.curve_fit(model_quad, time, position, par0) #fitter.curve_fit takes in the model, x,y data, guess, and sigma
# par arrays contains the values of parameters. cov is the covariance matrix
# decode it now
a = par[0]
ea = np.sqrt(cov[0,0])
print('a={0:6.3f}+/-{1:5.3f}'.format(a,ea))
b = par[1]
eb = np.sqrt(cov[1,1])
print('b={0:6.3f}+/-{1:5.3f}'.format(b,eb))
c = par[2]
ec = np.sqrt(cov[2,2])
print('c={0:6.3f}+/-{1:5.3f}'.format(c,ec))
print("""\n Initial velocity in the y direction is going to be 13.298 m/s and the initial
height was 97.839 m""")
plt.figure(figsize=(12,8))
plt.plot(time, model_quad(time, a,b,c))
plt.plot(time, position, 'ro')
plt.title('Position vs. Time')
plt.xlabel("Time [s]")
plt.ylabel("Position [m]")
plt.show()
###Output
a=-5.013+/-0.235
b=13.298+/-1.167
c=97.839+/-1.211
Initial velocity in the y direction is going to be 13.298 m/s and the initial
height was 97.839 m
###Markdown
Problem 8 (5 Points)Alright now we have a model function that can fit the function as a function of time. create two lists/arrays of values using this function. One list's values should be time where we use `t = np.linspace(0,5,100)` to create the values and the other list should be your model's output after taking in all those times. (A list of the values you would normally plot)Once you have created your two lists of values, construct a pandas DataFrame using these lists. Your data frame should have two columns with 100 values each.
###Code
###Your Code Here###
t = np.linspace(0,5,100)
new_position = model_quad(t, a,b,c)
DataFrame = pd.DataFrame({'time': t, 'position': new_position})
DataFrame
###Output
_____no_output_____
###Markdown
Problem 9 (10 Points)Last part of the problem set! This is basically one line of code. Export your new DataFrame to a csv file called **"trajectory.csv"**, this will be useful for your colleagues!
###Code
###Your Code Here###
DataFrame.to_csv('trajectory.csv')
###Output
_____no_output_____
###Markdown
Object Oriented Programming (30 Points) Problem 10 (10 Points)Create a "vector" class from scratch. Look at the lecture slides for how to write one from scratch. Your vector should be able to have a normalize method, and a method for calculating the dot product as well as finding the angle between two vectors.
###Code
###Your Code Here###
import math
class Vector:
def __init__(self, x, y):
self.x = x
self.y = y
#all the properties of vectors and the tools we can use with them. We could␣
#→use numpy, but this made more intuitive sense to make our own
#class, and rewrite our own operators. It is nice to stay consistent with␣
# →the class and object style from before with the balls.
def len(self):
return math.sqrt(self.x*self.x + self.y*self.y)
def __add__(self, other):
return Vector(self.x + other.x, self.y + other.y)
def __sub__(self, other):
return Vector(self.x - other.x, self.y - other.y)
def __mul__(self, other):
return Vector(self.x * other, self.y * other)
def __rmul__(self, other):
return Vector(self.x * other, self.y * other)
def __truediv__(self, other):
return Vector(self.x / other, self.y / other)
def angle(self):
return math.atan2(self.y, self.x)
def norm(self):
if self.x == 0 and self.y == 0:
return Vector(0, 0)
return self / self.len()
def dot(self, other):
return self.x*other.x + self.y*other.y
###Output
_____no_output_____
###Markdown
Problem 11 (10 Points)Create a star class that uses vector objects as it's position and velocity traits. This star class should also have a temperature trait. Then create two star objects with initial positions Vector(0,0) and Vector(80, 30). The initial velocities can be (0,0) for both, set star1's temperature to be 6000 and star2's temperature to be 10000. Find the distance between the stars using the object traits and methods from both the star and vector classes.
###Code
###Your Code Here###
class Star:
def __init__(self, position, velocity, temp):
self.position = position
self.velocity = velocity
self.temp = temp
star1 = Star(Vector(0,0), Vector(0,0), 6000)
star2 = Star(Vector(80,30), Vector(0,0), 10000)
r1 = star1.position
r2 = star2.position
#calc distance between two stars
deltar = (r1 - r2)
deltar.len()
###Output
_____no_output_____
###Markdown
Problem 12 (10 Points)now edit your star class to have a method called `cool_down()` which changes the object's temperature the farther apart the two stars are. This cool_down method should cool with this form$$T_{new} = T_{old} e^{-\frac{|\mathbf{\Delta r}|}{R}}$$where R = 100 and $|\mathbf{\Delta r}|$ is the distance between two stars. Note that it doesn't return anything, but instead just updates the temperature value of BOTH the stars in question.
###Code
###Your Code Here###
import numpy as np
class Star:
def __init__(self, position, velocity, temp):
self.position = position
self.velocity = velocity
self.temp = temp
def cool_down(self, other_star):
R = 100
dr = (self.position - other_star.position).len()
self.temp = self.temp * np.exp(-dr/R)
other_star.temp = other_star.temp * np.exp(-dr/R)
star1 = Star(Vector(0,0), Vector(0,0), 6000)
star2 = Star(Vector(80,30), Vector(0,0), 10000)
print("Star 1 Temp Before = {0:0.3f} K and Star 2 Temp After = {1:0.3f} K ".format(star1.temp,star2.temp))
star1.cool_down(star2)
print("Star 1 Temp After = {0:0.3f} K and Star 2 Temp After = {1:0.3f} K".format(star1.temp, star2.temp))
###Output
Star 1 Temp Before = 6000.000 K and Star 2 Temp After = 10000.000 K
Star 1 Temp After = 2553.230 K and Star 2 Temp After = 4255.383 K
|
HW6/hw6_sp2019.ipynb | ###Markdown
Data-X Spring 2019: Homework 06 Name : Shun Lin SID : 26636176 Course (IEOR 135/290) : IEOR 135 Machine LearningIn this homework, you will do some exercises with prediction. We will cover these algorithms in class, but this is for you to have some hands on with these in scikit-learn. You can refer - https://github.com/ikhlaqsidhu/data-x/blob/master/05a-tools-predicition-titanic/titanic.ipynbDisplay all your outputs.
###Code
import numpy as np
import pandas as pd
# machine learning libraries
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import Perceptron
from sklearn.tree import DecisionTreeClassifier
from sklearn import linear_model
# No warnings
import warnings
warnings.filterwarnings('ignore') # Filter out warnings
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
__ 1. Read __`diabetesdata.csv`__ file into a pandas dataframe. About the data: __1. __TimesPregnant__: Number of times pregnant 2. __glucoseLevel__: Plasma glucose concentration a 2 hours in an oral glucose tolerance test 3. __BP__: Diastolic blood pressure (mm Hg) 5. __insulin__: 2-Hour serum insulin (mu U/ml) 6. __BMI__: Body mass index (weight in kg/(height in m)^2) 7. __pedigree__: Diabetes pedigree function 8. __Age__: Age (years) 9. __IsDiabetic__: 0 if not diabetic or 1 if diabetic)
###Code
#Read data & print the head
data=pd.read_csv('diabetesdata.csv')
data.head()
###Output
_____no_output_____
###Markdown
**2. Calculate the percentage of Null values in each column and display it. **
###Code
sum_of_nans = sum(len(data) - data.count())
print("There are " + str(sum_of_nans) + " Nan values in the dataframe\n")
print('Percentage of NaNs in the dataframe:\n', data.isnull().sum() / len(data))
###Output
There are 67 Nan values in the dataframe
Percentage of NaNs in the dataframe:
TimesPregnant 0.000000
glucoseLevel 0.044271
BP 0.000000
insulin 0.000000
BMI 0.000000
Pedigree 0.000000
Age 0.042969
IsDiabetic 0.000000
dtype: float64
###Markdown
**3. Split __`data`__ into __`train_df`__ and __`test_df`__ with 15% as test.**
###Code
from IPython.display import display
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(data, test_size=0.15, random_state=100)
display(train_df.head())
display(test_df.head())
###Output
_____no_output_____
###Markdown
**4. Display the means of the features in train and test sets. Replace the null values in __`train_df`__ and __`test_df`__ with the mean of EACH feature column separately for train and test. Display head of the dataframes.**
###Code
print("The means for each features in the train_df is: ")
display(train_df.mean())
print("\n")
print("The means for each features in the test_df is: ")
display(test_df.mean())
print("\n")
# replace Nan with means
train_df.fillna(train_df.mean(), inplace=True)
test_df.fillna(test_df.mean(), inplace=True)
print("train_df head")
display(train_df.head())
print("test_df head")
display(test_df.head())
###Output
The means for each features in the train_df is:
###Markdown
**5. Split __`train_df`__ & __`test_df`__ into __`X_train`__, __`Y_train`__ and __`X_test`__, __`Y_test`__. __`Y_train`__ and __`Y_test`__ should only have the column we are trying to predict, __`IsDiabetic`__.**
###Code
# get X and Y
X_train=train_df.iloc[:,:-1]
Y_train=train_df[['IsDiabetic']]
X_test=test_df.iloc[:,:-1]
Y_test=test_df[['IsDiabetic']]
print("X_train head")
display(X_train.head())
print("Y_train head")
display(Y_train.head())
print("X_test head")
display(X_test.head())
print("Y_test head")
display(Y_test.head())
###Output
X_train head
###Markdown
**6. Use this dataset to train perceptron, logistic regression and random forest models using 15% test split. Report training and test accuracies. Try different hyperparameter values for these models and see if you can improve your accuracies.**
###Code
# 6a. Logistic Regression
LogisticRegressionModel = linear_model.LogisticRegression()
print ('Training a logistic Regression Model..')
LogisticRegressionModel.fit(X_train, Y_train)
# TRAINING ACCURACY
training_accuracy=LogisticRegressionModel.score(X_train,Y_train)
print ('Training Accuracy:',training_accuracy)
# TESTING ACCURACY
testing_accuracy=LogisticRegressionModel.score(X_test,Y_test)
print ('Testing Accuracy:',testing_accuracy, '\n')
# 6b. Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
train_acc = perceptron.score(X_train, Y_train)
acc_perceptron = perceptron.score(X_test, Y_test)
print('Perceptron test accuracy:', str(round(train_acc*100,2)),'%')
print('Perceptron test accuracy:', str(round(acc_perceptron*100,2)),'%')
# 6c. Random Forest
random_forest = RandomForestClassifier(n_estimators=500)
random_forest.fit(X_train, Y_train)
acc_train = random_forest.score(X_train, Y_train)
acc_rf = random_forest.score(X_test, Y_test)
print('Random Forest training accuracy:', str(round(acc_train*100,2)),'%')
print('Random Forest testing accuracy:', str(round(acc_rf*100,2)),'%')
###Output
Random Forest training accuracy: 100.0 %
Random Forest testing accuracy: 70.69 %
###Markdown
**7. For your logistic regression model - ****a . Compute the log probability of classes in __`IsDiabetic`__ for the first 10 samples of your train set and display it. Also display the predicted class for those samples from your logistic regression model trained before. **
###Code
first_10_train = X_train.head(10)
probs = LogisticRegressionModel.predict_proba(first_10_train)
print("probability for each class for the top 10 samples")
print(probs)
print("\n")
print("log probability for each class for the top 10 samples")
log_probs = np.log(probs)
print(log_probs)
print("\n")
print("predicted class for the top 10 samples")
print(LogisticRegressionModel.predict(first_10_train))
###Output
probability for each class for the top 10 samples
[[0.21463785 0.78536215]
[0.55399577 0.44600423]
[0.8588704 0.1411296 ]
[0.65789977 0.34210023]
[0.86717097 0.13282903]
[0.27150999 0.72849001]
[0.77923657 0.22076343]
[0.63294397 0.36705603]
[0.63602604 0.36397396]
[0.91826023 0.08173977]]
log probability for each class for the top 10 samples
[[-1.5388031 -0.24161033]
[-0.59059823 -0.80742684]
[-0.15213725 -1.95807663]
[-0.41870268 -1.07265152]
[-0.14251913 -2.01869245]
[-1.30375635 -0.31678136]
[-0.2494406 -1.51066359]
[-0.45737337 -1.00224077]
[-0.45251578 -1.01067294]
[-0.08527446 -2.50421459]]
predicted class for the top 10 samples
[1 0 0 0 0 1 0 0 0 0]
###Markdown
**b . Now compute the log probability of classes in __`IsDiabetic`__ for the first 10 samples of your test set and display it. Also display the predicted class for those samples from your logistic regression model trained before. (using the model trained on the training set)**
###Code
first_10_test = X_test.head(10)
probs = LogisticRegressionModel.predict_proba(first_10_test)
print("probability for each class for the top 10 samples")
print(probs)
print("\n")
print("log probability for each class for the top 10 samples")
log_probs = np.log(probs)
print(log_probs)
print("\n")
print("predicted class for the top 10 samples")
print(LogisticRegressionModel.predict(first_10_test))
###Output
probability for each class for the top 10 samples
[[0.78857689 0.21142311]
[0.88984229 0.11015771]
[0.42855734 0.57144266]
[0.51234994 0.48765006]
[0.51627081 0.48372919]
[0.34209928 0.65790072]
[0.0622243 0.9377757 ]
[0.81699332 0.18300668]
[0.19503948 0.80496052]
[0.7219596 0.2780404 ]]
log probability for each class for the top 10 samples
[[-0.23752536 -1.55389391]
[-0.11671103 -2.20584226]
[-0.84733072 -0.55959114]
[-0.66874742 -0.71815721]
[-0.66112383 -0.72623005]
[-1.0726543 -0.41870124]
[-2.77700966 -0.06424449]
[-0.20212436 -1.69823264]
[-1.63455325 -0.21696205]
[-0.3257861 -1.27998885]]
predicted class for the top 10 samples
[0 0 1 0 0 1 1 0 1 0]
###Markdown
**c . What can you interpret from the log probabilities and the predicted classes?** The log probabilites is defined as $$ log(prob) $$And we know that probability lies in the range of 0 and 1, therefore the log probabiliy is less than or equal to 0. We can interpret from the log probabilities how confident our logistic model is for predicting the class of a sample. The less negative (higher) the log probabiliy is, the higher the probability our logistic model predicts for the given class. And the more negative (lower) the log probailiy is, the lower the probabiliy our logistic model predicts for the given class. Thus, given the log probabiliy, the logistic regression model will make prediction for the one that has higher log probability. Thus the predicted class for a given sample in a logistic regression model is the one that has the higher log probabiliites. Given the log probabiliy, the logistic regression model will make prediction for the one that has higher log probability. Thus the predicted class for a given sample in a logistic regression model is the one that has the higher log probabiliites. **8. Is mean imputation is the best type of imputation (as we did in 4.) to use? Why or why not? What are some other ways to impute the data?** Mean imputation may not be the best type of imputation to use, the best imputation for a column of data depends on the type of data. For glucoseLevel instead of using the mean from the data we can use what is the normal glucose level to fill in the NAN values in glucoseLevel column. For age we can use the median instead of average to fight against out-liers. For other data sometimes it is better to use 0 instead of mean/average because sometimes the absence of measurement may implies 0. Other ways to impute the data includes Substitution, hot/cold deck imputation, Regression imputation, Stochastic regression imputation, Interpolation and extrapolation.Reference:https://www.theanalysisfactor.com/seven-ways-to-make-up-data-common-methods-to-imputing-missing-data/ Extra Credit (2 pts) - MANDATORY for students enrolled in IEOR 290 **9. Implement the K-Nearest Neighbours (https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) algorithm for k=1 from scratch in python (do not use KNN from existing libraries). KNN uses Euclidean distance to find nearest neighbors. Split your dataset into test and train as before. Also fill in the null values with mean of features as done earlier. Use this algorithm to predict values for 'IsDiabetic' for your test set. Display your accuracy. **
###Code
import copy
# get X_train, Y_train, X_test, Y_test
X_train=train_df.iloc[:,:-1]
Y_train=train_df[['IsDiabetic']]
X_test=test_df.iloc[:,:-1]
Y_test=test_df[['IsDiabetic']]
# helper functions
def init_knn(X_train, Y_train, X_test):
m = len(X_test)
n = len(X_train)
# neighbors are a list of votes
# neightors[i] = [1 0 0 1 ....] where each element is the result from a point in the X_training set
neighbors = [0] * m
# Building lables so we only have to build it once
for i in range(m):
labels = [0] * n
distances = [0] * n
sample = X_test.iloc[i]
for j in range(n):
training_point = X_train.iloc[j]
training_label = Y_train.iloc[j]["IsDiabetic"]
distance = np.linalg.norm(sample-training_point)
distances[j] = distance
labels[j] = training_label
sorted_neighbors = [label for _,label in sorted(zip(distances,labels))]
neighbors[i] = copy.deepcopy(sorted_neighbors)
return neighbors
def predict_knn(neighbors, k):
m = len(neighbors)
result = [0] * m
for i in range(m):
lst = neighbors[i][0:k]
result[i] = max(lst,key=lst.count)
return result
def accuracy(Y_test, result):
total_count = len(result)
same_prediction = 0
for i in range(len(result)):
same_prediction += abs(Y_test["IsDiabetic"].iloc[i] - result[i])
return same_prediction / total_count
# initialize k_nn
neighbors = init_knn(X_train, Y_train, X_test)
# predict
k = 3
result = predict_knn(neighbors, k)
# get accuracy
acc = accuracy(Y_test, result)
print(acc)
max_k = 100
x = [i for i in range(1, max_k)]
y = [0] * len(x)
max_x = 0
max_y = 0
for i in range(len(x)):
k = x[i]
result = predict_knn(neighbors, k)
acc = accuracy(Y_test, result)
y[i] = acc
if acc > max_y:
max_x = k
max_y = acc
plt.plot(x, y)
plt.ylabel('test accuracy')
plt.xlabel('k')
plt.title('python implementation of K-NN')
plt.show()
print("max accuracy is " + str(max_y) + " at k = " + str(max_x))
print("accuracy for k = 1 is " + str(y[0]))
###Output
_____no_output_____ |
scipyExercise/integrate functions.ipynb | ###Markdown
integrate functions
###Code
from scipy import integrate
import numpy as np
import matplotlib.pyplot as plt
import sympy as sy
sy.init_printing()
a,b=-1,1
x=np.linspace(a,b,17)
y=np.exp(-x)
val_trapz=integrate.trapz(y,x)
val_trapz
val_simps=integrate.simps(y,x)
val_simps
val_true=-np.exp(-b)+np.exp(-a)
print(val_true-val_trapz)
print(val_true-val_simps)
x=np.linspace(a,b,1+2**4)
y=np.exp(-x)
dx=x[1]-x[0]
val_romb=integrate.romb(y,dx=dx)
val_romb
print(val_true-val_romb)
f=lambda x:np.exp(-x**2) * (x**12-x**5)
val,err=integrate.quad(f,0,np.inf)
val
err
###Output
_____no_output_____
###Markdown
多重積分$$\int_{x=a}^b \int_{y=g_1(x)}^{g_2(x)} f(x,y) dxdy$$where $g_1=-1$ and $g_2=0$
###Code
def f(x,y):
return 4-x**2-y**2
a,b=0,1
g1,g2=lambda x:-1,lambda x:0
integrate.dblquad(f,a,b,g1,g2)
val,err=integrate.dblquad(f,0,1,lambda x:x-1,lambda x:1-x)
val,err
###Output
_____no_output_____
###Markdown
$$\int_{a}^{b} \int_{y=g_1(x)}^{g_2(x)} \int_{z=h_1(x,y)}^{h_2(x,y)} f(x,y,z)dxdydz$$
###Code
ff=lambda x,y,z:(x+y+z)**2
a,b=-1,1
g1,g2=lambda x:-1,lambda x:1
h1,h2=lambda x,y:-1,lambda x,y:1
var,err=integrate.tplquad(ff,a,b,g1,g2,h1,h2)
val
val,err=integrate.nquad(f,[(-1,1),(-1,1),(-1,1)])
val
###Output
_____no_output_____ |
KC_RecSys/project/notebook/Petdata_generator.ipynb | ###Markdown
Generate Pet Dataset Imports
###Code
import numpy as np
import pandas as pd
from faker import Faker
import os
###Output
_____no_output_____
###Markdown
Pet datasetAssumes...* Discrete uniform distribution of ratings per user* Each user rated more than 1/3 of documents Presets
###Code
fake = Faker()
fake.seed(23)
np.random.seed(23)
num_users = 100
num_docs = 1000
###Output
_____no_output_____
###Markdown
Generate fake Ratings
###Code
def generate_ratings(num_users, num_docs, p_na_min, p_na_max, kind="5Star"):
""" Generate random user ratings
:param num_users: number of users to generate
:param num_docs: number of documents to rate
:param p_na_min: min percentage of NaN per user
:param p_na_max: max percentage of NaN per user
:param kind: kind of rating scheme, either "5Star" (1 Star ... 5 Stars) or "binary" (like/dislike)
:return user_ratings: dictionary of users with list of ratings
"""
doc_uris = []
user_ratings = {}
# Generate fake URIs
for _ in range(num_docs):
doc_uris.append(fake.uri())
user_ratings["doc_uri"] = doc_uris
# Generate ratings
for _ in range(num_users):
if kind == "5Star":
# TBD
ratings = np.random.randint(0, 6, size=num_docs).tolist() # discr uniform ratings
num_na = np.random.randint(int(num_docs * p_na_min), int(num_docs * p_na_max) + 1)
random_ixs = np.random.choice(range(num_docs), size=num_na, replace=False) # mask
for i in random_ixs:
ratings[i] = np.NaN
elif kind == "binary":
ratings = np.random.choice([1, -1], num_docs, p=[.8, .2]).tolist()
# TBD
num_na = np.random.randint(int(num_docs * p_na_min), int(num_docs * p_na_max) + 1)
random_ixs = np.random.choice(range(num_docs), size=num_na, replace=False) # mask
for i in random_ixs:
ratings[i] = np.NaN
else:
NotImplementedError
# Generate fake user
user_ratings[fake.name()] = ratings
return user_ratings
user_ratings = generate_ratings(num_users, num_docs, 0.7, 0.95, "binary")
###Output
_____no_output_____
###Markdown
Dataframe
###Code
df = pd.DataFrame.from_dict(user_ratings).set_index("doc_uri")
df.head()
###Output
_____no_output_____
###Markdown
Persist dataset
###Code
#f_name = "petdata_1000_100.csv" # 5 Star ratings
f_name = "petdata_binary_1000_100.csv" # binary ratings
path = os.path.join("../data", f_name)
df.to_csv(path)
###Output
_____no_output_____ |
Data Structure and Algorithms/Section 4.ipynb | ###Markdown
In the Towers of Hanoi puzzle, we are given a platform with three pegs, a,b, and c, sticking out of it. On peg a is a stack of n disks, each larger than the next, so that the smallest is on the top and the largest is on the bottom. The puzzle is to move all the disks from peg a to peg c, moving one disk at a time, so that we never place a larger disk on top of a smaller one.See Figure 4.15 for an example of the case n = 4. Describe a recursivealgorithm for solving the Towers of Hanoi puzzle for arbitrary n. (Hint:Consider first the subproblem of moving all but the nth disk from peg a toanother peg using the third as “temporary storage.”)
###Code
def move_disk(from_peg, to_peg):
to_peg.append(from_peg.pop())
print("=============[Status]=============")
print("[a]: ", a)
print("[b]: ", b)
print("[c]: ", c)
def hanoi(n, from_peg, help_peg, to_peg):
if n == 1:
move_disk(from_peg, to_peg)
else:
hanoi(n-1, from_peg, to_peg, help_peg)
move_disk(from_peg, to_peg)
hanoi(n-1, help_peg, from_peg, to_peg)
n = 5
a = list(reversed(range(1,int(n)+1)))
b = [];
c = [];
print("[a]: ", a)
print("[b]: ", b)
print("[c]: ", c)
hanoi(5, a, b, c)
###Output
[a]: [5, 4, 3, 2, 1]
[b]: []
[c]: []
=============[Status]=============
[a]: [5, 4, 3, 2]
[b]: []
[c]: [1]
=============[Status]=============
[a]: [5, 4, 3]
[b]: [2]
[c]: [1]
=============[Status]=============
[a]: [5, 4, 3]
[b]: [2, 1]
[c]: []
=============[Status]=============
[a]: [5, 4]
[b]: [2, 1]
[c]: [3]
=============[Status]=============
[a]: [5, 4, 1]
[b]: [2]
[c]: [3]
=============[Status]=============
[a]: [5, 4, 1]
[b]: []
[c]: [3, 2]
=============[Status]=============
[a]: [5, 4]
[b]: []
[c]: [3, 2, 1]
=============[Status]=============
[a]: [5]
[b]: [4]
[c]: [3, 2, 1]
=============[Status]=============
[a]: [5]
[b]: [4, 1]
[c]: [3, 2]
=============[Status]=============
[a]: [5, 2]
[b]: [4, 1]
[c]: [3]
=============[Status]=============
[a]: [5, 2, 1]
[b]: [4]
[c]: [3]
=============[Status]=============
[a]: [5, 2, 1]
[b]: [4, 3]
[c]: []
=============[Status]=============
[a]: [5, 2]
[b]: [4, 3]
[c]: [1]
=============[Status]=============
[a]: [5]
[b]: [4, 3, 2]
[c]: [1]
=============[Status]=============
[a]: [5]
[b]: [4, 3, 2, 1]
[c]: []
=============[Status]=============
[a]: []
[b]: [4, 3, 2, 1]
[c]: [5]
=============[Status]=============
[a]: [1]
[b]: [4, 3, 2]
[c]: [5]
=============[Status]=============
[a]: [1]
[b]: [4, 3]
[c]: [5, 2]
=============[Status]=============
[a]: []
[b]: [4, 3]
[c]: [5, 2, 1]
=============[Status]=============
[a]: [3]
[b]: [4]
[c]: [5, 2, 1]
=============[Status]=============
[a]: [3]
[b]: [4, 1]
[c]: [5, 2]
=============[Status]=============
[a]: [3, 2]
[b]: [4, 1]
[c]: [5]
=============[Status]=============
[a]: [3, 2, 1]
[b]: [4]
[c]: [5]
=============[Status]=============
[a]: [3, 2, 1]
[b]: []
[c]: [5, 4]
=============[Status]=============
[a]: [3, 2]
[b]: []
[c]: [5, 4, 1]
=============[Status]=============
[a]: [3]
[b]: [2]
[c]: [5, 4, 1]
=============[Status]=============
[a]: [3]
[b]: [2, 1]
[c]: [5, 4]
=============[Status]=============
[a]: []
[b]: [2, 1]
[c]: [5, 4, 3]
=============[Status]=============
[a]: [1]
[b]: [2]
[c]: [5, 4, 3]
=============[Status]=============
[a]: [1]
[b]: []
[c]: [5, 4, 3, 2]
=============[Status]=============
[a]: []
[b]: []
[c]: [5, 4, 3, 2, 1]
|
Python-Basics.ipynb | ###Markdown
1) Declaring Variable, List, Dictionaries
###Code
# You don't have to declare datatype for variable, it's not necessary, you can simple write
x = 5 #this will be considered as int
x_str = 'Hi, this is Tanay from Let Machine Talk!' #you can declare string directly, single
#or double quotes, both are fine, we will dive into that later
#to print
print(x)
type(x)
#List is declared with two open brackets.
y = []
type(y)
#if I have to fill something
y = [1,2,3,4,5]
print(y)
#index in python starts from 0, so if you want to access the element of a list,
#it's done through open brackets and then the index number
y[0]
#select from and to index in list
#y[from:to], to won't count, so if I want to select 2,3 from y
y[1:3]
#finding length of a list
len(y)
#adding the numbers in the list
sum(y)
#adding selected numbers
sum(y[1:4])
# accessing the last element of the list
y[-1]
#accessing the last "x" elements
y[-3:-1]
#you can make a list of string as well
y_str = ['Hi', 'this', 'is', 'tanay',',','and','you','are','watching','let machine talk']
#to join all these strings together, you can put anything into the double quotes
" ".join(y_str)
"%".join(y_str)
#if you want to access the characters of strings in list, access the particular element and then access the index
#example if i want to access machine in let machine talk
y_str[-1]
y_str[-1][4:11] #4, 11 are the index positions of m and till e
#if I only want to access 'let'
y_str[-1][0:3]
#if you want to access first few or last few letter, you don't have to specify index as well,
#suppose I want to select the whole string after "m", I don't have to specify the ending index
y_str[-1][4:]
y.sort()
y.reverse()
y
y = [3,4,3,2,1,1,3]
#counting how many times a number is repeated
y.count(3)
# removing one 3 at a time
y.remove(3)
y
y.remove(3)
y
y.remove(3)
y
#if I want to know the index of a particular element
y.index(1)
#you are going to use this command many times, this is to add an element, before using element, remember that
#you have to declare it as a list other it will throw an error
y.append(1)
y.append(3)
y
#Dictionary
#1 Declaration with curly brackets, it has a key-value pair
z = {}
#to add to the dictionary, you can simply type the key-value pair
z['key'] = 'value'
z
z[1] = 2
#it automatically adds up
z
#Now if I want to access value, I will access it like I accessed it in list except, here the indices are the keys
#itself, so you cannot use traditional indexing, index of the value is the key of the value
z['key']
z[1]
z.items()
#if you want to pop items out
z.pop('key')
z
z['key'] = 'value'
#returning only all keys
z.keys()
#returning all values
z.values()
#if you want to add new dictionary or append to dictionaries
z_new = {2:3}
z.update(z_new)
z
###Output
_____no_output_____
###Markdown
2 ) Using if, else statement
###Code
x = 10
if x%2==0:
print(x)
else:
print('not divisible')
###Output
10
###Markdown
3 ) For, while loop
###Code
# FOR loop is actually quite simple
#suppose, I want to print all the elements of y
for numbers in y:
print(numbers)
#inplace of numbers, anything can be placed, it acts as a temporary variable, you cannot access all the numbers of list
numbers
for _ in y:
print(_)
#while loop
i = 0
while i < len(y):
print(y[i])
i+=1
#two way to iterate through list
# one-direct
for x in y:
print(x)
#second-through index
for x in range(len(y)):
print(y[x])
#range command
#if you want to get a series of numbers with patters between two number, you can use range command, it is used
#very often
range(3,len(y))
range(len(y))
print(list(range(0,len(y))))
#if I want bigger gaps
print(list(range(0, 20, 4)))
#the end 4 specifies the difference between each number
#iterating through dictionary
#print all the keys
for _ in z.keys():
print(_)
#printing all the values
for _ in z.values():
print(_)
for x, _ in zip(z.keys(),z.values()):
print(x,y)
#using enumerate
for c, _ in enumerate(y_str,1):
print(c, _)
new_list = list(enumerate(y_str, 1))
new_list
# same division program using for loop
#we will learn to do the same thing in one line of code
divisible_by_2 = []
for x in [1,2,3,4,5,6,7]:
if x%2==0:
divisible_by_2.append(x)
else:
pass
divisible_by_2
###Output
_____no_output_____
###Markdown
4) Using Comprehensions
###Code
comp = [x for x in range(0, 10)]
comp
comp_1 = [x for x in y if x > 2]
comp_1
list_of_sq = [_**2 for _ in range(0,10)]
list_of_sq
y_str
[_.upper() for _ in y_str]
###Output
_____no_output_____
###Markdown
5) Working with strings
###Code
sentence = "Hi, this is Tanay and you are watching Let Machine Talk.\n"
sentence.upper()
sentence.lower()
sentence.count('this')
sentence.strip()
sentence.split(',')
sentence.split(',')[1].split(' ')
sentence.replace(',','HI HOW ARE YOU')
sentence.find('Tanay')
sentence[12:]
sentence.islower()
sentence.isupper()
###Output
_____no_output_____
###Markdown
6) What the hell are functions?
###Code
4**2
def square(x):
return x**2
square(4)
def print_list(lis):
print(lis)
x_1 = [1,2,3]
print_list(x_1)
###Output
[1, 2, 3]
###Markdown
7) What the hell is a map function?
###Code
#if you want to applya function to a set of values, then map function is very useful
print(list(map(square, [1,4,5,3,2])))
###Output
[1, 16, 25, 9, 4]
###Markdown
8) What the hell is lambda?
###Code
#lambda is very very useful, we are going to use lambda and functions all the time.
#in above example,
#instead of defining function before hand, using lambda, you can design the operation then and there
list(map(lambda x: x**2,[1,4,5,3,2]))
###Output
_____no_output_____
###Markdown
9) What is filter?
###Code
#if you want to filter elements according to certain criterion, instead of using for, if, else statement,
#we can do it in one line
list(filter(lambda x:x%3==0, [1,3,6,9,20,12,6,7,8,2,13]))
###Output
_____no_output_____
###Markdown
swapping
###Code
a = 2
b= 3
a,b=b,a
print(a,b)
###Output
3 2
###Markdown
10) Zip Command
###Code
#with zip command, you can iterate over multiple lists simultaneously
x = [1,2,3,4,5]
y=[6,7,8,9,10]
for _, f in zip(x,y):
print(_,f)
###Output
1 6
2 7
3 8
4 9
5 10
###Markdown
Basics
###Code
none
6*7
6+7
print('olá mundo')
data = [1, 2, 3]
###Output
_____no_output_____
###Markdown
criando funções e chamando funções
###Code
def append_element(some_list, element):
some_list.append(element)
append_element(data, 4)
data
a = 'one way of writing a string'
b = "another way"
print(a)
print(b)
a = 'this is a string'
b = a.replace('string', 'longer string')
b
s = 'python'
list(s)
['p', 'y', 't', 'h', 'o', 'n']
s[:3]
empty_dict = {}
d1 = {'a' : 'some value', 'b' : [1, 2, 3, 4]}
d1
###Output
_____no_output_____
###Markdown
You can access, insert, or set elements using the same syntax as for accessing elementsof a list or tuple
###Code
d1[7] = 'an integer'
d1
d1['b']
list(d1.keys())
words = ['apple', 'bat', 'bar', 'atom', 'book']
by_letter = {}
for word in words:
letter = word[0]
if letter not in by_letter:
by_letter[letter] = [word]
else:
by_letter[letter].append(word)
by_letter
###Output
_____no_output_____
###Markdown
Python BasicsDas sind meine persönlichen Notizen und Besispiele für grundsätzliche Python-Konstrukte.
###Code
print("hello")
###Output
hello
###Markdown
Variablen
###Code
chicken_count = 3
price_per_chicken = 5.20
chicken_total_value = chicken_count * price_per_chicken
chicken_total_value
from decimal import Decimal
exact_price_per_chicken = Decimal("5.20")
exact_chicken_total_value = chicken_count * exact_price_per_chicken
exact_chicken_total_value
Decimal("1") / Decimal("3")
###Output
_____no_output_____
###Markdown
###Code
1 / 3
name = "Hugo"
greeting = "Hallo " + name
greeting
f"Hallo {name}, deine Händel sind €{exact_chicken_total_value} wert."
text = "Das ist ein langer Text mit vielen Wörtern."
text
text.split()
text.split("e")
text[0]
text[:3]
text[4:7]
text[19:]
text[-8:]
text[35:]
text[-1]
text.startswith("Das")
text.endswith("Hallo")
###Output
_____no_output_____
###Markdown
Funktionen
###Code
def add(a, b):
return a + b
add(1, 2)
add(1.2, 3.4)
add("Hallo ", "Hugo")
try:
add("Hallo", 3)
except TypeError as error:
print(error)
add("Hallo", str(3))
add(1, 2.3)
def print_person(name, height, favorite_color):
print(f"{name} ist {height}cm groß und hat die Lieblingfarbe {favorite_color}")
print_person("Hugo", 170, "blau")
def print_person2(name, height=173, favorite_color="rot"):
print(f"{name} ist {height}cm groß und hat die Lieblingfarbe {favorite_color}")
print_person2("Hugo", 170, "blau")
print_person2("Susi", 182)
print_person2("Rosalinde")
try:
print_person2()
except TypeError as error:
print(error)
print_person2("Sepp", "grün")
print_person2("Sepp", favorite_color="grün")
def greeting_and_farewell(name):
return f"Hallo {name}!", f"Auf Wiedersehen, {name}!"
greeting_and_farewell("Rosalinde")
greeting, farewell = greeting_and_farewell("Hugo")
print(greeting)
print(farewell)
###Output
Hallo Hugo!
Auf Wiedersehen, Hugo!
###Markdown
Listen und Schleifen
###Code
friends = ["Hugo", "Sepp", "Susi", "Rosalinde"]
def greetings_for_starting_with_s(friends):
result = []
for friend in friends:
if friend.startswith("S"):
result.append(f"Hallo {friend}!")
return result
greetings_for_starting_with_s(friends)
# ["Hallo Sepp!", "Hallo Susi!"]
sorted(friends)
friends
friends.sort()
friends
###Output
_____no_output_____
###Markdown
List-Comprehension
###Code
[f"Hello {friend}!" for friend in friends]
[f"Hello {friend}!" for friend in friends if friend.startswith("S")]
for friend in friends:
print(friend)
###Output
Hugo
Rosalinde
Sepp
Susi
###Markdown
Schleife über Zahlenbereich
###Code
for number in range(5):
print(number)
start = 5
end = 9
for number in range(start, end + 1):
print(number)
###Output
5
6
7
8
9
###Markdown
Generatoren
###Code
def multiples_of_2(count):
result = []
for number in range(1, count + 1):
result.append(number * 2)
return result
multiples_of_2(5)
for number in multiples_of_2(7):
print(f"Die nächste Zahl ist {number}")
def multiples_of_2_generator(count):
for number in range(1, count + 1):
yield number * 2 # yield = return next
for number in multiples_of_2_generator(7):
print(f"Die nächste Zahl ist {number}")
list(multiples_of_2_generator(7))
###Output
_____no_output_____
###Markdown
Klassen
###Code
class Duck:
# TODO: name, fluffiness=0.0...1.0, color, max_noise_in_db
# constructor -> __init__ mit Parametern für die Attribute
# swim -> reduce flufiness, >= 0
# dry -> incr. fluffy
# quack -> current_noise_in_db
# shut_up -> noise=0
def __init__(self,name, fluffiness,color,max_nois_in_db):
self._name = name
self._fluffiness = fluffiness
self._max_nois_in_db = max_nois_in_db
def swim(self):
self._fluffiness -= 0.3
if self._fluffiness < 0:
self._fluffiness = 0
def dry(self):
self._fluffiness += 0.3
def __str__(self):
return f"{self._name}, fluffiness={self._fluffiness:.2f}" # TODO; Add other attributes.
donald = Duck("Donald", fluffiness = 0.8, color="white",max_nois_in_db = 95)
print(donald)
donald.swim()
print(donald)
donald.swim()
donald.swim()
print(donald)
0b1001
0xff
###Output
_____no_output_____
###Markdown
Math
###Code
import math
math.sqrt(2)
math.sqrt(2)
from math import sqrt
sqrt(2)
###Output
_____no_output_____
###Markdown
Dictionary
###Code
name_to_phone_map = {
"Hugo": "0650/123456",
"Rosalinde": "0664/234567",
"Donald": "0699/3456789"
}
name_to_phone_map
name_to_phone_map.keys()
name_to_phone_map.values()
name_to_phone_map["Hugo"]
try:
name_to_phone_map["Susi"]
except KeyError:
print("cannot find 'Susi'")
name_to_phone_map.get("Susi")
name_to_phone_map.get("Susi", "keine Telefonnummer")
name_to_phone_map.get("Hugo", "keine Telefonnummer")
name_to_phone_map["Daisy"] = "0664/9876544"
name_to_phone_map
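# Dictionary keys are located via their hash value, which is why keys must be hashable: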
hash("Hugo")
for name, phone in name_to_phone_map.items():
print(f"{name} hat die Telefonnummer {phone}")
for name, phone in sorted(name_to_phone_map.items()):
print(f"{name} hat die Telefonnummer {phone}")
{name: phone for name, phone in name_to_phone_map.items() if name.startswith("D")}
###Output
_____no_output_____
###Markdown
Sets
###Code
animals = {"duck", "donkey", "penguin", "lion", "bee", "duck"}
animals
animal_list = ["duck", "penguin", "lion", "bee", "duck"]
animal_list
"bee" in animals
"bee" in animal_list
print(sorted(animals))
set(animal_list)
list(animals)
{animal for animal in animals if animal.startswith("d")}
###Output
_____no_output_____
###Markdown
Tuples
###Code
person = ("Hugo", 173, "grün")
person
name, size, favorite_color = person
size
try:
person[0] = "Susi"
except TypeError as error:
print(error)
name = ("Hugo")
name
name = ("Hu" + "go")
name
# Tuple with a single element.
tuple_with_name = ("Hugo",)
tuple_with_name
# Empty tuple
empty_tuple = ()
empty_tuple
person_list = list(person)
person_list
person_list[0] = "Susi"
person_list
other_person = tuple(person_list)
other_person
###Output
_____no_output_____
###Markdown
Regular Expressions
###Code
a_1_de = "A 01"
a_1_en = "A1"
a_1_1_en = "A1.1"
import re
re.match("aaa", "aaa")
re.match("aaa", "...aaa...")
re.match("aaa", "...aaa")
re.match("aaa", "aaa...")
re.match("a", "123abc")
re.match(".*a.*", "123abc") # * = beliebig viele oder keines
re.match(" *a.*", " a123")
re.match(" *a.*", "a123")
re.match(" +a.*", "a123") # + = beliebig viele aber mindestens 1
re.match(r"[A-Z]\d", "A3") # \d = Ziffer (digit)
re.match(r"\w\d", "Ä3") # \w = Buchstabe (word character)
re.match(r"[A-Z]\d", "A3.x3")
re.match(r"^[A-Z]\d$", "A3.x3") # ^ = Anfang der Zeichenkette, $ = Ende der Zeichenkette
re.match(r"^[A-Z] *\d+$", a_1_de)
re.match(r"^[A-Z] *\d+$", a_1_en)
re.match(r"^[A-Z] *\d+$", a_1_1_en)
nace_match = re.match(r"^([A-Z]) *(\d+)$", a_1_de)
nace_match.groups()
nace_match.group(1)
nace_match.group(2)
nace_match = re.match(r"^(?P<section>[A-Z]) *(?P<division>\d+)$", a_1_de)
section = nace_match.group("section")
print(section)
division = nace_match.group("division")
print(division)
division_number = int(division)
###Output
_____no_output_____
###Markdown
Python Basics Notebook Introduction This notebook is intended to teach some basics of the Python 3 language. It is by no means comprehensive or the only resource one should refer to when learning Python. There are countless great Python introductions available, including:* [W3 Schools Python Tutorial](https://www.w3schools.com/python/default.asp)* [Real Python Tutorials](https://realpython.com/)* [Related Python questions of the Stack Overflow community](https://stackoverflow.com/questions/tagged/python)All examples in this notebook are embedded as code and can be executed directly. And don't miss the "What to do next" section at the end of this notebook. Anaconda and Jupyter Labs A starting point for programming with Python is the [Anaconda](https://www.anaconda.com/) package, which focuses on data science and comes bundled with some of the most popular Python libraries. It also contains *Jupyter Lab*, which can be used to create and execute *Jupyter notebooks* like this. Hello World! Before we have a more systematic look into Python concepts, let's start with a classic "Hello world" example. First Version Our first program is to output the text "Hello World". For this we need the __function__ *print()*, which expects a __string__ as argument. To define a string, we have to surround the character string with quotation marks. Both single and double quotation marks can be used.
###Code
print('Hello World!')
###Output
Hello World!
###Markdown
Hello {name}! Now we would like to first ask for a name and then output "Hello {name}!".
###Code
name = input()
print(f'Hello {name}!')
###Output
Oswald
###Markdown
The function *input()* enables the direct input of a value. We save this value as the __variable__ *name*. We output this in *print()* with a so-called *formatted string*. If we put an *f* in front of the string, a variable can be specified in curly brackets {}, which is then inserted into the string. Hello Users!Now we would like to welcome not only one user, but several users. We specify these as a list of strings.
###Code
nameList = ['Quinn','Charlie','Jessie']
for name in nameList:
print(f'Hello {name}!')
###Output
Hello Quinn!
Hello Charlie!
Hello Jessie!
###Markdown
We have now used a loop to output the greeting for each name. We have previously given the names as a __list__. Hello World - advanced versionFinally, input and output are now to be outsourced to two functions. Users should enter a list of names separated by commas.
###Code
def getUsers() -> list:
print ("Please enter a list of names separated by comma")
s = input()
return s.split(',')
def helloUsers(nameList:list):
for name in nameList:
print(f'Hello {name}!')
inputs= getUsers()
helloUsers(nameList=inputs)
###Output
Please enter a list of names separated by comma
###Markdown
Basic Concepts Variables__Variables__ can store any kind of information.
###Code
myVariable= 'This is my variable'
###Output
_____no_output_____
###Markdown
Scope of variablesVariables declared in functions are only available inside the function. Other variables are global. The second code example produces an error, because we defined *word* inside the function *printHello*.
###Code
def printHello():
word= 'hello'
print(word)
printHello()
try:
print(word)
except NameError:
print('Error')
###Output
Error
###Markdown
Indentation by white spacePython code needs to be indented properly. I.e. Blocks of code in a loop or other structures need to be indented by whitespace.
###Code
for name in ['Quinn','Charlie','Jessie']:
if name == 'Jessie':
print("It's Jessie")
else:
print("It's not Jessie")
###Output
It's not Jessie
It's not Jessie
It's Jessie
###Markdown
FunctionsFunctions can be used to structure and reorganize code. They can have an input and an output.
###Code
def sumOfAList(inputList:list) -> int:
sum= 0
for number in inputList:
sum = sum + number
return sum
numbers= [29,43,523,86,23,76]
print(sumOfAList(numbers))
###Output
780
###Markdown
Modules and installation with pipCode can be organized and shared as modules. We will skip the creation of modules for now, but there is a great and easy to use repository for Python modules called [*pypi* - Python Package Index](https://pypi.org/). Packages provided over pypi can be installed with *pip* in the console. For example, you can install the pandas module to process tables:```bash $pip install pandas``` Data TypesPython knows a number of data types, some of which we have already learned about. StringsA string is a sequence of characters for which a number of methods are provided in Python.
###Code
'this is a string'
"this is another string"
'''
Use multiple quotation marks
for multi line string
'''
"""
functions
with
double
quotation
marks
as
well
"""
###Output
_____no_output_____
###Markdown
Formatted StringA formatted string may include variables, which will be inserted into the string.
###Code
variable= 'formatted'
print(f'This is a {variable} String.')
###Output
This is a formatted String.
###Markdown
Raw StringWith a regular string, certain characters are processed in the output. For example, "\n" can create a new line. A raw string, on the other hand, is processed as it is specified.
###Code
s = "Includes\nLinebreaks\n"
sr = r"Includes\nLinebreak\n"
print(s)
print(sr)
###Output
Includes
Linebreaks
Includes\nLinebreak\n
###Markdown
NumbersPython includes basic numeric formats. __Integer__ is a whole positive or negative number, __Float__ can in addition also contain decimals. __Complex__ can include an imaginary part. Numbers need no quotation marks.
###Code
i = 1234
f = 1234.5678
c = 1234.5678j
print(f'{i} is a {type(i)}')
print(f'{f} is a {type(f)}')
print(f'{c} is a {type(c)}')
###Output
1234 is a <class 'int'>
1234.5678 is a <class 'float'>
1234.5678j is a <class 'complex'>
###Markdown
The function *type()* used here returns the type of the variable. BooleanA boolean is a data type which can be *True* or *False*.
###Code
a= True
b= False
print(a)
print(b)
###Output
True
False
###Markdown
CollectionsCollections are data types which contain multiple values. ListA list or array is the most basic collection. It is a list of any values, including other collections.
###Code
l = ['apple', 'banana', 'prune','apple']
print(l)
###Output
['apple', 'banana', 'prune', 'apple']
###Markdown
SetA set can only contain unique values.
###Code
s = {'apple', 'banana', 'prune'}
print(s)
###Output
{'banana', 'apple', 'prune'}
###Markdown
As an additional example, we will transform the previous list into a set and print it.
###Code
s= set(l)
print(s)
###Output
{'banana', 'apple', 'prune'}
###Markdown
TupleA tuple is a type of list which cannot be changed after its creation.
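Trying to change a tuple raises a TypeError; a minimal sketch:
```python
t = ('apple', 'banana', 'prune')
try:
    t[0] = 'kiwi'  # tuples are immutable, so item assignment fails
except TypeError as error:
    print(error)
```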
###Code
t= ('apple', 'banana', 'prune')
print (t)
###Output
('apple', 'banana', 'prune')
###Markdown
DictionaryA dictionary is a collection of key-value-pairs. The keys in a dictionary must be unique.
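Because keys are unique, assigning to an existing key simply overwrites its value; a minimal sketch:
```python
d = {'fruit': 'banana'}
d['fruit'] = 'apple'  # re-assigning an existing key replaces the old value
print(d)              # {'fruit': 'apple'}
```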
###Code
d= {
'fruit':'banana',
'color':'yellow',
'taste':'sweet'
}
print(d)
###Output
{'fruit': 'banana', 'color': 'yellow', 'taste': 'sweet'}
###Markdown
UsageCollections allow complex structures, because they can be combined freely. So a list of dictionaries and other combinations are possible.
###Code
apple= {
'fruit':'apple',
'color':'red',
'taste':'sweet',
'cultivars':{'Edelborsdorfer','Codlin','Winter Pearmain'}
}
banana= {
'fruit':'banana',
'color':'yellow',
'taste':'sweet',
'cultivars':{'Cavendish'}
}
prune= {
'fruit':'prune',
'color':'purple',
'taste':'sweet',
'cultivars':{'Improved French','Tulare Giant'}
}
fruits= [apple,banana,prune]
print(fruits)
###Output
[{'fruit': 'apple', 'color': 'red', 'taste': 'sweet', 'cultivars': {'Edelborsdorfer', 'Codlin', 'Winter Pearmain'}}, {'fruit': 'banana', 'color': 'yellow', 'taste': 'sweet', 'cultivars': {'Cavendish'}}, {'fruit': 'prune', 'color': 'purple', 'taste': 'sweet', 'cultivars': {'Tulare Giant', 'Improved French'}}]
###Markdown
ConditionsAn *if statement* is used to check conditions. The following conditions are available:* Equals: a == b* Not Equals: a != b* Less than: a < b* Less than or equal to: a <= b* Greater than: a > b * Greater than or equal to: a >= bIt is possible to combine conditions with *and* and *or* (see the short sketch below). If-conditions can have multiple alternatives (*elif*) and a final fallback (*else*).
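A minimal sketch of combining conditions with *and* and *or*:
```python
a = 2
b = 8
if a < b and b < 10:
    print('a is smaller than b, and b is smaller than 10')
if a > b or b == 8:
    print('at least one of the two conditions is true')
```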
###Code
a= 2
b= 8
if a == b:
print(f'{a} == {b}')
elif a < b:
print(f'{a} < {b}')
else:
print(f'{a} > {b}')
###Output
2 < 8
###Markdown
LoopsA loop is a shortcut to run the same code multiple times. For-LoopA For-Loop repeats a predefined number of times. This can be done by providing a list of elements or defining a numeric range. An else-statement at the end of the loop gets executed if the loop ended without hitting a *break*. The keyword *break* can be used to exit the loop early.
###Code
fruits = ['apple', 'banana', 'prune']
for fruit in fruits:
print(fruit)
else:
print('Finished version 1')
for i in range(0,3):
print(fruits[i])
else:
print('Finished version 2')
for fruit in fruits:
if fruit == 'prune':
break
else:
print(fruit)
else:
print('Finished version 3')
###Output
apple
banana
prune
Finished version 1
apple
banana
prune
Finished version 2
apple
banana
###Markdown
While-LoopA while loop is executed as long as the defined condition remains true.
###Code
n= 0
while n < 10:
print(n)
n += 1
n= 1
userInput= ''
while userInput != 'exit':
print(f'{n} execution')
print('Enter "exit" to exit program or press enter to continue')
userInput= input()
n += 1
print('Program exited')
###Output
1 execution
Enter "exit" to exit program or press enter to continue
###Markdown
PythonOnline Python Shell: https://www.python.org/shell/ Run Python code online: https://py3.codeskulptor.org/ Try Jupyter Notebook: https://mybinder.org/v2/gh/ipython/ipython-in-depth/master?filepath=binder/Index.ipynb Hello World
###Code
print('Hello World!')
###Output
Hello World!
###Markdown
Basic objects - Strings, Integers & Floats, Booleans StringsStrings are just an array/list of characters
###Code
string1 = 'This is string1'
string2 = "this is string2"
type(string2)
string1 #print(string1)
string2
#Add two strings
string1 + ' ' + string2
#Indexing
name = 'Shashank Shekhar'
name[-7:]
name[9:16]
# Slicing
name[0:3]
name[:8]
# Length of String
len(name)
# Reverse
name[::-1]
###Output
_____no_output_____
###Markdown
Integers and Floats
###Code
# Addition
1 + 5
a = 4
b = 6
c = 7.6
type(a), type(b), type(c)
a + b
a - b
a / b
# Integer division
b // a
# Modulo (remainder)
b % a
# Exponents
a ** 2
(2 ** 32) // 2, ((2 ** 32) // 2) -1
d = a + c
d
type(d)
a, b = b, a
a, b
###Output
_____no_output_____
###Markdown
Booleans
###Code
x = True
y = False
x
x and y
x or y
5 <= 4
###Output
_____no_output_____
###Markdown
Complex Objects - Lists, Tuples and Sets ListsLists are very similar to arrays. They can contain any type of variable, and they can contain as many variables as you wish. Lists can also be iterated over in a very simple manner.
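A minimal sketch of iterating over a list:
```python
for item in [1, 2, 3, 4]:
    print(item)
```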
###Code
number_list = [1,2,3,4]
number_list
len(number_list)
number_list[-1]
sum(number_list)
number_list.append(67)
number_list
sum(number_list)
name_list = ['Shashank', 'Sourodeep', 'Sai']
name_list[2]
# String formatting
print(name_list[0] + ' is instructor of this course')
print('{} is instructor of this course {}'.format(name_list[0], a))
print(f'{name_list[0]} is instructor of this course, {name_list[1]} is TA.')
print(f'{4**3} is cube of 4')
str(4**3) + ' is cube of 4'
###Output
_____no_output_____ |
Polarity Sentiment.ipynb | ###Markdown
This section of the tutorial will cover sentiment extraction. We will examine documents and assign them a score from positive to negative.
###Code
import nltk.classify.util
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
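# Note: the movie_reviews corpus must be available locally; if it is not,
# it can be downloaded once with nltk.download('movie_reviews').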
###Output
_____no_output_____
###Markdown
First, we'll grab some documents that have been categorized as either negative (neg) or positive (pos) from NLTK's movie review corpus.
###Code
negids = movie_reviews.fileids('neg')
posids = movie_reviews.fileids('pos')
###Output
_____no_output_____
###Markdown
Now, we'll write a function that will build a featureset for each one of these documents. Our featureset is a bag of words that contains all of the words appearing in our document.
###Code
def word_feats(words):
words = [w.lower() for w in words]
myDict = dict([(word, True) for word in words])
return myDict
###Output
_____no_output_____
###Markdown
Now we can extract our features, and divide our data, so that we have a training set and a test set.
###Code
negfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'neg') for f in negids]
posfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'pos') for f in posids]
negcutoff = len(negfeats) * 3 // 4  # integer division so the cutoffs can be used as slice indices
poscutoff = len(posfeats) * 3 // 4
trainfeats = negfeats[:negcutoff] + posfeats[:poscutoff]
testfeats = negfeats[negcutoff:] + posfeats[poscutoff:]
###Output
_____no_output_____
###Markdown
Now, we train our classifier, test its accuracy, and then print out the most informative features in our featureset.
###Code
classifier = NaiveBayesClassifier.train(trainfeats)
print('accuracy:', nltk.classify.util.accuracy(classifier, testfeats))
classifier.show_most_informative_features()
###Output
accuracy: 0.728
Most Informative Features
magnificent = True pos : neg = 15.0 : 1.0
outstanding = True pos : neg = 13.6 : 1.0
insulting = True neg : pos = 13.0 : 1.0
vulnerable = True pos : neg = 12.3 : 1.0
ludicrous = True neg : pos = 11.8 : 1.0
avoids = True pos : neg = 11.7 : 1.0
uninvolving = True neg : pos = 11.7 : 1.0
astounding = True pos : neg = 10.3 : 1.0
fascination = True pos : neg = 10.3 : 1.0
idiotic = True neg : pos = 9.8 : 1.0
###Markdown
Now, we can classify a new review by feeding it to our classifier.
###Code
temp = classifier.prob_classify(testfeats[7][0])
print(temp.samples())
print(temp.prob('neg'))
print(temp.prob('pos'))
###Output
['neg', 'pos']
0.999997377199
2.62280111245e-06
###Markdown
Let's look at another application of Naive Bayes Classification, from the NLTK book. We're going to classify names by gender, based on a featureset we design. First, let's import some labeled names, and then shuffle them.
###Code
from nltk.corpus import names
labeled_names = ([(name, 'male') for name in names.words('male.txt')] +
[(name, 'female') for name in names.words('female.txt')])
import random
random.shuffle(labeled_names)
###Output
_____no_output_____
###Markdown
Now we can choose what features these names are going to have. Each name is like a document. Where we had given each movie review a feature for every word it had, in this case we are going to give every name a feature corresponding to the last letter in it.
###Code
def gender_features(word):
return {'last_letter': word[-1]}
###Output
_____no_output_____
###Markdown
Now we construct our training and test sets in the same way we did for our sentiment analyzer.
###Code
featureset = [(gender_features(n), gender) for (n, gender) in labeled_names]
trainfeats2, testfeats2 = featureset[500:], featureset[:500]
classifier2 = nltk.NaiveBayesClassifier.train(trainfeats2)
print('accuracy:', nltk.classify.util.accuracy(classifier2, testfeats2))
classifier2.show_most_informative_features()
temp = classifier2.prob_classify(gender_features('Neo'))
print(temp.samples())
print(temp.prob('male'))
###Output
['male', 'female']
0.83038276398
###Markdown
We can modify our featureset to include anything we think might help.
###Code
def gender_features(word):
myDict = dict()
myDict['first_letter'] = word[0]
myDict['last_letter'] = word[-1]
return myDict
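# A sketch of how the richer featureset could be used, mirroring the cells above
# (assuming labeled_names is still the shuffled list from before):
#   featureset = [(gender_features(n), gender) for (n, gender) in labeled_names]
#   trainfeats2, testfeats2 = featureset[500:], featureset[:500]
#   classifier2 = nltk.NaiveBayesClassifier.train(trainfeats2)
#   print('accuracy:', nltk.classify.util.accuracy(classifier2, testfeats2))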
###Output
_____no_output_____ |
Week 1/utf-8''Exercise_1_House_Prices_Question.ipynb | ###Markdown
In this exercise you'll try to build a neural network that predicts the price of a house according to a simple formula.So, imagine if house pricing was as easy as a house costs 50k + 50k per bedroom, so that a 1 bedroom house costs 100k, a 2 bedroom house costs 150k etc.How would you create a neural network that learns this relationship so that it would predict a 7 bedroom house as costing close to 400k etc.Hint: Your network might work better if you scale the house price down. You don't have to give the answer 400...it might be better to create something that predicts the number 4, and then your answer is in the 'hundreds of thousands' etc.
###Code
import tensorflow as tf
import numpy as np
from tensorflow import keras
# GRADED FUNCTION: house_model
def house_model(y_new):
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)
ys = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5], dtype=float)
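    # The prices are scaled to hundreds of thousands (1.0 -> 100k, 1.5 -> 150k, ...),
    # following the hint above, so a 7-bedroom prediction should come out near 4.0.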
model = keras.Sequential([keras.layers.Dense(units=1,input_shape=[1])])
model.compile(optimizer='sgd',loss='mean_squared_error')
model.fit(xs,ys,epochs=500)
return model.predict(y_new)[0]
prediction = house_model([7.0])
print(prediction)
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
###Output
_____no_output_____ |
Sklearn Toy Datasets.ipynb | ###Markdown
**Iris Dataset - Classification**
###Code
from sklearn.datasets import load_iris
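# load_iris() returns a Bunch, a dict-like object bundling the data, target,
# feature_names, target_names and DESCR attributes inspected below.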
iris_data=load_iris()
iris_data
dir(iris_data)
iris_data.target_names
iris_data.DESCR
iris_data.feature_names
###Output
_____no_output_____
###Markdown
**Diabetes Dataset Regression**
###Code
from sklearn.datasets import load_diabetes
diabetes_data=load_diabetes()
diabetes_data
dir(diabetes_data)
diabetes_data.feature_names
diabetes_data.target
###Output
_____no_output_____
###Markdown
**Digit Datasets - Classification**
###Code
from sklearn.datasets import load_digits
digits_data=load_digits()
dir(digits_data)
digits_data.feature_names
###Output
_____no_output_____ |
notebooks/3-04.ipynb | ###Markdown
3.4: DataFrame
###Code
# Listing 3.4.1: Creating a DataFrame
import pandas as pd
df = pd.DataFrame(
[[1, 10, 100], [2, 20, 200], [3, 30, 300]],
index=["r1", "r2", "r3"],
columns=["c1", "c2", "c3"],
)
df
# Listing 3.4.2: Selecting data by label
df.loc["r2", "c2"]
# Listing 3.4.3: Selecting all columns
df.loc["r2", :]
# Listing 3.4.4: Selecting all rows
df.loc[:, "c2"]
# Listing 3.4.5: Extracting rows by a list and columns by a slice
df.loc[["r1", "r3"], "c2":"c3"]
# Listing 3.4.6: Selecting data by position
df.iloc[1:3, [0, 2]]
# Listing 3.4.7: Selecting data by column
df["c2"]
# Listing 3.4.8: Comparison operations on a DataFrame
df > 10
# Listing 3.4.9: Extracting data with a comparison
df.loc[df["c2"] > 10]
# Listing 3.4.10: Extracting data with multiple combined conditions
# Rows where column c1 is greater than 1 and column c3 is less than 300
df.loc[(df["c1"] > 1) & (df["c3"] < 300)]
###Output
_____no_output_____ |
content/notebook/Elements of Evolutionary Algorithms.ipynb | ###Markdown
Demonstration Class 02 Elements of Evolutionary Algorithms Luis Martí, LIRA/[DEE](http://www.ele.puc-rio.br)/[PUC-Rio](http://www.puc-rio.br)[http://lmarti.com](http://lmarti.com); [[email protected]](mailto:[email protected]) [Advanced Evolutionary Computation: Theory and Practice](http://lmarti.com/aec-2014) The notebook is better viewed rendered as slides. You can convert it to slides and view them by:- using [nbconvert](http://ipython.org/ipython-doc/1/interactive/nbconvert.html) with a command like: ```bash $ ipython nbconvert --to slides --post serve ```- installing [Reveal.js - Jupyter/IPython Slideshow Extension](https://github.com/damianavila/live_reveal)- using the online [IPython notebook slide viewer](https://slideviewer.herokuapp.com/) (some slides of the notebook might not be properly rendered).This and other related IPython notebooks can be found at the course github repository:* [https://github.com/lmarti/evolutionary-computation-course](https://github.com/lmarti/evolutionary-computation-course) In this demonstration class we will deal with the features and problems shared by most evolutionary algorithms.*Note*: Most of the material used in this notebook comes from [DEAP](https://github.com/DEAP/deap) documentation. Elements to take into account using evolutionary algorithms* **Individual representation** (binary, Gray, floating-point, etc.);* **evaluation** and **fitness assignment**;* **mating selection**, that establishes a partial order of individuals in the population using their fitness function value as reference and determines the degree to which individuals in the population will take part in the generation of new (offspring) individuals.* **variation**, that applies a range of evolution-inspired operators, like crossover, mutation, etc., to synthesize offspring individuals from the current (parent) population. This process is supposed to prime the fittest individuals so they play a bigger role in the generation of the offspring.* **environmental selection**, that merges the parent and offspring individuals to produce the population that will be used in the next iteration. This process often involves the deletion of some individuals using a given criterion in order to keep the amount of individuals below a certain threshold.* **stopping criterion**, that determines when the algorithm should be stopped, either because the optimum was reached or because the optimization process is not progressing. Hence a 'general' evolutionary algorithm can be described as ```def evolutionary_algorithm(): 'Pseudocode of an evolutionary algorithm' populations = [] a list with all the populations populations[0] = initialize_population(pop_size) t = 0 while not stop_criterion(populations[t]): fitnesses = evaluate(populations[t]) offspring = mating_and_variation(populations[t], fitnesses) populations[t+1] = environmental_selection( populations[t], offspring) t = t+1``` Python libraries for evolutionary computation* PaGMO/PyGMO* Inspyred* **Distributed Evolutionary Algorithms in Python (DEAP)**> There are potentially many more, feel free to give me some feedback on this. Open source Python library with: genetic algorithms using any representation; evolutionary strategies (including CMA-ES); multi-objective optimization from the start; co-evolution (cooperative and competitive) of multiple populations; parallelization of the evaluations (and more) using SCOOP; statistics keeping, and; benchmarks module containing some common test functions. 
[https://github.com/DEAP/deap](https://github.com/DEAP/deap) Let's start with an example and analyze it The One Max problem* Maximize the number of ones in a binary string (list, vector, etc.).* More formally, from the set of binary strings of length $n$,$$\mathcal{S}=\left\{s_1,\ldots,s_n\right\}, \text{ with } s_i\in\left\{0,1\right\}.$$* Find $s^\ast\in\mathcal{S}$ such that$$s^\ast = \operatorname*{arg\,max}_{s\in\mathcal{S}} \sum_{i=1}^{n}{s_i}.$$* It's clear that the optimum is an *all-ones* string. Coding the problem
###Code
import random
from deap import algorithms, base, creator, tools
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)
def evalOneMax(individual):
return (sum(individual),)
###Output
_____no_output_____
###Markdown
Defining the elements
###Code
toolbox = base.Toolbox()
toolbox.register("attr_bool", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual,
toolbox.attr_bool, n=100)
toolbox.register("population", tools.initRepeat, list,
toolbox.individual)
toolbox.register("evaluate", evalOneMax)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)
###Output
_____no_output_____
###Markdown
Running the experiment
###Code
pop = toolbox.population(n=300)
###Output
_____no_output_____
###Markdown
Let's run only 10 generations
###Code
result = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2,
ngen=10, verbose=False)
print('Current best fitness:', evalOneMax(tools.selBest(pop, k=1)[0]))
result = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2,
ngen=50, verbose=False)
print('Current best fitness:', evalOneMax(tools.selBest(pop, k=1)[0]))
###Output
Current best fitness: (100,)
###Markdown
Essential features* `deap.creator`: meta-factory allowing to create classes that will fulfill the needs of your evolutionary algorithms.* `deap.base.Toolbox`: A toolbox for evolution that contains the evolutionary operators. You may populate the toolbox with any other function by using the `register()` method.* `deap.base.Fitness([values])`: The fitness is a measure of quality of a solution. If values are provided as a tuple, the fitness is initialized using those values, otherwise it is empty (or invalid). You should inherit from this class to define your custom fitnesses. Defining an individualFirst import the required modules and register the different functions required to create individuals that are a list of floats with a two-objective minimizing fitness.
###Code
import random
from deap import base
from deap import creator
from deap import tools
IND_SIZE = 5
creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessMin)
toolbox1 = base.Toolbox()
toolbox1.register("attr_float", random.random)
toolbox1.register("individual", tools.initRepeat, creator.Individual,
toolbox1.attr_float, n=IND_SIZE)
###Output
_____no_output_____
###Markdown
The first individual can now be built
###Code
ind1 = toolbox1.individual()
###Output
_____no_output_____
###Markdown
Printing the individual ind1 and checking if its fitness is valid will give something like this
###Code
print(ind1)
print(ind1.fitness.valid)
###Output
_____no_output_____
###Markdown
The individual is printed as its base class representation (here a list) and the fitness is invalid because it contains no values. EvaluationThe evaluation is the most "personal" part of an evolutionary algorithm* it is the only part of the library that you must write yourself. * A typical evaluation function takes one individual as argument and returns its fitness as a tuple. * A fitness is a list of floating point values and has a property valid to know if this individual shall be re-evaluated. * The fitness is set by setting the values to the associated tuple. For example, the following evaluates the previously created individual ind1 and assigns its fitness to the corresponding values.
###Code
def evaluate(individual):
# Do some hard computing on the individual
a = sum(individual)
b = len(individual)
return a, 1. / b
ind1.fitness.values = evaluate(ind1)
print(ind1.fitness.valid)
print(ind1.fitness)
###Output
_____no_output_____
###Markdown
Dealing with single objective fitness is not different, the evaluation function must return a tuple because single-objective is treated as a special case of multi-objective. Mutation* The next kind of operator that we will present is the mutation operator. * There is a variety of mutation operators in the deap.tools module. * Each mutation has its own characteristics and may be applied to different type of individual. * Be careful to read the documentation of the selected operator in order to avoid undesirable behaviour.The general rule for mutation operators is that they only mutate, this means that an independent copy must be made prior to mutating the individual if the original individual has to be kept or is a reference to an other individual (see the selection operator). In order to apply a mutation (here a gaussian mutation) on the individual ind1, simply apply the desired function.
###Code
mutant = toolbox1.clone(ind1)
ind2, = tools.mutGaussian(mutant, mu=0.0, sigma=0.2, indpb=0.2)
del mutant.fitness.values
###Output
_____no_output_____
###Markdown
The fitness’ values are deleted because they are no longer related to the individual. As stated above, the mutation does mutate and only mutate an individual; it is not responsible for invalidating the fitness nor anything else. The following shows that ind2 and mutant are in fact the same individual.
###Code
print(ind2 is mutant)
print(mutant is ind2)
###Output
_____no_output_____
###Markdown
Crossover* There is a variety of crossover operators in the `deap.tools` module. * Each crossover has its own characteristics and may be applied to different types of individuals. * Be careful to read the documentation of the selected operator in order to avoid undesirable behaviour.The general rule for crossover operators is that they only mate individuals, this means that independent copies must be made prior to mating the individuals if the original individuals have to be kept or are references to other individuals (see the selection operator). Let's apply a crossover operation to produce the two children that are cloned beforehand.
###Code
child1, child2 = [toolbox1.clone(ind) for ind in (ind1, ind2)]
tools.cxBlend(child1, child2, 0.5)
del child1.fitness.values
del child2.fitness.values
###Output
_____no_output_____
###Markdown
Selection* Selection is made among a population by the selection operators that are available in the deap.tools module. * The selection operator usually takes as first argument an iterable container of individuals and the number of individuals to select. It returns a list containing the references to the selected individuals. The selection is made as follows.
###Code
selected = tools.selBest([child1, child2], 2)
print(child1 in selected)
###Output
_____no_output_____
###Markdown
Using the Toolbox* The toolbox is intended to contain all the evolutionary tools, from the object initializers to the evaluation operator. * It allows easy configuration of each algorithm. * The toolbox has basically two methods, `register()` and `unregister()`, that are used to add or remove tools from the toolbox.* The usual names for the evolutionary tools are mate(), mutate(), evaluate() and select(), however, any name can be registered as long as it is unique. Here is how they are registered in the toolbox.
###Code
from deap import base
from deap import tools
toolbox1 = base.Toolbox()
def evaluateInd(individual):
# Do some computation
return result,
toolbox1.register("mate", tools.cxTwoPoint)
toolbox1.register("mutate", tools.mutGaussian, mu=0, sigma=1, indpb=0.2)
toolbox1.register("select", tools.selTournament, tournsize=3)
toolbox1.register("evaluate", evaluateInd)
###Output
_____no_output_____
###Markdown
Tool Decoration* A powerful feature that helps to control very precise things during an evolution without changing anything in the algorithm or operators. * A decorator is a wrapper that is called instead of a function.* It is asked to make some initialization and termination work before and after the actual function is called. For example, in the case of a constrained domain, one can apply a decorator to the mutation and crossover in order to keep any individual from being out-of-bound. The following defines a decorator that checks if any attribute in the list is out-of-bound and clips it if it is the case. * The decorator is defined using three functions in order to receive the min and max arguments. * Whenever the mutation or crossover is called, bounds will be checked on the resulting individuals.
###Code
def checkBounds(min, max):
def decorator(func):
def wrapper(*args, **kargs):
offspring = func(*args, **kargs)
for child in offspring:
                for i in range(len(child)):
if child[i] > max:
child[i] = max
elif child[i] < min:
child[i] = min
return offspring
return wrapper
return decorator
toolbox.register("mate_example", tools.cxBlend, alpha=0.2)
toolbox.register("mutate_example", tools.mutGaussian, mu=0, sigma=2)
MIN = 0; MAX = 10
toolbox.decorate("mate_example", checkBounds(MIN, MAX))
toolbox.decorate("mutate_example", checkBounds(MIN, MAX))
###Output
_____no_output_____
###Markdown
This will work on crossover and mutation because both return a tuple of individuals. The mutation is often considered to return a single individual but again like for the evaluation, the single individual case is a special case of the multiple individual case. Variations* Variations allows to build simple algorithms using predefined small building blocks. * In order to use a variation, the toolbox must be set to contain the required operators. > For example, in the lastly presented complete algorithm, the crossover and mutation are regrouped in the `varAnd()` function, this function requires the toolbox to contain the `mate()` and `mutate()` functions. The variations can be used to simplify the writing of an algorithm as follow.
###Code
from deap import algorithms
NGEN = 20 # number of generations
CXPB = 0.6
MUTPB = 0.05
for g in range(NGEN):
# Select and clone the next generation individuals
offspring = map(toolbox.clone, toolbox.select(pop, len(pop)))
# Apply crossover and mutation on the offspring
offspring = algorithms.varAnd(offspring, toolbox, CXPB, MUTPB)
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
# The population is entirely replaced by the offspring
pop[:] = offspring
###Output
_____no_output_____
###Markdown
Algorithms* There are several algorithms implemented in the algorithms module. * They are very simple and reflect the basic types of evolutionary algorithms present in the literature. * The algorithms use a Toolbox as defined in the last sections. * In order to setup a toolbox for an algorithm, you must register the desired operators under a specified names, refer to the documentation of the selected algorithm for more details. * Once the toolbox is ready, it is time to launch the algorithm. The *simple evolutionary algorithm* takes 5 arguments, a population, a toolbox, a probability of mating each individual at each generation (`cxpb`), a probability of mutating each individual at each generation (`mutpb`) and a number of generations to accomplish (`ngen`).
###Code
from deap import algorithms
result = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=50)
###Output
_____no_output_____
###Markdown
Computing StatisticsOften, one wants to compile statistics on what is going on in the optimization. The Statistics are able to compile such data on arbitrary attributes of any designated object. To do that, one needs to register the desired statistic functions inside the stats object using the exact same syntax as the toolbox.
###Code
stats = tools.Statistics(key=lambda ind: ind.fitness.values)
###Output
_____no_output_____
###Markdown
The statistics object is created using a key as first argument. This key must be supplied a function that will later be applied to the data on which the statistics are computed. The previous code sample uses the fitness.values attribute of each element.
###Code
import numpy
stats.register("avg", numpy.mean)
stats.register("std", numpy.std)
stats.register("min", numpy.min)
stats.register("max", numpy.max)
###Output
_____no_output_____
###Markdown
* The statistical functions are now registered. * The register function expects an alias as first argument and a function operating on vectors as second argument. * Any subsequent argument is passed to the function when called. The creation of the statistics object is now complete. Predefined AlgorithmsWhen using a predefined algorithm such as `eaSimple()`, `eaMuPlusLambda()`, `eaMuCommaLambda()`, or `eaGenerateUpdate()`, the statistics object previously created can be given as argument to the algorithm.
###Code
pop, logbook = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=0,
stats=stats, verbose=True)
###Output
_____no_output_____
###Markdown
* Statistics will automatically be computed on the population every generation. * The verbose argument prints the statistics on screen while the optimization takes place.* Once the algorithm returns, the final population and a Logbook are returned. * See the next section or the Logbook documentation for more information. Writing Your Own AlgorithmWhen writing your own algorithm, including statistics is very simple. One need only to compile the statistics on the desired object. For example, compiling the statistics on a given population is done by calling the compile() method.
###Code
record = stats.compile(pop)
###Output
_____no_output_____
###Markdown
The argument to the compile function must be an iterable of elements on which the key will be called. Here, our population (`pop`) contains individuals.* The statistics object will call the key function on every individual to retrieve their `fitness.values` attribute. * The resulting array of values is finally given the each statistic function and the result is put into the record dictionary under the key associated with the function. * Printing the record reveals its nature.
###Code
print(record)
# {'std': 4.96, 'max': 63.0, 'avg': 50.2, 'min': 39.0}
###Output
_____no_output_____
###Markdown
Logging DataOnce the data is produced by the statistics, one can save it for further use in a Logbook. * The logbook is intended to be a chronological sequence of entries (as dictionaries). * It is directly compliant with the type of data returned by the statistics objects, but not limited to this data. * *In fact, anything can be incorporated in an entry of the logbook.*
###Code
logbook = tools.Logbook()
logbook.record(gen=0, evals=30, **record)
###Output
_____no_output_____
###Markdown
The `record()` method takes a variable number of arguments, each of which is a piece of data to be recorded. In the last example, we saved the generation, the number of evaluations and everything contained in the record produced by a statistics object using the star magic. All records will be kept in the logbook until its destruction.After a number of records, one may want to retrieve the information contained in the logbook.
###Code
gen, avg = logbook.select("gen", "avg")
###Output
_____no_output_____
###Markdown
The `select()` method provides a way to retrieve all the information associated with a keyword in all records. This method takes a variable number of string arguments, which are the keywords used in the record or statistics object. Here, we retrieved the generation and the average fitness using a single call to select. Printing to Screen* A logbook can be printed to screen or file. * Its `__str__()` method returns a header of each key inserted in the first record and the complete logbook for each of these keys. * The rows are in chronological order of insertion while the columns are in an undefined order. * The easiest way to specify an order is to set the header attribute to a list of strings specifying the order of the columns.
###Code
logbook.header = "gen", "avg", "spam"
###Output
_____no_output_____
###Markdown
The result is:
###Code
print(logbook)
###Output
_____no_output_____
###Markdown
Plotting Features* One of the most common operations when an optimization is finished is to plot the data collected during the evolution. * The `Logbook` allows doing this very efficiently. * Using the select method, one can retrieve the desired data and plot it using matplotlib.
###Code
gen = logbook.select("gen")
fit_mins = logbook.chapters["fitness"].select("min")
size_avgs = logbook.chapters["size"].select("avg")
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax1 = plt.subplots()
line1 = ax1.plot(gen, fit_mins, "b-", label="Minimum Fitness")
ax1.set_xlabel("Generation")
ax1.set_ylabel("Fitness", color="b")
for tl in ax1.get_yticklabels():
tl.set_color("b")
ax2 = ax1.twinx()
line2 = ax2.plot(gen, size_avgs, "r-", label="Average Size")
ax2.set_ylabel("Size", color="r")
for tl in ax2.get_yticklabels():
tl.set_color("r")
lns = line1 + line2
labs = [l.get_label() for l in lns]
ax1.legend(lns, labs, loc="center right")
plt.show()
###Output
_____no_output_____
###Markdown
Constraint HandlingWe have already seen some alternatives.* **Penalty functions** are the most basic way of handling constraints for individuals that cannot be evaluated or are forbidden for problem-specific reasons when falling in a given region. * The penalty function gives a fitness disadvantage to these individuals based on the amount of constraint violation in the solution. In DEAP, a penalty function can be added to any evaluation function using the DeltaPenality decorator provided in the tools module.
###Code
from math import sin
from deap import base
from deap import tools
def evalFct(individual):
"""Evaluation function for the individual."""
x = individual[0]
return (x - 5)**2 * sin(x) * (x/3),
def feasible(individual):
"""Feasability function for the individual. Returns True if feasible False
otherwise."""
if 3 < individual[0] < 5:
return True
return False
def distance(individual):
"""A distance function to the feasability region."""
return (individual[0] - 5.0)**2
toolbox = base.Toolbox()
toolbox.register("evaluate", evalFct)
toolbox.decorate("evaluate", tools.DeltaPenality(feasible, 7.0, distance))
###Output
_____no_output_____
###Markdown
Demostration Class 02 Elements of Evolutionary Algorithms Luis Martí, LIRA/[DEE](http://www.ele.puc-rio.br)/[PUC-Rio](http://www.puc-rio.br)[http://lmarti.com](http://lmarti.com); [[email protected]](mailto:[email protected]) [Advanced Evolutionary Computation: Theory and Practice](http://lmarti.com/aec-2014) The notebook is better viewed rendered as slides. You can convert it to slides and view them by:- using [nbconvert](http://ipython.org/ipython-doc/1/interactive/nbconvert.html) with a command like: ```bash $ ipython nbconvert --to slides --post serve ```- installing [Reveal.js - Jupyter/IPython Slideshow Extension](https://github.com/damianavila/live_reveal)- using the online [IPython notebook slide viewer](https://slideviewer.herokuapp.com/) (some slides of the notebook might not be properly rendered).This and other related IPython notebooks can be found at the course github repository:* [https://github.com/lmarti/evolutionary-computation-course](https://github.com/lmarti/evolutionary-computation-course) In this demonstration class we will deal with the features and problems shared by most evolutionary algorithms.*Note*: Most of the material used in this notebook comes from [DEAP](https://github.com/DEAP/deap) documentation. Elements to take into account using evolutionary algorithms* **Individual representation** (binary, Gray, floating-point, etc.);* **evaluation** and **fitness assignment**;* **mating selection**, that establishes a partial order of individuals in the population using their fitness function value as reference and determines the degree at which individuals in the population will take part in the generation of new (offspring) individuals.* **variation**, that applies a range of evolution-inspired operators, like crossover, mutation, etc., to synthesize offspring individuals from the current (parent) population. This process is supposed to prime the fittest individuals so they play a bigger role in the generation of the offspring.* **environmental selection**, that merges the parent and offspring individuals to produce the population that will be used in the next iteration. This process often involves the deletion of some individuals using a given criterion in order to keep the amount of individuals bellow a certain threshold.* **stopping criterion**, that determines when the algorithm shoulod be stopped, either because the optimum was reach or because the optimization process is not progressing. Hence a 'general' evolutionary algorithm can be described as ```def evolutionary_algorithm(): 'Pseudocode of an evolutionary algorithm' populations = [] a list with all the populations populations[0] = initialize_population(pop_size) t = 0 while not stop_criterion(populations[t]): fitnesses = evaluate(populations[t]) offspring = matting_and_variation(populations[t], fitnesses) populations[t+1] = environmental_selection( populations[t], offspring) t = t+1``` Python libraries for evolutionary computation* PaGMO/PyGMO* Inspyred* **Distributed Evolutionary Algorithms in Python (DEAP)**> There are potentially many more, feel free to give me some feedback on this. Open source Python library with, genetic algorithm using any representation; evolutionary strategies (including CMA-ES); multi-objective optimization from the start; co-evolution (cooperative and competitive) of multiple populations; parallelization of the evaluations (and more) using SCOOP; statistics keeping, and; benchmarks module containing some common test functions. 
[https://github.com/DEAP/deap](https://github.com/DEAP/deap) Lets start with an example and analyze it The One Max problem* Maximize the number of ones in a binary string (list, vector, etc.).* More formally, from the set of binary strings of length $n$,$$\mathcal{S}=\left\{s_1,\ldots,s_n\right\}, \text{ with } s_i=\left\{0,1\right\}.$$* Find $s^\ast\in\mathcal{S}$ such that$$s^\ast = \operatorname*{arg\,max}_{s\in\mathcal{S}} \sum_{i=1}^{n}{s_i}.$$* Its clear that the optimum is an *all-ones* string. Coding the problem
###Code
import random
from deap import algorithms, base, creator, tools
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)
def evalOneMax(individual):
return (sum(individual),)
###Output
_____no_output_____
###Markdown
Defining the elements
###Code
toolbox = base.Toolbox()
toolbox.register("attr_bool", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual,
toolbox.attr_bool, n=100)
toolbox.register("population", tools.initRepeat, list,
toolbox.individual)
toolbox.register("evaluate", evalOneMax)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)
###Output
_____no_output_____
###Markdown
Running the experiment
###Code
pop = toolbox.population(n=300)
###Output
_____no_output_____
###Markdown
Lets run only 10 generations
###Code
result = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2,
ngen=10, verbose=False)
print('Current best fitness:', evalOneMax(tools.selBest(pop, k=1)[0]))
result = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2,
ngen=50, verbose=False)
print('Current best fitness:', evalOneMax(tools.selBest(pop, k=1)[0]))
###Output
Current best fitness: (100,)
###Markdown
Essential features* `deap.creator`: meta-factory allowing to create classes that will fulfill the needs of your evolutionary algorithms.* `deap.base.Toolbox`: A toolbox for evolution that contains the evolutionary operators. You may populate the toolbox with any other function by using the `register()` method* `deap.base.Fitness([values])`: The fitness is a measure of quality of a solution. If values are provided as a tuple, the fitness is initalized using those values, otherwise it is empty (or invalid). You should inherit from this class to define your custom fitnesses. Defining an individualFirst import the required modules and register the different functions required to create individuals that are a list of floats with a minimizing two objectives fitness.
###Code
import random
from deap import base
from deap import creator
from deap import tools
IND_SIZE = 5
creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessMin)
toolbox1 = base.Toolbox()
toolbox1.register("attr_float", random.random)
toolbox1.register("individual", tools.initRepeat, creator.Individual,
toolbox1.attr_float, n=IND_SIZE)
###Output
_____no_output_____
###Markdown
The first individual can now be built
###Code
ind1 = toolbox1.individual()
###Output
_____no_output_____
###Markdown
Printing the individual ind1 and checking if its fitness is valid will give something like this
###Code
print ind1
print ind1.fitness.valid
###Output
_____no_output_____
###Markdown
The individual is printed as its base class representation (here a list) and the fitness is invalid because it contains no values. EvaluationThe evaluation is the most "personal" part of an evolutionary algorithm* it is the only part of the library that you must write yourself. * A typical evaluation function takes one individual as argument and return its fitness as a tuple. * A fitness is a list of floating point values and has a property valid to know if this individual shall be re-evaluated. * The fitness is set by setting the values to the associated tuple. For example, the following evaluates the previously created individual ind1 and assign its fitness to the corresponding values.
###Code
def evaluate(individual):
# Do some hard computing on the individual
a = sum(individual)
b = len(individual)
return a, 1. / b
ind1.fitness.values = evaluate(ind1)
print ind1.fitness.valid
print ind1.fitness
###Output
_____no_output_____
###Markdown
Dealing with single objective fitness is not different, the evaluation function must return a tuple because single-objective is treated as a special case of multi-objective. Mutation* The next kind of operator that we will present is the mutation operator. * There is a variety of mutation operators in the deap.tools module. * Each mutation has its own characteristics and may be applied to different type of individual. * Be careful to read the documentation of the selected operator in order to avoid undesirable behaviour.The general rule for mutation operators is that they only mutate, this means that an independent copy must be made prior to mutating the individual if the original individual has to be kept or is a reference to an other individual (see the selection operator). In order to apply a mutation (here a gaussian mutation) on the individual ind1, simply apply the desired function.
###Code
mutant = toolbox1.clone(ind1)
ind2, = tools.mutGaussian(mutant, mu=0.0, sigma=0.2, indpb=0.2)
del mutant.fitness.values
###Output
_____no_output_____
###Markdown
The fitness’ values are deleted because they not related to the individual anymore. As stated above, the mutation does mutate and only mutate an individual it is not responsible of invalidating the fitness nor anything else. The following shows that ind2 and mutant are in fact the same individual.
###Code
print ind2 is mutant
print mutant is ind2
###Output
_____no_output_____
###Markdown
Crossover* There is a variety of crossover operators in the `deap.tools module`. * Each crossover has its own characteristics and may be applied to different type of individuals. * Be careful to read the documentation of the selected operator in order to avoid undesirable behaviour.The general rule for crossover operators is that they only mate individuals, this means that an independent copies must be made prior to mating the individuals if the original individuals have to be kept or is are references to other individuals (see the selection operator). Lets apply a crossover operation to produce the two children that are cloned beforehand.
###Code
child1, child2 = [toolbox1.clone(ind) for ind in (ind1, ind2)]
tools.cxBlend(child1, child2, 0.5)
del child1.fitness.values
del child2.fitness.values
###Output
_____no_output_____
###Markdown
Selection* Selection is made among a population by the selection operators that are available in the deap.operators module. * The selection operator usually takes as first argument an iterable container of individuals and the number of individuals to select. It returns a list containing the references to the selected individuals. The selection is made as follow.
###Code
selected = tools.selBest([child1, child2], 2)
print child1 in selected
###Output
_____no_output_____
###Markdown
Using the Toolbox* The toolbox is intended to contain all the evolutionary tools, from the object initializers to the evaluation operator. * It allows easy configuration of each algorithms. * The toolbox has basically two methods, `register()` and `unregister()`, that are used to add or remove tools from the toolbox.* The usual names for the evolutionary tools are mate(), mutate(), evaluate() and select(), however, any name can be registered as long as it is unique. Here is how they are registered in the toolbox.
###Code
from deap import base
from deap import tools
toolbox1 = base.Toolbox()
def evaluateInd(individual):
# Do some computation
return result,
toolbox1.register("mate", tools.cxTwoPoint)
toolbox1.register("mutate", tools.mutGaussian, mu=0, sigma=1, indpb=0.2)
toolbox1.register("select", tools.selTournament, tournsize=3)
toolbox1.register("evaluate", evaluateInd)
###Output
_____no_output_____
###Markdown
Tool Decoration* A powerful feature that helps to control very precise thing during an evolution without changing anything in the algorithm or operators. * A decorator is a wrapper that is called instead of a function.* It is asked to make some initialization and termination work before and after the actual function is called. For example, in the case of a constrained domain, one can apply a decorator to the mutation and crossover in order to keep any individual from being out-of-bound. The following defines a decorator that checks if any attribute in the list is out-of-bound and clips it if it is the case. * The decorator is defined using three functions in order to receive the min and max arguments. * Whenever the mutation or crossover is called, bounds will be check on the resulting individuals.
###Code
def checkBounds(min, max):
def decorator(func):
def wrapper(*args, **kargs):
offspring = func(*args, **kargs)
for child in offspring:
for i in xrange(len(child)):
if child[i] > max:
child[i] = max
elif child[i] < min:
child[i] = min
return offspring
return wrapper
return decorator
toolbox.register("mate_example", tools.cxBlend, alpha=0.2)
toolbox.register("mutate_example", tools.mutGaussian, mu=0, sigma=2)
MIN = 0; MAX = 10
toolbox.decorate("mate_example", checkBounds(MIN, MAX))
toolbox.decorate("mutate_example", checkBounds(MIN, MAX))
###Output
_____no_output_____
###Markdown
This will work on crossover and mutation because both return a tuple of individuals. The mutation is often considered to return a single individual but again like for the evaluation, the single individual case is a special case of the multiple individual case. Variations* Variations allows to build simple algorithms using predefined small building blocks. * In order to use a variation, the toolbox must be set to contain the required operators. > For example, in the lastly presented complete algorithm, the crossover and mutation are regrouped in the `varAnd()` function, this function requires the toolbox to contain the `mate()` and `mutate()` functions. The variations can be used to simplify the writing of an algorithm as follow.
###Code
from deap import algorithms
NGEN = 20 # number of generations
CXPB = 0.6
MUTPB = 0.05
for g in range(NGEN):
# Select and clone the next generation individuals
    offspring = list(map(toolbox.clone, toolbox.select(pop, len(pop))))
# Apply crossover and mutation on the offspring
offspring = algorithms.varAnd(offspring, toolbox, CXPB, MUTPB)
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
# The population is entirely replaced by the offspring
pop[:] = offspring
###Output
_____no_output_____
###Markdown
Algorithms* There are several algorithms implemented in the algorithms module. * They are very simple and reflect the basic types of evolutionary algorithms present in the literature. * The algorithms use a Toolbox as defined in the last sections. * In order to set up a toolbox for an algorithm, you must register the desired operators under the specified names; refer to the documentation of the selected algorithm for more details. * Once the toolbox is ready, it is time to launch the algorithm. The *simple evolutionary algorithm* takes 5 arguments: a population, a toolbox, a probability of mating each individual at each generation (`cxpb`), a probability of mutating each individual at each generation (`mutpb`) and a number of generations to accomplish (`ngen`).
###Code
from deap import algorithms
result = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=50)
###Output
_____no_output_____
###Markdown
Computing StatisticsOften, one wants to compile statistics on what is going on in the optimization. The Statistics object is able to compile such data on arbitrary attributes of any designated object. To do that, one needs to register the desired statistic functions inside the stats object using the exact same syntax as the toolbox.
###Code
stats = tools.Statistics(key=lambda ind: ind.fitness.values)
###Output
_____no_output_____
###Markdown
The statistics object is created using a key as its first argument. This key must be a function that will later be applied to the data on which the statistics are computed. The previous code sample uses the fitness.values attribute of each element.
###Code
import numpy
stats.register("avg", numpy.mean)
stats.register("std", numpy.std)
stats.register("min", numpy.min)
stats.register("max", numpy.max)
###Output
_____no_output_____
###Markdown
* The statistical functions are now registered. * The register function expects an alias as its first argument and a function operating on vectors as its second argument. * Any subsequent argument is passed to the function when called. The creation of the statistics object is now complete. Predefined AlgorithmsWhen using a predefined algorithm such as `eaSimple()`, `eaMuPlusLambda()`, `eaMuCommaLambda()`, or `eaGenerateUpdate()`, the statistics object previously created can be given as an argument to the algorithm.
###Code
pop, logbook = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=0,
stats=stats, verbose=True)
###Output
_____no_output_____
###Markdown
* Statistics will automatically be computed on the population every generation. * The verbose argument prints the statistics on screen while the optimization takes place.* Once the algorithm returns, the final population and a Logbook are returned. * See the next section or the Logbook documentation for more information. Writing Your Own AlgorithmWhen writing your own algorithm, including statistics is very simple. One needs only to compile the statistics on the desired object. For example, compiling the statistics on a given population is done by calling the compile() method.
###Code
record = stats.compile(pop)
###Output
_____no_output_____
###Markdown
The argument to the compile function must be an iterable of elements on which the key will be called. Here, our population (`pop`) contains individuals.* The statistics object will call the key function on every individual to retrieve their `fitness.values` attribute. * The resulting array of values is finally given to each statistic function, and the result is put into the record dictionary under the key associated with the function. * Printing the record reveals its nature.
###Code
print(record)
# example output: {'std': 4.96, 'max': 63.0, 'avg': 50.2, 'min': 39.0}
###Output
_____no_output_____
###Markdown
Logging DataOnce the data is produced by the statistics, one can save it for further use in a Logbook. * The logbook is intended to be a chronological sequence of entries (as dictionaries). * It is directly compliant with the type of data returned by the statistics objects, but not limited to this data. * *In fact, anything can be incorporated in an entry of the logbook.*
###Code
logbook = tools.Logbook()
logbook.record(gen=0, evals=30, **record)
###Output
_____no_output_____
###Markdown
The `record()` method takes a variable number of arguments, each of which is a piece of data to be recorded. In the last example, we saved the generation, the number of evaluations and everything contained in the record produced by a statistics object using the star magic. All records will be kept in the logbook until its destruction.After a number of records, one may want to retrieve the information contained in the logbook.
###Code
gen, avg = logbook.select("gen", "avg")
###Output
_____no_output_____
###Markdown
The `select()` method provides a way to retrieve all the information associated with a keyword in all records. This method takes a variable number of string arguments, which are the keywords used in the record or statistics object. Here, we retrieved the generation and the average fitness using a single call to select. Printing to Screen* A logbook can be printed to screen or to a file. * Its `__str__()` method returns a header of each key inserted in the first record and the complete logbook for each of these keys. * The rows are in chronological order of insertion while the columns are in an undefined order. * The easiest way to specify an order is to set the header attribute to a list of strings specifying the order of the columns.
###Code
logbook.header = "gen", "avg", "spam"
###Output
_____no_output_____
###Markdown
The result is:
###Code
print(logbook)
###Output
_____no_output_____
###Markdown
Plotting Features* One of the most common operations when an optimization is finished is to plot the data gathered during the evolution. * The `Logbook` allows doing this very efficiently. * Using the select method, one can retrieve the desired data and plot it using matplotlib.
###Code
gen = logbook.select("gen")
fit_mins = logbook.chapters["fitness"].select("min")
size_avgs = logbook.chapters["size"].select("avg")
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax1 = plt.subplots()
line1 = ax1.plot(gen, fit_mins, "b-", label="Minimum Fitness")
ax1.set_xlabel("Generation")
ax1.set_ylabel("Fitness", color="b")
for tl in ax1.get_yticklabels():
tl.set_color("b")
ax2 = ax1.twinx()
line2 = ax2.plot(gen, size_avgs, "r-", label="Average Size")
ax2.set_ylabel("Size", color="r")
for tl in ax2.get_yticklabels():
tl.set_color("r")
lns = line1 + line2
labs = [l.get_label() for l in lns]
ax1.legend(lns, labs, loc="center right")
plt.show()
###Output
_____no_output_____
###Markdown
Constraint HandlingWe have already seen some alternatives.* **Penalty functions** are the most basic way of handling constraints for individuals that cannot be evaluated, or are forbidden for problem-specific reasons, when falling in a given region. * The penalty function gives a fitness disadvantage to these individuals based on the amount of constraint violation in the solution. In DEAP, a penalty function can be added to any evaluation function using the DeltaPenality decorator provided in the tools module.
###Code
from math import sin
from deap import base
from deap import tools
def evalFct(individual):
"""Evaluation function for the individual."""
x = individual[0]
return (x - 5)**2 * sin(x) * (x/3),
def feasible(individual):
"""Feasability function for the individual. Returns True if feasible False
otherwise."""
if 3 < individual[0] < 5:
return True
return False
def distance(individual):
"""A distance function to the feasability region."""
return (individual[0] - 5.0)**2
toolbox = base.Toolbox()
toolbox.register("evaluate", evalFct)
toolbox.decorate("evaluate", tools.DeltaPenality(feasible, 7.0, distance))
###Output
_____no_output_____ |
jupyter_english/projects_indiv/elo_merchant_recommendation_tbb.ipynb | ###Markdown
[mlcourse.ai](mlcourse.ai) – Open Machine Learning Course Author: Korgun Dmitry, @tbb Individual data analysis project
###Code
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_palette('Set3')
%matplotlib inline
warnings.filterwarnings('ignore')
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.model_selection import validation_curve, learning_curve
from sklearn.metrics import mean_squared_error
###Output
_____no_output_____
###Markdown
Dataset and features description [Kaggle link](https://www.kaggle.com/c/elo-merchant-category-recommendation)Elo is one of the largest payment brands in Brazil. In the dataset we can see clients who use Elo and their transactions. We need to predict the loyalty score for each card_id.The descriptions of the files are:* train.csv - the training set* test.csv - the test set* sample_submission.csv - a sample submission file in the correct format - contains all card_ids you are expected to predict for.* historical_transactions.csv - up to 3 months' worth of historical transactions for each card_id* merchants.csv - additional information about all merchants / merchant_ids in the dataset.* new_merchant_transactions.csv - two months' worth of data for each card_id containing ALL purchases that card_id made at merchant_ids that were not visited in the historical data.The *historical_transactions.csv* and *new_merchant_transactions.csv* files contain information about each card's transactions. *historical_transactions.csv* contains up to 3 months' worth of transactions for every card at any of the provided merchant_ids. *new_merchant_transactions.csv* contains the transactions at new merchants (merchant_ids that this particular card_id has not yet visited) over a period of two months.*merchants.csv* contains aggregate information for each merchant_id represented in the data set. Main dataset:
###Code
train = pd.read_csv('../../data/ELO/train.csv', parse_dates=['first_active_month'])
test = pd.read_csv('../../data/ELO/test.csv', parse_dates=['first_active_month'])
train.head()
# columns description
pd.read_excel('../../data/ELO/Data_Dictionary.xlsx', sheet_name='train', header=2)
###Output
_____no_output_____
###Markdown
Historical Transactions:
###Code
hist = pd.read_csv('../../data/ELO/historical_transactions.csv')
hist.head()
# columns description
pd.read_excel('../../data/ELO/Data_Dictionary.xlsx', sheet_name='history', header=2)
###Output
_____no_output_____
###Markdown
New merchant transactions
###Code
transaction = pd.read_csv('../../data/ELO/new_merchant_transactions.csv')
transaction.head()
# columns description
pd.read_excel('../../data/ELO/Data_Dictionary.xlsx', sheet_name='new_merchant_period', header=2)
###Output
_____no_output_____
###Markdown
A little bit of preprocessing
###Code
train.info()
###Output
_____no_output_____
###Markdown
As the features are categorical, we can change their type to free some memory.
###Code
train['feature_1'] = train['feature_1'].astype('category')
train['feature_2'] = train['feature_2'].astype('category')
train['feature_3'] = train['feature_3'].astype('category')
test['feature_1'] = test['feature_1'].astype('category')
test['feature_2'] = test['feature_2'].astype('category')
test['feature_3'] = test['feature_3'].astype('category')
train.info()
###Output
_____no_output_____
###Markdown
Exploratory data analysis and feature engineering Check missing data
###Code
train.isna().sum()
test.isna().sum()
###Output
_____no_output_____
###Markdown
Target columnLet's start the analysis with the target value
###Code
fig, ax = plt.subplots(figsize = (16, 6))
plt.suptitle('Target value distribution', fontsize=24)
sns.distplot(train['target'], bins=50, ax=ax);
###Output
_____no_output_____
###Markdown
We can see that some of the loyalty values are far apart (less than -30) compared to others.
###Code
(train['target'] < -30).sum(), round((train['target'] < -30).sum() / train['target'].count(), 2)
###Output
_____no_output_____
###Markdown
So, there are 2207 rows (about 1% of the data) whose values differ markedly from the rest. Since the metric is RMSE, these rows might play an important role, so beware of them. First Active MonthIn this section, let's see if there is any distribution change between the train and test sets with respect to the first active month of the card.
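A simple way to keep track of them (just a sketch — this mask is not used in the rest of the analysis) is shown below.
###Code
# Flag the anomalous loyalty scores; the -30 threshold comes from the plot above.
outlier_mask = train['target'] < -30
outlier_mask.value_counts()
###Output
_____no_output_____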
###Code
fig, ax = plt.subplots(figsize = (14, 6))
first_month_count_train = train['first_active_month'].dt.date.value_counts().sort_index()
sns.barplot(first_month_count_train.index,
first_month_count_train.values,
alpha=0.8, ax=ax, color='#96CAC0')
first_month_count_test = test['first_active_month'].dt.date.value_counts().sort_index()
sns.barplot(first_month_count_test.index,
first_month_count_test.values,
alpha=0.8, ax=ax, color='#F6F6BC')
plt.xticks(rotation='vertical')
plt.xlabel('First active month', fontsize=12)
plt.ylabel('Number of cards', fontsize=12)
plt.title('First active month count')
plt.show()
###Output
_____no_output_____
###Markdown
Looks like the distribution is similar between the train and test sets, so we do not really need a time-based split. Anonymous featuresIn this section, let's see if the other variables in the train dataset have good predictive power for the loyalty score.
###Code
fig, ax = plt.subplots(1, 3, figsize = (16, 6))
plt.suptitle('Counts of categiories for features', fontsize=24)
sns.countplot(data=train, x='feature_1', ax=ax[0])
sns.countplot(data=train, x='feature_2', ax=ax[1]).set(ylabel=None)
sns.countplot(data=train, x='feature_3', ax=ax[2]).set(ylabel=None);
fig, ax = plt.subplots(1, 3, figsize=(16, 6))
plt.suptitle('Violineplots for features and target', fontsize=24)
sns.violinplot(x='feature_1', y='target', data=train, ax=ax[0], title='feature_1', palette='Set3')
sns.violinplot(x='feature_2', y='target', data=train, ax=ax[1], title='feature_2', palette='Set3')
sns.violinplot(x='feature_3', y='target', data=train, ax=ax[2], title='feature_3', palette='Set3');
###Output
_____no_output_____
###Markdown
To the naked eye, the distributions of the different categories in all three features look similar. Maybe the models are able to find something here. Now let us make some features based on the historical transactions and merge them with the train and test sets. Number of Historical Transactions for the card
###Code
history_purchase_amount = hist.groupby('card_id')['purchase_amount'].size().reset_index()
history_purchase_amount.columns = ['card_id', 'history_purchase_amount']
train = pd.merge(train, history_purchase_amount, on='card_id', how='left')
test = pd.merge(test, history_purchase_amount, on='card_id', how='left')
history_purchase_amount = train.groupby('history_purchase_amount')['target'].mean().sort_index()[:-50]
fig, ax = plt.subplots(figsize=(16, 6))
plt.suptitle('Loyalty score by Number of historical transactions', fontsize=24)
sns.lineplot(history_purchase_amount.index[::-1],
history_purchase_amount.values[::-1],
ax=ax);
###Output
_____no_output_____
###Markdown
Now let's bin the count of historical transactions and then draw some box plots to see the distribution better.
###Code
bins = [0] + [2 ** p for p in range(4, 13)]
train['binned_history_purchase_amount'] = pd.cut(train['history_purchase_amount'], bins)
plt.figure(figsize=(16, 6))
sns.boxplot(x='binned_history_purchase_amount', y='target', data=train, showfliers=False)
plt.xticks(rotation='vertical')
plt.xlabel('binned_num_hist_transactions', fontsize=12)
plt.ylabel('Loyalty score', fontsize=12)
plt.title('Distribution of binned history purchase amount', fontsize=24)
plt.show()
###Output
_____no_output_____
###Markdown
Value of Historical TransactionsLet's check the value of the historical transactions for the cards and look at the loyalty score distribution based on it.
###Code
gdf = hist.groupby('card_id')['purchase_amount'].agg(['sum', 'mean', 'std', 'min', 'max']).reset_index()
gdf.columns = ['card_id',
'sum_history_purchase_amount',
'mean_history_purchase_amount',
'std_history_purchase_amount',
'min_history_purchase_amount',
'max_history_purchase_amount']
train = pd.merge(train, gdf, on='card_id', how='left')
test = pd.merge(test, gdf, on='card_id', how='left')
bins = np.percentile(train['sum_history_purchase_amount'], range(0,101,10))
train['binned_sum_history_purchase_amount'] = pd.cut(train['sum_history_purchase_amount'], bins)
plt.figure(figsize=(16, 6))
sns.boxplot(x='binned_sum_history_purchase_amount', y='target', data=train, showfliers=False)
plt.xticks(rotation='vertical')
plt.xlabel('Binned sum history purchase amount', fontsize=12)
plt.ylabel('Loyalty score', fontsize=12)
plt.title('Sum of historical transaction value (binned) distribution', fontsize=24)
plt.show()
###Output
_____no_output_____
###Markdown
As we can see, the loyalty score seems to increase with the `sum of historical transaction value`. This is expected. Now we can do the same plot with the `mean value of historical transactions`.
###Code
bins = np.percentile(train['mean_history_purchase_amount'], range(0,101,10))
train['binned_mean_history_purchase_amount'] = pd.cut(train['mean_history_purchase_amount'], bins)
plt.figure(figsize=(16, 6))
sns.boxplot(x='binned_mean_history_purchase_amount', y='target', data=train, showfliers=False)
plt.xticks(rotation='vertical')
plt.xlabel('Binned Mean Historical Purchase Amount', fontsize=12)
plt.ylabel('Loyalty score', fontsize=12)
plt.title('Mean of historical transaction value (binned) distribution', fontsize=24)
plt.show()
###Output
_____no_output_____
###Markdown
New Merchant TransactionsIn this section, let's look at the new merchant transactions data and do some analysis.
###Code
gdf = transaction.groupby('card_id')['purchase_amount'].size().reset_index()
gdf.columns = ['card_id', 'transactions_count']
train = pd.merge(train, gdf, on='card_id', how='left')
test = pd.merge(test, gdf, on='card_id', how='left')
bins = [0, 10, 20, 30, 40, 50, 75, 10000]
train['binned_transactions_count'] = pd.cut(train['transactions_count'], bins)
plt.figure(figsize=(16, 6))
sns.boxplot(x='binned_transactions_count', y='target', data=train, showfliers=False)
plt.xticks(rotation='vertical')
plt.xlabel('Binned transactions count', fontsize=12)
plt.ylabel('Loyalty score', fontsize=12)
plt.title('Number of new merchants transaction (binned) distribution', fontsize=24)
plt.show()
###Output
_____no_output_____
###Markdown
The loyalty score seems to decrease as the number of new merchant transactions increases, except for the last bin.
###Code
gdf = transaction.groupby('card_id')['purchase_amount'].agg(['sum', 'mean', 'std', 'min', 'max']).reset_index()
gdf.columns = ['card_id',
'sum_transactions_count',
'mean_transactions_count',
'std_transactions_count',
'min_transactions_count',
'max_transactions_count']
train = pd.merge(train, gdf, on='card_id', how='left')
test = pd.merge(test, gdf, on='card_id', how='left')
bins = np.nanpercentile(train['sum_transactions_count'], range(0,101,10))
train['binned_sum_transactions_count'] = pd.cut(train['sum_transactions_count'], bins)
plt.figure(figsize=(16, 6))
sns.boxplot(x='binned_sum_transactions_count', y='target', data=train, showfliers=False)
plt.xticks(rotation='vertical')
plt.xlabel('binned sum of new merchant transactions', fontsize=12)
plt.ylabel('Loyalty score', fontsize=12)
plt.title('Sum of new merchants transaction value (binned) distribution', fontsize=24)
plt.show()
###Output
_____no_output_____
###Markdown
Loyalty scores seem to increase with the sum of new merchant transaction values, except for the last bin.
###Code
bins = np.nanpercentile(train['mean_transactions_count'], range(0,101,10))
train['binned_mean_transactions_count'] = pd.cut(train['mean_transactions_count'], bins)
plt.figure(figsize=(16, 6))
sns.boxplot(x='binned_mean_transactions_count', y='target', data=train, showfliers=False)
plt.xticks(rotation='vertical')
plt.xlabel('binned mean of new merchant transactions', fontsize=12)
plt.ylabel('Loyalty score', fontsize=12)
plt.title('Mean of New merchants transaction value (binned) distribution', fontsize=24)
plt.show()
###Output
_____no_output_____
###Markdown
Patterns, insights, peculiarities of data So, according to the results of the data analysis, the following conclusions can be drawn:* There are no gaps in the train/test data, but detailed information is provided only for the last 3 months, so we have some missing data in the generated features.* There are outliers in the target variable that require additional analysis. This could be fraud blocking or, for example, badly filled gaps.* Judging by the dependence of loyalty on the number of purchases, loyalty grows with a sufficiently large number of purchases (> 75), and before that it usually falls. This is expected, since those who stopped at a small number of purchases are, as a rule, not satisfied with the service. Data preprocessing One row in the test data has a missing `first_active_month`, so let's fix it.
###Code
test.loc[test['first_active_month'].isna(), 'first_active_month'] = test.loc[
(test['feature_1'] == 5) & (test['feature_2'] == 2) & (test['feature_3'] == 1),
'first_active_month'].min()
###Output
_____no_output_____
###Markdown
Fill in the data for `card_id`s that do not have transactions over the past three months.
###Code
cols_to_fill = [
'transactions_count', 'sum_transactions_count',
'mean_transactions_count', 'std_transactions_count',
'min_transactions_count', 'max_transactions_count',
]
train[cols_to_fill] = train[cols_to_fill].fillna(0)
test[cols_to_fill] = test[cols_to_fill].fillna(0)
###Output
_____no_output_____
###Markdown
Add several more features Here we add common date features.
###Code
max_date = train['first_active_month'].dt.date.max()
def process_main(df):
date_parts = ['year', 'weekday', 'month']
for part in date_parts:
part_col = 'first_' + part
df[part_col] = getattr(df['first_active_month'].dt, part).astype(int)
df['elapsed_time'] = (max_date - df['first_active_month'].dt.date).dt.days
return df
train = process_main(train)
test = process_main(test)
###Output
_____no_output_____
###Markdown
Cross-validation, hyperparameter tuning Baseline ModelLet's build a baseline model using the features created so far. First of all, we have to split the data into train and validation sets.
###Code
cols_to_use = [
'feature_1', 'feature_2', 'feature_3',
'first_year', 'first_month', 'first_weekday', 'elapsed_time',
'history_purchase_amount', 'sum_history_purchase_amount',
'mean_history_purchase_amount', 'std_history_purchase_amount',
'min_history_purchase_amount', 'max_history_purchase_amount',
'transactions_count', 'sum_transactions_count',
'mean_transactions_count', 'std_transactions_count',
'min_transactions_count', 'max_transactions_count',
]
X_train, X_holdout, y_train, y_holdout = train_test_split(train[cols_to_use],
train['target'],
test_size=0.2)
X_test = test[cols_to_use]
###Output
_____no_output_____
###Markdown
Now that we have prepared the data, we can delete the raw data.
###Code
del train, test, hist, transaction
params = {
'learning_rate': 0.1,
'n_estimators': 100,
'subsample': 1.0,
'max_depth': 3,
'max_features': 'sqrt',
'n_iter_no_change': 5,
'validation_fraction': 0.2,
'tol': 0.00001,
'random_state': 11,
}
###Output
_____no_output_____
###Markdown
Fit baseline model
###Code
%%time
model = GradientBoostingRegressor(**params)
model.fit(X_train[cols_to_use], y_train)
score = mean_squared_error(y_holdout, model.predict(X_holdout))
print(f'Baseline model score: {np.sqrt(score)}')
fi = list(zip(cols_to_use, model.feature_importances_))
fi = pd.DataFrame(sorted(fi, key=lambda x: x[1], reverse=True), columns=['Feature', 'Importance'])
plt.figure(figsize=(16, 6))
sns.barplot(x='Importance', y='Feature', data=fi, orient='h')
plt.title('Features importance', fontsize=24);
###Output
_____no_output_____
###Markdown
Validation and learning curves Change the params and tune `n_estimators` with a validation curve.
###Code
params = {
'learning_rate': 0.1,
'n_estimators': 100,
'subsample': 0.8,
'max_depth': 7,
'max_features': 'sqrt',
'n_iter_no_change': 5,
'validation_fraction': 0.2,
'tol': 0.00001,
'random_state': 11,
}
def plot_validation_curve(model, X_train, y_train,
param, param_range, cv=3,
scoring='neg_mean_squared_error'):
train_scores, test_scores = validation_curve(
model, X_train, y_train, cv=cv,
param_name=param, param_range=param_range,
scoring=scoring, n_jobs=-1
)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.figure(figsize=(16, 6))
plt.title('Validation Curve')
plt.xlabel('n_estimators')
plt.ylabel('Score')
plt.semilogx(param_range, train_scores_mean, label='Training score',
color='darkorange', lw=2)
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color='darkorange', lw=2)
plt.semilogx(param_range, test_scores_mean, label='Cross-validation score',
color='navy', lw=2)
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color='navy', lw=2)
plt.legend(loc='best')
plt.show()
%%time
plot_validation_curve(GradientBoostingRegressor(**params),
X_train[cols_to_use], y_train,
param='n_estimators',
param_range=[10 ** x for x in range(1, 6)])
###Output
_____no_output_____
###Markdown
This validation curve poses two possibilities: first, that we do not have the correct param_range to find the best `n_estimators` and need to expand our search to larger values. The second is that other hyperparameters (such as `learning_rate` or `max_depth`, or even `subsample`) may have more influence on the default model than `n_estimators` by itself does. Although validation curves can give us some intuition about the sensitivity of a model to a single hyperparameter, a grid search is required to understand the performance of a model with respect to multiple hyperparameters.
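A minimal sketch of such a grid search is given below (not executed here; the grid values are illustrative assumptions, not tuned choices, and a full search over them would be slow).
###Code
# Hypothetical grid: the values below are assumptions chosen only for illustration.
param_grid = {
    'learning_rate': [0.05, 0.1, 0.2],
    'max_depth': [3, 5, 7],
    'n_estimators': [100, 300],
}
grid = GridSearchCV(GradientBoostingRegressor(random_state=11), param_grid,
                    cv=3, scoring='neg_mean_squared_error', n_jobs=-1)
# grid.fit(X_train[cols_to_use], y_train)               # expensive, left commented out
# print(grid.best_params_, np.sqrt(-grid.best_score_))
###Output
_____no_output_____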
###Code
def plot_learning_curve(model, X_train, y_train, cv=3,
train_sizes=None, scoring='neg_mean_squared_error',
random_state=11):
if not train_sizes:
train_sizes = np.linspace(.1, 1.0, 8)
train_sizes, train_scores, test_scores = learning_curve(
model, X_train, y_train, cv=cv,
train_sizes=train_sizes,
scoring=scoring,
random_state=random_state,
n_jobs=-1
)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.figure(figsize=(16, 6))
plt.title('Learning curve')
plt.xlabel('Training examples')
plt.ylabel('Score')
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color='r')
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color='g')
plt.plot(train_sizes, train_scores_mean, 'o-', color='r',
label='Training score')
plt.plot(train_sizes, test_scores_mean, 'o-', color='g',
label='Cross-validation score')
plt.legend(loc='best')
plt.show()
%%time
gbm = GradientBoostingRegressor(**params)
plot_learning_curve(gbm, X_train[cols_to_use], y_train)
###Output
_____no_output_____
###Markdown
This learning curve shows high test variability and a low score. We can see that the training and test scores have not yet converged, so this model would potentially benefit from more training data. Finally, the model does not appear to suffer primarily from error due to variance (although the CV scores for the test data are more variable than for the training data), so it is possible that the model is underfitting.
###Code
%%time
new_params = params
new_params['n_iter_no_change'] = None
new_params['n_estimators'] = 100
model = GradientBoostingRegressor(**new_params)
model.fit(X_train[cols_to_use], y_train)
score = mean_squared_error(y_holdout, model.predict(X_holdout))
print(f'Final model score: {np.sqrt(score)}')
submission = pd.read_csv('../../data/ELO/sample_submission.csv')
submission['target'] = model.predict(X_test)
submission.to_csv('submit.csv', index=False)
###Output
_____no_output_____ |
Google Maps API/First Google maps tuto.ipynb | ###Markdown
First GOOGLE maps tuto
###Code
import googlemaps
from datetime import datetime
gmaps = googlemaps.Client(key='your API')
# Geocoding an address
geocode_result = gmaps.geocode('1600 Amphitheatre Parkway, Mountain View, CA')
geocode_result
###Output
_____no_output_____
###Markdown
** Display the reverse geocode **
###Code
# Look up an address with reverse geocoding
reverse_geocode_result = gmaps.reverse_geocode((40.714224, -73.961452))
reverse_geocode_result
###Output
_____no_output_____
###Markdown
** Display the directions **
###Code
# Request directions via public transit
now = datetime.now()
directions_result = gmaps.directions("Sydney Town Hall",
"Parramatta, NSW",
mode="transit",
departure_time=now)
directions_result
###Output
_____no_output_____ |
examples/setup_disc.ipynb | ###Markdown
Accretion discIn this tutorial we set up a protoplanetary disc around a star represented by a sink particle, and we add a planet. This notebook generates a Phantom "temporary" dump file that can be read by Phantom as an initial condition. It also generates a Phantom "in" file. Together, these files can start a Phantom simulation. InitializationFirst we import the required modules.
###Code
import matplotlib.pyplot as plt
import numpy as np
import phantomsetup
###Output
_____no_output_____
###Markdown
Here we set some constants for convenience.
###Code
igas = phantomsetup.defaults.PARTICLE_TYPE['igas']
###Output
_____no_output_____
###Markdown
ParametersNow we set the parameters for the problem.First is the `prefix` which sets the file name for the dump file and Phantom in file.
###Code
prefix = 'disc'
###Output
_____no_output_____
###Markdown
ResolutionWe choose the resolution to be $10^6$ gas particles.
###Code
number_of_particles = 1_000_000
particle_type = igas
###Output
_____no_output_____
###Markdown
ViscosityThe SPH $\alpha$ viscosity parameter is set to its minimal value of 0.1.
###Code
alpha_artificial = 0.1
###Output
_____no_output_____
###Markdown
UnitsWe set the length and mass units to be au and solar masses, respectively. We will also set the time unit such that the gravitational constant is unity.
###Code
length_unit = phantomsetup.units.unit_string_to_cgs('au')
mass_unit = phantomsetup.units.unit_string_to_cgs('solarm')
gravitational_constant = 1.0
###Output
_____no_output_____
###Markdown
StarThe star is of solar mass, at the origin, with a 5 au accretion radius.
###Code
stellar_mass = 1.0
stellar_accretion_radius = 5.0
stellar_position = (0.0, 0.0, 0.0)
stellar_velocity = (0.0, 0.0, 0.0)
###Output
_____no_output_____
###Markdown
DiscThe disc has a mass of 0.01 solar masses and extends from 10 au to 200 au.
###Code
radius_min = 10.0
radius_max = 200.0
disc_mass = 0.01
###Output
_____no_output_____
###Markdown
Equation of stateThe equation of state is locally isothermal. We set the aspect ratio H/R at a reference radius.
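In a locally isothermal disc the sound speed is a power law in radius and the aspect ratio sets its normalisation; assuming the usual convention (the exact definitions inside `phantomsetup` are taken on trust here): $$c_\mathrm{s}(R) = c_{\mathrm{s},0}\left(\frac{R}{R_\mathrm{ref}}\right)^{-q}, \qquad \left.\frac{H}{R}\right|_{R_\mathrm{ref}} = \frac{c_{\mathrm{s},0}}{v_\mathrm{K}(R_\mathrm{ref})}.$$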
###Code
ieos = 3
q_index = 0.75
aspect_ratio = 0.05
reference_radius = 10.0
###Output
_____no_output_____
###Markdown
PlanetWe add a planet at 100 au.
###Code
planet_mass = 0.001
planet_position = (100.0, 0.0, 0.0)
orbital_radius = np.linalg.norm(planet_position)
planet_velocity = np.sqrt(gravitational_constant * stellar_mass / orbital_radius)
###Output
_____no_output_____
###Markdown
We set the planet accretion radius as a fraction of the Hill sphere radius.
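For reference, the standard estimate of the Hill sphere radius is assumed here (the exact convention inside `phantomsetup.orbits.hill_sphere_radius` is taken on trust): $$r_\mathrm{H} \simeq a\left(\frac{m_\mathrm{p}}{3\,M_*}\right)^{1/3},$$ with $a$ the orbital radius, $m_\mathrm{p}$ the planet mass and $M_*$ the stellar mass.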
###Code
planet_accretion_radius_fraction_hill_radius = 0.25
planet_hill_radius = phantomsetup.orbits.hill_sphere_radius(
orbital_radius, planet_mass, stellar_mass
)
planet_accretion_radius = (
planet_accretion_radius_fraction_hill_radius * planet_hill_radius
)
###Output
_____no_output_____
###Markdown
Surface density distributionFor the surface density distribution we use the Lynden-Bell and Pringle (1974) self-similar solution, i.e. a power law with an exponential taper.
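Written out, the profile implemented below is assumed to be the standard form $$\Sigma(R) \propto \left(\frac{R}{R_\mathrm{c}}\right)^{-\gamma}\exp\left[-\left(\frac{R}{R_\mathrm{c}}\right)^{2-\gamma}\right],$$ with the normalisation fixed by the disc mass.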
###Code
def density_distribution(radius, radius_critical, gamma):
"""Self-similar disc surface density distribution.
This is the Lyden-Bell and Pringle (1974) solution, i.e. a power law
with an exponential taper.
"""
return phantomsetup.disc.self_similar_accretion_disc(radius, radius_critical, gamma)
radius_critical = 100.0
gamma = 1.0
args = (radius_critical, gamma)
###Output
_____no_output_____
###Markdown
Instantiate the `Setup` objectThe following instantiates the `phantomsetup.Setup` object.
###Code
setup = phantomsetup.Setup()
###Output
_____no_output_____
###Markdown
Set attributes and add particles PrefixSet the prefix.
###Code
setup.prefix = prefix
###Output
_____no_output_____
###Markdown
UnitsSet units.
###Code
setup.set_units(
length=length_unit, mass=mass_unit, gravitational_constant_is_unity=True
)
###Output
_____no_output_____
###Markdown
Equation of stateSet the equation of state. We get `polyk` from the aspect ratio parametrization.
###Code
polyk = phantomsetup.eos.polyk_for_locally_isothermal_disc(
q_index, reference_radius, aspect_ratio, stellar_mass, gravitational_constant
)
setup.set_equation_of_state(ieos=ieos, polyk=polyk)
###Output
_____no_output_____
###Markdown
ViscositySet the numerical viscosity to Phantom disc viscosity.
###Code
setup.set_dissipation(disc_viscosity=True, alpha=alpha_artificial)
###Output
_____no_output_____
###Markdown
StarAdd a star at the origin.
###Code
setup.add_sink(
mass=stellar_mass,
accretion_radius=stellar_accretion_radius,
position=stellar_position,
velocity=stellar_velocity,
)
###Output
_____no_output_____
###Markdown
DiscAdd the disc around the star.
###Code
disc = phantomsetup.Disc(
particle_type=particle_type,
number_of_particles=number_of_particles,
disc_mass=disc_mass,
density_distribution=density_distribution,
radius_range=(radius_min, radius_max),
q_index=q_index,
aspect_ratio=aspect_ratio,
reference_radius=reference_radius,
stellar_mass=stellar_mass,
gravitational_constant=gravitational_constant,
extra_args=(radius_critical, gamma),
)
setup.add_container(disc)
###Output
_____no_output_____
###Markdown
PlanetAdd a planet in orbit around the star.
###Code
setup.add_sink(
mass=planet_mass,
accretion_radius=planet_accretion_radius,
position=planet_position,
velocity=planet_velocity,
)
###Output
_____no_output_____
###Markdown
PlotNow we plot some quantities to see what we have set up.First, we plot the particles in the xy-plane. The sink particles are marked in red.
###Code
x, y, z = disc.arrays['position'][:, 0], disc.arrays['position'][:, 1], disc.arrays['position'][:, 2]
fig, ax = plt.subplots()
ax.plot(x[::10], y[::10], 'k.', ms=0.5)
for sink in setup.sinks:
ax.plot(sink.position[0], sink.position[1], 'ro')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_aspect('equal')
###Output
_____no_output_____
###Markdown
Next we plot the particles in the rz-plane.
###Code
position_cylindrical, velocity_cylindrical = phantomsetup.geometry.coordinate_transform(
position=disc.arrays['position'],
velocity=disc.arrays['velocity'],
geometry_from='cartesian',
geometry_to='cylindrical'
)
R = position_cylindrical[:, 0]
fig, ax = plt.subplots()
ax.plot(R[::10], z[::10], 'k.', ms=0.5)
ax.set_xlabel('$R$')
ax.set_ylabel('$z$')
ax.set_aspect('equal')
ax.set_ylim(bottom=2 * z.min(), top=2 * z.max())
###Output
_____no_output_____
###Markdown
Finally, we plot $v_{\phi}$ as a function of radius.
###Code
vphi = velocity_cylindrical[:, 1]
fig, ax = plt.subplots()
ax.plot(R[::10], vphi[::10], 'k.', ms=0.5)
ax.set_xlabel('$R$')
ax.set_ylabel(r'$v_{\phi}$')
###Output
_____no_output_____
###Markdown
Write to fileNow that we are happy with the setup, write the "temporary" dump file with the initial conditions and the Phantom "in" file.First we set a working directory for the simulation.
###Code
working_dir = '~/runs/disc'
setup.write_dump_file(directory=working_dir)
setup.write_in_file(directory=working_dir)
###Output
_____no_output_____
###Markdown
Compile PhantomYou can start a Phantom calculation from these two files but you must compile Phantom with the correct Makefile variables. We can use the `phantom_compile_command` method to show how Phantom would be compiled.
###Code
print(setup.phantom_compile_command())
###Output
make \
SETUP=empty \
SYSTEM=gfortran \
HDF5=yes \
HDF5ROOT=/usr/local/opt/hdf5 \
DISC_VISCOSITY=yes \
DRIVING=no \
DUST=no \
DUSTGROWTH=no \
GRAVITY=no \
H2CHEM=no \
IND_TIMESTEPS=yes \
INJECT_PARTICLES=no \
ISOTHERMAL=yes \
KERNEL=cubic \
MAXDUSTSMALL=11 \
MAXDUSTLARGE=11 \
MCFOST=no \
MHD=no \
NONIDEALMHD=no \
PERIODIC=no \
PHOTO=no
|
module4-makefeatures/4.Lecture_MakingFeatures.ipynb | ###Markdown
Lambda School Data Science*Unit 1, Sprint 1, Module 4*--- _Lambda School Data Science_ Make featuresObjectives- understand the purpose of feature engineering- work with strings in pandas- work with dates and times in pandasLinks- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)- Python Data Science Handbook - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series Get LendingClub data[Source](https://www.lendingclub.com/info/download-data.action)
###Code
# we can get the zip file with !wget.
!wget https://resources.lendingclub.com/LoanStats_2019Q1.csv.zip
# list the files to confirm the zip file downloaded.
!ls
# now we unzip the zip file.
!unzip LoanStats_2019Q1.csv.zip
# look at the headers of the .csv file without loading it into a data frame with !head.
!head LoanStats_2019Q1.csv
# look at the last rows of the .csv file with !tail.
!tail LoanStats_2019Q1.csv
###Output
"","","40000","40000","40000"," 36 months"," 6.46%","1225.24","A","A1","President - North America","4 years","MORTGAGE","520000","Verified","Jan-2019","Current","n","","","credit_card","Credit card refinancing","752xx","TX","9.96","0","Sep-2006","1","43","","21","0","59529","29.6%","57","f","33858.42","33858.42","7337.08","7337.08","6141.58","1195.50","0.0","0.0","0.0","Jul-2019","1225.24","Aug-2019","Jul-2019","0","43","1","Individual","","","","0","0","864480","2","3","0","0","28","27151","35","3","5","38479","34","111100","1","0","3","7","41166","41467","34.3","0","0","147","146","2","2","9","3","43","3","43","0","4","5","8","16","10","14","36","5","21","0","0","0","3","98.2","12.5","0","0","1033574","95958","100800","78634","","","","","","","","","","","","N","","","","","","","","","","","","","","","N","","","","","",""
"","","5000","5000","5000"," 36 months"," 13.56%","169.83","C","C1","","n/a","MORTGAGE","48000","Not Verified","Jan-2019","Current","n","","","home_improvement","Home improvement","338xx","FL","8.28","2","May-2006","0","11","","8","0","3846","13.6%","21","w","4300.52","4300.52","1011.45","1011.45","699.48","311.97","0.0","0.0","0.0","Jul-2019","169.83","Aug-2019","Jul-2019","0","","1","Individual","","","","0","0","35666","0","1","0","0","49","5336","47","0","2","0","23","28200","0","3","0","2","4458","","","0","0","99","151","13","13","1","","","","11","0","0","3","0","1","4","6","16","3","8","0","0","0","0","90.5","","0","0","88613","9182","0","11413","","","","","","","","","","","","N","","","","","","","","","","","","","","","N","","","","","",""
"","","6000","6000","6000"," 36 months"," 6.46%","183.79","A","A1","","< 1 year","MORTGAGE","96000","Not Verified","Jan-2019","Current","n","","","debt_consolidation","Debt consolidation","060xx","CT","0.31","0","May-1993","0","","91","16","1","50","0.1%","36","w","5078.74","5078.74","1100.59","1100.59","921.26","179.33","0.0","0.0","0.0","Jul-2019","183.79","Aug-2019","Jul-2019","0","","1","Individual","","","","0","0","50","0","0","0","1","15","0","","1","4","50","0","33500","1","1","1","5","3","14850","0.3","0","0","15","306","7","7","0","45","","7","","0","1","1","3","12","1","16","35","1","16","0","0","0","1","100","0","1","0","33500","50","14900","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","N","","","","","",""
"","","16000","16000","16000"," 36 months"," 16.14%","563.62","C","C4","Estimator/Supervisor","10+ years","MORTGAGE","32000","Source Verified","Jan-2019","Current","n","","","debt_consolidation","Debt consolidation","925xx","CA","20.89","0","Dec-2010","0","35","117","13","1","17066","49.9%","15","f","13837.91","13837.91","3367.37","3367.37","2162.09","1205.28","0.0","0.0","0.0","Jul-2019","563.62","Aug-2019","Jul-2019","0","","1","Individual","","","","0","0","17066","1","0","0","0","55","0","","3","3","2353","50","34200","0","0","1","3","1313","6446","60.7","0","0","55","96","1","1","0","1","35","12","35","0","7","8","8","8","1","13","14","8","13","0","0","0","3","93.3","37.5","1","0","34200","17066","16400","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","N","","","","","",""
"","","16000","16000","16000"," 60 months"," 11.31%","350.36","B","B3","MATERIAL HANDLER","5 years","MORTGAGE","72000","Verified","Jan-2019","Current","n","","","debt_consolidation","Debt consolidation","850xx","AZ","7.02","2","Sep-2005","0","8","64","12","1","11882","37.1%","39","w","2907.09","2907.09","13792.11","13792.11","13092.91","699.20","0.0","0.0","0.0","Jul-2019","350.36","Aug-2019","Jul-2019","0","8","1","Individual","","","","0","0","225413","0","2","0","0","28","62953","79","0","0","5568","45","32000","2","2","4","1","18784","11705","49.1","2","0","159","129","25","24","3","25","","1","9","2","4","5","5","9","20","9","16","5","12","0","0","2","0","94.3","0","1","0","251486","74835","23000","63090","","","","","","","","","","","","N","","","","","","","","","","","","","","","N","","","","","",""
"","","29250","29250","29250"," 60 months"," 18.94%","757.8","D","D2","sr register csa","7 years","MORTGAGE","65000","Verified","Jan-2019","Current","n","","","debt_consolidation","Debt consolidation","774xx","TX","29.52","0","Jan-2011","0","","","20","0","38465","69%","22","w","27401.57","27401.57","4485.24","4485.24","1848.43","2636.81","0.0","0.0","0.0","Jul-2019","757.8","Aug-2019","Jul-2019","0","","1","Individual","","","","0","0","204764","0","1","0","1","17","20323","79","1","3","13874","69","55505","0","0","0","4","10238","2985","87","0","0","78","95","7","7","1","19","","13","","0","13","18","13","13","3","18","18","18","20","","0","0","1","100","76.9","0","0","237833","58788","38800","25728","","","","","","","","","","","","N","","","","","","","","","","","","","","","N","","","","","",""
Total amount funded in policy code 1: 1928448350
Total amount funded in policy code 2: 799382985
###Markdown
Load LendingClub datapandas documentation- [`read_csv`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)- [`options.display`](https://pandas.pydata.org/pandas-docs/stable/options.htmlavailable-options)
###Code
# import pandas as pd to load the data set.
import pandas as pd
# load the data set; we saw above that the first row is a header note and there are 2 footer rows at the end, so we skip 1 row and 2 footers and use the python engine.
df = pd.read_csv('LoanStats_2019Q1.csv', skiprows=1, skipfooter=2, engine='python')
# show the shape of the data set.
print(df.shape)
# show the header of the data set.
df.head()
# check to see where the NA's are.
df.isna().sum()
# we can look at a specific column and see the NA's.
df[df.loan_amnt.isna()]
# look at the info of the data frame.
df.info()
# we can set how many columns and row we want to see.
pd.options.display.max_columns = 150
pd.options.display.max_rows = 150
# we can get the transposed data set with .T; this may look better with so many columns in the data set.
df.head().T
###Output
_____no_output_____
###Markdown
Work with strings For machine learning, we usually want to replace strings with numbers.We can get info about which columns have a datatype of "object" (strings)
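A quick way to list just those columns (a small aside, using only standard pandas):
###Code
# Object (string) columns are the ones we will need to convert before modeling.
df.select_dtypes(include='object').columns
###Output
_____no_output_____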
###Code
# look at the statistics of the data set; describe() typically only includes numeric columns, but with include='object' it shows the object columns.
df.describe(include='object')
# let's look at the 'emp_length' column values.
df.emp_length.value_counts()
###Output
_____no_output_____
###Markdown
Convert `int_rate`
###Code
# load the data set again, same set up.
df = pd.read_csv('LoanStats_2019Q1.csv', skiprows=1, skipfooter=2, engine='python')
# show the shape of the data set.
print(df.shape)
# show the headers of the data set.
df.head()
x = '12.5%'
# lets remove the % sign in the interest rate colum so it can be used.
df['int rate'] = df['int_rate'].str.strip('%').astype(float)
# show the new data for 'int_rate' column.
df['int rate'].head()
###Output
_____no_output_____
###Markdown
Define a function to remove percent signs from strings and convert to floats Apply the function to the `int_rate` column
###Code
x
# create a function to remove the '%' sign.
def remove_percent_sign(string):
'''This function takes a string as input, strips the trailing percent sign, and returns float interest rate.'''
return float(string.strip('%'))
remove_percent_sign(x)
# we can now apply the 'remove_percent_sign' function to the whole 'int_rate' column.
df['int_rate'] = df['int_rate'].apply(remove_percent_sign)
df['int_rate'].head()
# now that the '%' sign is removed we can show the 'int_rate' column in a histogram plot.
df.int_rate.hist();
###Output
_____no_output_____
###Markdown
Clean `emp_title`Look at top 20 titles
###Code
df.emp_title.nunique()
###Output
_____no_output_____
###Markdown
How often is `emp_title` null?
###Code
# see how many NA's are in the emp_title column.
df.emp_title.isna().sum()
df.shape
###Output
_____no_output_____
###Markdown
Clean the title and handle missing values- Capitalize- Strip spaces- Replace 'NaN' with missing
###Code
import numpy as np
isinstance(np.nan,str)
import numpy as np
example = ['owner', 'Supervisor ', ' Project manager', np.nan]
def clean_emp_title(x):
if isinstance(x, str):
return x.strip().title()
else:
return 'Missing'
#for ex in example:
# print(clean_emp_title(ex))
[clean_emp_title(x) for x in example]
df['emp_title'] = df['emp_title'].apply(clean_emp_title)
df['emp_title'].head(20)
df.emp_title.nunique()
df.emp_title.value_counts().head(10)
###Output
_____no_output_____
###Markdown
Create `emp_title_manager`pandas documentation: [`str.contains`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html)
###Code
df['emp_title'].str.contains('manager', case=False).head(10)
df['emp_title'].iloc[0:10]
df['emp_title_manager'] = df.emp_title.str.contains('Manager')
df['emp_title_manager'].value_counts()
df.groupby('emp_title_manager').int_rate.mean()
###Output
_____no_output_____
###Markdown
Work with dates pandas documentation- [to_datetime](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html)- [Time/Date Components](https://pandas.pydata.org/pandas-docs/stable/timeseries.htmltime-date-components) "You can access these properties via the `.dt` accessor"
###Code
df['issue_d'].describe()
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
df['issue_d'].describe()
df['issue_month'] = df['issue_d'].dt.month
df['issue_month'].value_counts()
df.head()
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'])
df['days_since_earlist_cr_line'] = (df['issue_d'] - df['earliest_cr_line']).dt.days
df['days_since_earlist_cr_line'].describe()
27453/365
[col for col in df if col.endswith('_d')]
for col in ['last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d']:
df[col] = pd.to_datetime(df[col])
df.describe(include='datetime')
###Output
_____no_output_____ |
Hilbert Matrix Problem.ipynb | ###Markdown
Problem 2. Consider the minimization problem: minimize $x^T A x$, $x \in \mathbb{R}^5$, where $A$ is the $5\times 5$ Hilbert matrix.
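Since the Hilbert matrix is symmetric and positive definite, $f(x) = x^T A x$ is a strictly convex quadratic with gradient $\nabla f(x) = 2Ax$, so its unique minimizer is $x^* = 0$ with $f(x^*) = 0$; the gradient-descent code below should therefore drive the objective toward zero.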
###Code
from numpy import *
import scipy.linalg as la
from matplotlib import pyplot as plt
h = la.hilbert(5)  # 5x5 Hilbert matrix (symmetric positive definite)

def f(x):
    # objective f(x) = x^T A x, reusing the precomputed Hilbert matrix
    return x.dot(h.dot(x))

def grad_f(x):
    # gradient of x^T A x for symmetric A is 2 A x
    return 2*h.dot(x)
def gradient(max_gradf=1.0e-2, x0=[1., 2., 3., 4., 5.], t=0.1):
    # Fixed-step gradient descent: iterate until ||grad f(x_k)|| <= max_gradf.
    fs = []
    xk = array(x0)
    gfk = grad_f(xk)
    gfk_n2 = la.norm(gfk)
    while gfk_n2 > max_gradf:
        gfk = grad_f(xk)
        gfk_n2 = la.norm(gfk)
        xk -= t*gfk  # descent step with constant step size t
        fk = f(xk)
        fs.append(fk)  # keep the objective history for plotting
    return array(fs), xk
def conv_rate(alg):
fs, x = alg()
rs = (fs[1:]+1)/(fs[:-1]+1)
plt.plot(rs)
plt.show()
return rs
fs, xk = gradient()
plt.plot(fs)
plt.show()
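# conv_rate() is defined above but never called; as a usage sketch (assumed
# intent), run it on the gradient-descent routine: it plots the ratio of
# successive (shifted) objective values, where ratios near 1 mean slow,
# roughly linear convergence, and returns those ratios.
rates = conv_rate(gradient)
print(rates[-5:])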
###Output
_____no_output_____ |
Experimental/Preprocessing_MI/Preprocessing_BCICIV2a.ipynb | ###Markdown
Download Important Modules
###Code
!pip install -U git+https://github.com/UN-GCPDS/python-gcpds.utils > /dev/null
!pip install -U git+https://github.com/UN-GCPDS/python-gcpds.databases > /dev/null #Module for database reading.
!pip install -U git+https://github.com/UN-GCPDS/python-gcpds.filters.git > /dev/null #Module for filters
!pip install mne > /dev/null #The MNE library is installed
FILEID = "1O2Iiam5QVaHQFd2t_pWmDpmPUP_0pNyV"
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id='$FILEID -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id="$FILEID -O MI_EEG_ClassMeth.zip && rm -rf /tmp/cookies.txt > /dev/null
!unzip MI_EEG_ClassMeth.zip > /dev/null #Package with useful functions for motor imagery classification based in EEG.
!dir
###Output
Running command git clone -q https://github.com/UN-GCPDS/python-gcpds.utils /tmp/pip-req-build-lsd9bxp8
Running command git clone -q https://github.com/UN-GCPDS/python-gcpds.databases /tmp/pip-req-build-_elsxqrn
Running command git clone -q https://github.com/UN-GCPDS/python-gcpds.filters.git /tmp/pip-req-build-kqqckcrw
--2021-11-08 15:12:26-- https://docs.google.com/uc?export=download&confirm=&id=1O2Iiam5QVaHQFd2t_pWmDpmPUP_0pNyV
Resolving docs.google.com (docs.google.com)... 173.194.217.113, 173.194.217.138, 173.194.217.139, ...
Connecting to docs.google.com (docs.google.com)|173.194.217.113|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://doc-00-bk-docs.googleusercontent.com/docs/securesc/30ckta7bknkgi0kt3cker95n6i1lskv1/4tr31c1j8ut8mp1ai30mmt57vpk3866p/1636384275000/09711457892284675029/07033108602856203522Z/1O2Iiam5QVaHQFd2t_pWmDpmPUP_0pNyV?e=download [following]
--2021-11-08 15:12:26-- https://doc-00-bk-docs.googleusercontent.com/docs/securesc/30ckta7bknkgi0kt3cker95n6i1lskv1/4tr31c1j8ut8mp1ai30mmt57vpk3866p/1636384275000/09711457892284675029/07033108602856203522Z/1O2Iiam5QVaHQFd2t_pWmDpmPUP_0pNyV?e=download
Resolving doc-00-bk-docs.googleusercontent.com (doc-00-bk-docs.googleusercontent.com)... 74.125.141.132, 2607:f8b0:400c:c06::84
Connecting to doc-00-bk-docs.googleusercontent.com (doc-00-bk-docs.googleusercontent.com)|74.125.141.132|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://docs.google.com/nonceSigner?nonce=q2gljhfpp2scm&continue=https://doc-00-bk-docs.googleusercontent.com/docs/securesc/30ckta7bknkgi0kt3cker95n6i1lskv1/4tr31c1j8ut8mp1ai30mmt57vpk3866p/1636384275000/09711457892284675029/07033108602856203522Z/1O2Iiam5QVaHQFd2t_pWmDpmPUP_0pNyV?e%3Ddownload&hash=9acv83h6vrs5q6lgqmoeb9pj2qs3qui1 [following]
--2021-11-08 15:12:26-- https://docs.google.com/nonceSigner?nonce=q2gljhfpp2scm&continue=https://doc-00-bk-docs.googleusercontent.com/docs/securesc/30ckta7bknkgi0kt3cker95n6i1lskv1/4tr31c1j8ut8mp1ai30mmt57vpk3866p/1636384275000/09711457892284675029/07033108602856203522Z/1O2Iiam5QVaHQFd2t_pWmDpmPUP_0pNyV?e%3Ddownload&hash=9acv83h6vrs5q6lgqmoeb9pj2qs3qui1
Connecting to docs.google.com (docs.google.com)|173.194.217.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://doc-00-bk-docs.googleusercontent.com/docs/securesc/30ckta7bknkgi0kt3cker95n6i1lskv1/4tr31c1j8ut8mp1ai30mmt57vpk3866p/1636384275000/09711457892284675029/07033108602856203522Z/1O2Iiam5QVaHQFd2t_pWmDpmPUP_0pNyV?e=download&nonce=q2gljhfpp2scm&user=07033108602856203522Z&hash=0jrm8e8ijcu13jvk68hhqsuefecp7rb0 [following]
--2021-11-08 15:12:26-- https://doc-00-bk-docs.googleusercontent.com/docs/securesc/30ckta7bknkgi0kt3cker95n6i1lskv1/4tr31c1j8ut8mp1ai30mmt57vpk3866p/1636384275000/09711457892284675029/07033108602856203522Z/1O2Iiam5QVaHQFd2t_pWmDpmPUP_0pNyV?e=download&nonce=q2gljhfpp2scm&user=07033108602856203522Z&hash=0jrm8e8ijcu13jvk68hhqsuefecp7rb0
Connecting to doc-00-bk-docs.googleusercontent.com (doc-00-bk-docs.googleusercontent.com)|74.125.141.132|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1364498 (1.3M) [application/zip]
Saving to: ‘MI_EEG_ClassMeth.zip’
MI_EEG_ClassMeth.zi 100%[===================>] 1.30M --.-KB/s in 0.01s
2021-11-08 15:12:26 (117 MB/s) - ‘MI_EEG_ClassMeth.zip’ saved [1364498/1364498]
__MACOSX MI_EEG_ClassMeth MI_EEG_ClassMeth.zip sample_data
###Markdown
Import Modules
###Code
import os
import numpy as np
from gcpds.utils import colab
import gcpds.databases as loaddb
from gcpds.filters import frequency as flt
from mne.channels import make_standard_montage
from mne import create_info
from mne import EpochsArray
from mne.preprocessing import compute_current_source_density
from MI_EEG_ClassMeth.Preprocessing import ICA
from MI_EEG_ClassMeth.FeatExtraction import GaussianKernel, SpectralConnectivity
from MI_EEG_ClassMeth.MIfunctions import Window_band_CSP_eppoch, flatt
from MI_EEG_ClassMeth.utils import grid_search_info
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.metrics import make_scorer, accuracy_score, precision_score, recall_score, f1_score, roc_auc_score
from pickle import load, dump
from tqdm import tqdm
import matplotlib.pyplot as plt
import math
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.utils.testing module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.utils. Anything that cannot be imported from sklearn.utils is now part of the private API.
warnings.warn(message, FutureWarning)
###Markdown
Mount Drive
###Code
colab.mount()
###Output
Mounted at /content/drive
###Markdown
Functions
###Code
def read_results(info, mode='CV/', type_exp='Preprocessing', subjects=np.arange(9)+1):
    # Collect per-subject test metrics (as rounded percentages) from the saved
    # results dictionary, either from cross-validation ('CV/') or the test split.
if mode == 'CV/':
res = np.zeros((len(info['sbj1']['no_ICA-no_surface_laplacian']['test_metrics']), len(info['sbj1']['no_ICA-no_surface_laplacian']['test_metrics'][0]) ,len(info.keys())))
if type_exp == 'No_Preprocessing':
for id_sbj, sbj in enumerate(subjects):
res[:,:,id_sbj] = np.round(np.array(info['sbj'+str(sbj)]['no_ICA-no_surface_laplacian']['test_metrics'])*100, 1)
elif type_exp == 'Preprocessing':
for id_sbj, sbj in enumerate(subjects):
best_steps = info['sbj'+str(sbj)]['best_steps']
res[:,:,id_sbj] = np.round(np.array(info['sbj'+str(sbj)][best_steps[0]+'-'+best_steps[1]]['test_metrics'])*100, 1)
else:
raise ValueError('No valid type_exp')
elif mode == 'test/':
if type_exp == 'No_Preprocessing':
res = np.round(np.array(info['No_Preprocessing'])*100,1).T
elif type_exp == 'Preprocessing':
res = np.round(np.array(info['Preprocessing'])*100,1).T
else:
raise ValueError('No valid type_exp')
else:
raise ValueError('No valid mode')
return res
def rounddown(x):
return int(math.floor(x / 10.0)) * 10
def roundup(x):
return int(math.ceil(x / 10.0)) * 10
###Output
_____no_output_____
###Markdown
Tests
###Code
parent_dir = './drive/Shareddrives/GCPDS/users/Mateo/BCICIV2a/'
###Output
_____no_output_____
###Markdown
Bi-Class
###Code
cross_val_dir = os.path.join(parent_dir,'CV/Bi-Class/')
test_dir = os.path.join(parent_dir,'test/Bi-Class/')
model_dir = os.path.join(parent_dir,'Models/Bi-Class/')
images_dir = os.path.join(parent_dir,'Images/Bi-Class/')
try:
os.makedirs(cross_val_dir)
os.makedirs(test_dir)
os.makedirs(model_dir)
os.makedirs(images_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
GFC
###Code
model_dir_no_prep = os.path.join(model_dir ,'GFC/No_Preprocessing/')
model_dir_prep = os.path.join(model_dir ,'GFC/Preprocessing/')
try:
os.makedirs(model_dir_no_prep)
os.makedirs(model_dir_prep)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
CV
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
CV_results = {}
for sbj in tqdm(np.arange(9)+1):
results_exp = {}
best_acc = -np.inf
best_std = -np.inf
best_preprocessing_steps = []
best_model = None
db.load_subject(sbj) #Load subject
X, y = db.get_data(classes=['left hand', 'right hand']) #Load data of left and right hand movements
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
for ICA_flag in ['ICA', 'no_ICA']:
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
for slap_flag in ['surface_laplacian', 'no_surface_laplacian']:
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
Xgk = GaussianKernel(sfreq=sfreq, f_bank=f_bank).fit_transform(Xslap) #GFC
classifier = LDA()
hyparams = {}
cv = StratifiedKFold(n_splits=10)
scores = {'acc':'accuracy' ,'precision':'precision' ,'recall':'recall', 'f1_score':'f1' ,'auc':'roc_auc'}
grid_search = GridSearchCV(classifier, hyparams, cv=cv, verbose=0, scoring=scores,
refit='acc', error_score='raise', n_jobs=-1, return_train_score=True)
grid_search.fit(Xgk, y)
_, test_metrics, train_metrics, _, _ = grid_search_info(grid_search.cv_results_, ['acc', 'precision', 'recall', 'f1_score', 'auc'])
results_exp[ICA_flag+'-'+slap_flag] = {'test_metrics':test_metrics, 'train_metrics':train_metrics}
if test_metrics[0][0] > best_acc:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
elif test_metrics[0][0] == best_acc:
if test_metrics[1][0] < best_std:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
else:
pass
if (ICA_flag == 'no_ICA') and (slap_flag == 'no_surface_laplacian'):
dump(grid_search.best_estimator_, open(model_dir_no_prep + 'subject_'+str(sbj)+'.p', 'wb'))
results_exp['best_steps'] = best_preprocessing_steps
CV_results['sbj'+str(sbj)] = results_exp
dump(best_model, open(model_dir_prep + 'subject_'+str(sbj)+'.p', 'wb'))
dump(CV_results, open(cross_val_dir + 'GFC.txt', 'wb'))
###Output
100%|██████████| 9/9 [06:23<00:00, 42.58s/it]
###Markdown
Test
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
###Output
_____no_output_____
###Markdown
No_preprocessing
###Code
no_prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_no_prep+'/subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #no-preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data(classes=['left hand', 'right hand']) #Load data of left and right hand movements (test)
X_mi = bandps_filter(X[:,:len(EEG_channels),int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
Xgk = GaussianKernel(sfreq=sfreq, f_bank=f_bank).fit_transform(X_mi) #GFC
y_pred = model.predict(Xgk)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
auc = roc_auc_score(y_test, model.predict_proba(Xgk)[:, 1])
no_prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [07:41<00:00, 51.24s/it]
###Markdown
Preprocessing
###Code
with open(cross_val_dir + 'GFC.txt', 'rb') as fcv:
cv_info = load(fcv) #Load CV info
prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_prep+'subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data(classes=['left hand', 'right hand']) #Load data of left and right hand movements (test)
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
ICA_flag, slap_flag = cv_info['sbj'+str(sbj)]['best_steps']
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
Xgk = GaussianKernel(sfreq=sfreq, f_bank=f_bank).fit_transform(Xslap) #GFC
y_pred = model.predict(Xgk)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
auc = roc_auc_score(y_test, model.predict_proba(Xgk)[:, 1])
prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [02:18<00:00, 15.40s/it]
###Markdown
Save Results
###Code
dump({'No_Preprocessing':no_prep_test_result ,'Preprocessing':prep_test_result}, open(test_dir + 'GFC.txt', 'wb'))
###Output
_____no_output_____
###Markdown
COH
###Code
model_dir_no_prep = os.path.join(model_dir ,'COH/No_Preprocessing/')
model_dir_prep = os.path.join(model_dir ,'COH/Preprocessing/')
try:
os.makedirs(model_dir_no_prep)
os.makedirs(model_dir_prep)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
CV
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
CV_results = {}
for sbj in tqdm(np.arange(9)+1):
results_exp = {}
best_acc = -np.inf
best_std = -np.inf
best_preprocessing_steps = []
best_model = None
db.load_subject(sbj) #Load subject
X, y = db.get_data(classes=['left hand', 'right hand']) #Load data of left and right hand movements
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
for ICA_flag in ['ICA', 'no_ICA']:
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
for slap_flag in ['surface_laplacian', 'no_surface_laplacian']:
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
Xcoh = SpectralConnectivity(sfreq=sfreq, f_bank=f_bank, connectivity='coh', mode='wavelet', modeparams=3).fit_transform(Xslap) #COH
classifier = LDA()
hyparams = {}
cv = StratifiedKFold(n_splits=10)
scores = {'acc':'accuracy' ,'precision':'precision' ,'recall':'recall', 'f1_score':'f1' ,'auc':'roc_auc'}
grid_search = GridSearchCV(classifier, hyparams, cv=cv, verbose=0, scoring=scores,
refit='acc', error_score='raise', n_jobs=-1, return_train_score=True)
grid_search.fit(Xcoh, y)
_, test_metrics, train_metrics, _, _ = grid_search_info(grid_search.cv_results_, ['acc', 'precision', 'recall', 'f1_score', 'auc'])
results_exp[ICA_flag+'-'+slap_flag] = {'test_metrics':test_metrics, 'train_metrics':train_metrics}
if test_metrics[0][0] > best_acc:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
elif test_metrics[0][0] == best_acc:
if test_metrics[1][0] < best_std:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
else:
pass
if (ICA_flag == 'no_ICA') and (slap_flag == 'no_surface_laplacian'):
dump(grid_search.best_estimator_, open(model_dir_no_prep + 'subject_'+str(sbj)+'.p', 'wb'))
results_exp['best_steps'] = best_preprocessing_steps
CV_results['sbj'+str(sbj)] = results_exp
dump(best_model, open(model_dir_prep + 'subject_'+str(sbj)+'.p', 'wb'))
dump(CV_results, open(cross_val_dir + 'COH.txt', 'wb'))
###Output
100%|██████████| 9/9 [14:23<00:00, 95.91s/it]
###Markdown
Test
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
###Output
_____no_output_____
###Markdown
No_preprocessing
###Code
no_prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_no_prep+'/subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #no-preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data(classes=['left hand', 'right hand']) #Load data of left and right hand movements (test)
X_mi = bandps_filter(X[:,:len(EEG_channels),int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
Xcoh = SpectralConnectivity(sfreq=sfreq, f_bank=f_bank, connectivity='coh', mode='wavelet', modeparams=3).fit_transform(X_mi) #COH
y_pred = model.predict(Xcoh)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
auc = roc_auc_score(y_test, model.predict_proba(Xcoh)[:, 1])
no_prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [01:40<00:00, 11.14s/it]
###Markdown
Preprocessing
###Code
with open(cross_val_dir + 'COH.txt', 'rb') as fcv:
cv_info = load(fcv) #Load CV info
prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_prep+'subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data(classes=['left hand', 'right hand']) #Load data of left and right hand movements (test)
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
ICA_flag, slap_flag = cv_info['sbj'+str(sbj)]['best_steps']
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
Xcoh = SpectralConnectivity(sfreq=sfreq, f_bank=f_bank, connectivity='coh', mode='wavelet', modeparams=3).fit_transform(Xslap) #COH
y_pred = model.predict(Xcoh)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
auc = roc_auc_score(y_test, model.predict_proba(Xcoh)[:, 1])
prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [03:17<00:00, 21.98s/it]
###Markdown
Save Results
###Code
dump({'No_Preprocessing':no_prep_test_result ,'Preprocessing':prep_test_result}, open(test_dir + 'COH.txt', 'wb'))
###Output
_____no_output_____
###Markdown
CSP
###Code
model_dir_no_prep = os.path.join(model_dir ,'CSP/No_Preprocessing/')
model_dir_prep = os.path.join(model_dir ,'CSP/Preprocessing/')
try:
os.makedirs(model_dir_no_prep)
os.makedirs(model_dir_prep)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
CV
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
CV_results = {}
for sbj in tqdm(np.arange(9)+1):
results_exp = {}
best_acc = -np.inf
best_std = -np.inf
best_preprocessing_steps = []
best_model = None
db.load_subject(sbj) #Load subject
X, y = db.get_data(classes=['left hand', 'right hand']) #Load data of left and right hand movements
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
for ICA_flag in ['ICA', 'no_ICA']:
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
for slap_flag in ['surface_laplacian', 'no_surface_laplacian']:
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
steps = [('CSP', Window_band_CSP_eppoch(fs=sfreq, f_frec=f_bank, vtw=np.array([[0,3]]), ncomp=6, reg='shrinkage')),
('flat',flatt()),
('cla', LDA())]
classifier = Pipeline(steps)
hyparams = {}
cv = StratifiedKFold(n_splits=10)
scores = {'acc':'accuracy' ,'precision':'precision' ,'recall':'recall', 'f1_score':'f1' ,'auc':'roc_auc'}
grid_search = GridSearchCV(classifier, hyparams, cv=cv, verbose=0, scoring=scores,
refit='acc', error_score='raise', n_jobs=-1, return_train_score=True)
grid_search.fit(Xslap, y)
_, test_metrics, train_metrics, _, _ = grid_search_info(grid_search.cv_results_, ['acc', 'precision', 'recall', 'f1_score', 'auc'])
results_exp[ICA_flag+'-'+slap_flag] = {'test_metrics':test_metrics, 'train_metrics':train_metrics}
if test_metrics[0][0] > best_acc:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
elif test_metrics[0][0] == best_acc:
if test_metrics[1][0] < best_std:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
else:
pass
if (ICA_flag == 'no_ICA') and (slap_flag == 'no_surface_laplacian'):
dump(grid_search.best_estimator_, open(model_dir_no_prep + 'subject_'+str(sbj)+'.p', 'wb'))
results_exp['best_steps'] = best_preprocessing_steps
CV_results['sbj'+str(sbj)] = results_exp
dump(best_model, open(model_dir_prep + 'subject_'+str(sbj)+'.p', 'wb'))
dump(CV_results, open(cross_val_dir + 'CSP.txt', 'wb'))
###Output
100%|██████████| 9/9 [18:17<00:00, 121.97s/it]
###Markdown
Test
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
###Output
_____no_output_____
###Markdown
No_preprocessing
###Code
no_prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_no_prep+'/subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #no-preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data(classes=['left hand', 'right hand']) #Load data of left and right hand movements (test)
X_mi = bandps_filter(X[:,:len(EEG_channels),int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
y_pred = model.predict(X_mi)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
auc = roc_auc_score(y_test, model.predict_proba(X_mi)[:, 1])
no_prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [00:31<00:00, 3.52s/it]
###Markdown
Preprocessing
###Code
with open(cross_val_dir + 'CSP.txt', 'rb') as fcv:
cv_info = load(fcv) #Load CV info
prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_prep+'subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data(classes=['left hand', 'right hand']) #Load data of left and right hand movements (test)
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
ICA_flag, slap_flag = cv_info['sbj'+str(sbj)]['best_steps']
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
y_pred = model.predict(Xslap)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
auc = roc_auc_score(y_test, model.predict_proba(Xslap)[:, 1])
prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [01:33<00:00, 10.37s/it]
###Markdown
Save Results
###Code
dump({'No_Preprocessing':no_prep_test_result ,'Preprocessing':prep_test_result}, open(test_dir + 'CSP.txt', 'wb'))
###Output
_____no_output_____
###Markdown
Read results
Form for latex table
###Code
evaluation_mode = ['CV/', 'test/']
type_preprocessing = ['No_Preprocessing', 'Preprocessing']
metrics = ['acc', 'precision', 'recall', 'f1', 'auc']
type_representation = ['COH', 'GFC', 'CSP']
n_subjects = 9
results_row = []
for id_eval_mode, eval_mode in enumerate(evaluation_mode):
for id_type_prep, type_prep in enumerate(type_preprocessing):
results_col = []
for id_m, m in enumerate(metrics):
for type_rep in type_representation:
with open(parent_dir + eval_mode + 'Bi-Class/' + type_rep + '.txt', 'rb') as feval:
eval_info = load(feval) #Load CV info
metric_sbj = np.zeros(n_subjects)
for sbj in np.arange(n_subjects)+1:
if eval_mode == 'CV/':
if type_prep == 'No_Preprocessing':
metric_sbj[sbj-1] = np.round(eval_info['sbj'+str(sbj)]['no_ICA'+'-'+'no_surface_laplacian']['test_metrics'][0][id_m]*100,1)
else:
best_steps = eval_info['sbj'+str(sbj)]['best_steps']
metric_sbj[sbj-1] = np.round(eval_info['sbj'+str(sbj)][best_steps[0]+'-'+best_steps[1]]['test_metrics'][0][id_m]*100,1)
else:
if type_prep == 'No_Preprocessing':
metric_sbj[sbj-1] = np.round(eval_info['No_Preprocessing'][sbj-1][id_m]*100,1)
else:
metric_sbj[sbj-1] = np.round(eval_info['Preprocessing'][sbj-1][id_m]*100,1)
results_col.append(str(np.round(metric_sbj.mean(),1))+' \pm '+str(np.round(metric_sbj.std(),1)))
results_row.append(results_col)
for j in range(len(results_row[0])):
for i in range(len(results_row)):
print('&$'+results_row[i][j]+'$',end='')
print(r'\\')
###Output
&$64.6 \pm 10.1$&$66.0 \pm 9.4$&$65.0 \pm 10.2$&$63.6 \pm 8.6$\\
&$71.3 \pm 12.6$&$75.9 \pm 12.0$&$72.9 \pm 9.1$&$73.0 \pm 10.3$\\
&$69.7 \pm 11.4$&$75.8 \pm 10.8$&$72.4 \pm 13.0$&$76.0 \pm 11.4$\\
&$66.0 \pm 10.2$&$67.4 \pm 9.8$&$64.6 \pm 12.4$&$62.0 \pm 8.3$\\
&$72.7 \pm 12.3$&$77.8 \pm 12.2$&$70.6 \pm 8.8$&$72.4 \pm 11.5$\\
&$71.6 \pm 11.2$&$77.2 \pm 9.9$&$72.1 \pm 12.8$&$74.5 \pm 12.0$\\
&$64.6 \pm 11.3$&$67.4 \pm 10.1$&$71.9 \pm 18.1$&$76.6 \pm 14.4$\\
&$71.5 \pm 13.4$&$75.8 \pm 12.7$&$78.7 \pm 10.9$&$78.6 \pm 12.7$\\
&$71.8 \pm 11.2$&$77.1 \pm 12.0$&$73.7 \pm 23.7$&$79.6 \pm 17.4$\\
&$64.0 \pm 10.3$&$66.0 \pm 9.6$&$66.6 \pm 11.1$&$67.5 \pm 7.4$\\
&$71.0 \pm 13.2$&$75.4 \pm 12.8$&$74.2 \pm 8.9$&$74.4 \pm 8.7$\\
&$70.3 \pm 11.0$&$75.9 \pm 11.1$&$71.0 \pm 17.1$&$76.1 \pm 13.2$\\
&$68.9 \pm 12.2$&$69.7 \pm 11.6$&$70.5 \pm 12.9$&$69.0 \pm 12.1$\\
&$76.9 \pm 13.3$&$80.1 \pm 14.0$&$79.9 \pm 11.3$&$81.2 \pm 11.8$\\
&$75.3 \pm 13.9$&$81.7 \pm 12.8$&$80.6 \pm 14.3$&$82.3 \pm 13.5$\\
###Markdown
Graphs
Graph 1
###Code
evaluation_mode = ['CV/', 'test/']
type_representation = ['COH', 'GFC', 'CSP']
type_preprocessing = ['No_Preprocessing', 'Preprocessing']
metrics = ['acc', 'precision', 'recall', 'f1', 'auc']
subjects = np.arange(9)+1
xseed = np.array([-0.6, -0.3, 0, 0.3, 0.6])
markers = ['o', 'v', 's', 'D', 'p']
colors = ['b', 'm','r', 'c', 'k']
fig, axs = plt.subplots(len(evaluation_mode)*len(type_preprocessing), len(type_representation), figsize=(21,18), squeeze=False)
for id_eval_mode, eval_mode in zip([0,2], evaluation_mode):
for id_type_rep, type_rep in enumerate(type_representation):
with open(parent_dir + eval_mode + 'Bi-Class/' + type_rep + '.txt', 'rb') as feval:
eval_info = load(feval) #Load CV info
if eval_mode == 'CV/':
sbjs_order = np.argsort(read_results(eval_info, mode=eval_mode, type_exp='Preprocessing', subjects=subjects)[0,0,:])[::-1]
else:
sbjs_order = np.argsort(read_results(eval_info, mode=eval_mode, type_exp='Preprocessing', subjects=subjects)[0,:])[::-1]
for id_type_prep, type_prep in enumerate(type_preprocessing):
sbjs_metrics = read_results(eval_info, mode=eval_mode, type_exp=type_prep, subjects=subjects)
for id_m, m in enumerate(metrics):
if eval_mode == 'CV/':
axs[id_type_prep+id_eval_mode, id_type_rep].errorbar(np.arange(0, subjects.shape[0]*2.5, 2.5)+xseed[id_m], sbjs_metrics[0,id_m,sbjs_order], yerr=sbjs_metrics[1,id_m,sbjs_order], fmt=markers[id_m], color=colors[id_m], label=m)
else:
axs[id_type_prep+id_eval_mode, id_type_rep].stem(np.arange(0, subjects.shape[0]*2.5, 2.5)+xseed[id_m], sbjs_metrics[id_m,sbjs_order], linefmt='--'+colors[id_m], markerfmt=markers[id_m]+colors[id_m], label=m)
if eval_mode == 'CV/':
#axs[id_type_prep+id_eval_mode, id_type_rep].legend(loc='lower left', ncol=1)
axs[id_type_prep+id_eval_mode, id_type_rep].set_ylim([0, 110])
else:
#axs[id_type_prep+id_eval_mode, id_type_rep].legend(loc='upper right', ncol=1)
axs[id_type_prep+id_eval_mode, id_type_rep].set_ylim([0, 110])
axs[id_type_prep+id_eval_mode, id_type_rep].set_yticks(np.arange(0, 110, 10))
axs[id_type_prep+id_eval_mode, id_type_rep].set_xticks(np.arange(0, subjects.shape[0]*2.5, 2.5))
axs[id_type_prep+id_eval_mode, id_type_rep].set_xlim([-1, (subjects.shape[0]-1)*2.5+1])
axs[id_type_prep+id_eval_mode, id_type_rep].set_xticklabels(subjects[sbjs_order])
axs[0, 0].legend(loc='upper center', ncol=5)
axs[2, 0].legend(loc='upper center', ncol=5)
fig.tight_layout()
axs[0, 0].set_title('COH', fontfamily='serif', fontsize=52.5, weight=500)
axs[0, 1].set_title('GFC', fontfamily='serif', fontsize=52.5, weight=500)
axs[0, 2].set_title('CSP', fontfamily='serif', fontsize=52.5, weight=500)
axs[0, 0].set_ylabel('A', rotation=0, fontfamily='serif', fontsize=39.6, weight=1000, ha='right')
axs[1, 0].set_ylabel('B', rotation=0, fontfamily='serif', fontsize=39.6, weight=1000, ha='right')
axs[2, 0].set_ylabel('C', rotation=0, fontfamily='serif', fontsize=39.6, weight=1000, ha='right')
axs[3, 0].set_ylabel('D', rotation=0, fontfamily='serif', fontsize=39.6, weight=1000, ha='right')
fig.text(0.5, axs[-1,-1].get_position().y0 - 0.06, 'Subject', ha='center', fontfamily='serif', fontsize=54.5, weight=500)
plt.savefig(images_dir+'metrics-subjects-bi-class.pdf',format='pdf', bbox_inches='tight')
###Output
WARNING:matplotlib.legend:No handles with labels found to put in legend.
###Markdown
Graph 2
###Code
type_feat_extraction = ['COH', 'GFC', 'CSP']
type_preprocessing = ['No_Preprocessing', 'Preprocessing']
metrics = ['acc', 'precision', 'recall', 'f1', 'auc']
subjects = np.arange(9)+1
min = np.inf
max = -np.inf
xseed = np.array([-0.6, -0.3, 0, 0.3, 0.6])
markers = ['o', 'v', 's', 'D', 'p']
colors = ['b', 'm','r', 'c', 'k']
fig, axs = plt.subplots(len(type_feat_extraction), len(type_preprocessing), figsize=(12,10), squeeze=False, sharex=True, sharey=True)
with open(test_dir + 'GFC.txt', 'rb') as feval:
eval_info = load(feval) #Load test info
sbjs_order = np.argsort(read_results(eval_info, mode='test/', type_exp='Preprocessing', subjects=subjects)[0,:])[::-1]
for id_type_fte, type_fte in enumerate(type_feat_extraction):
with open(test_dir + type_fte + '.txt', 'rb') as feval:
eval_info = load(feval) #Load test info
for id_type_prep, type_prep in enumerate(type_preprocessing):
sbjs_metrics = read_results(eval_info, mode='test/', type_exp=type_prep, subjects=subjects)
if sbjs_metrics.min() < min:
min = sbjs_metrics.min()
if sbjs_metrics.max() > max:
max = sbjs_metrics.max()
for id_m, m in enumerate(metrics):
axs[id_type_fte, id_type_prep].stem(np.arange(0, subjects.shape[0]*2.5, 2.5)+xseed[id_m], sbjs_metrics[id_m,sbjs_order], linefmt='--'+colors[id_m], markerfmt=markers[id_m]+colors[id_m], label=m)
for j in range(axs.shape[1]):
axs[-1,j].set_xticks(np.arange(0, subjects.shape[0]*2.5, 2.5))
axs[-1,j].set_xticklabels(subjects[sbjs_order], fontsize=18)
axs[-1,j].set_xlim([-1, (subjects.shape[0]-1)*2.5+1])
#Be careful with the order
for i in range(axs.shape[0]):
axs[i,0].set_ylim([rounddown(min), roundup(max)])
for i in range(axs.shape[0]):
axs[i,0].set_yticks(np.arange(rounddown(min), roundup(max), 10, dtype=np.int))
axs[i,0].set_yticklabels(np.arange(rounddown(min), roundup(max), 10, dtype=np.int), fontsize=18)
axs[1,1].legend(loc='lower left', fontsize=16, ncol=2)
fig.tight_layout()
axs[0, 0].set_ylabel('COH', fontfamily='serif', fontsize=34.5, weight=500, rotation=0, ha='right')
axs[1, 0].set_ylabel('GFC', fontfamily='serif', fontsize=34.5, weight=500, rotation=0, ha='right')
axs[2, 0].set_ylabel('CSP', fontfamily='serif', fontsize=34.5, weight=500, rotation=0, ha='right')
axs[0, 0].set_title('A', fontfamily='serif', fontsize=28, weight=1000)
axs[0, 1].set_title('B', fontfamily='serif', fontsize=28, weight=1000)
fig.text(0.5, axs[-1,-1].get_position().y0 - 0.1, 'Subject', ha='center', fontfamily='serif', fontsize=34.5, weight=500)
plt.savefig(images_dir+'metrics-subjects-bi-class.pdf',format='pdf', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Multi-Class
###Code
cross_val_dir = os.path.join(parent_dir,'CV/Multi-Class/')
test_dir = os.path.join(parent_dir,'test/Multi-Class/')
model_dir = os.path.join(parent_dir,'Models/Multi-Class/')
images_dir = os.path.join(parent_dir,'Images/Multi-Class/')
try:
os.makedirs(cross_val_dir)
os.makedirs(test_dir)
os.makedirs(model_dir)
os.makedirs(images_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
GFC
###Code
model_dir_no_prep = os.path.join(model_dir ,'GFC/No_Preprocessing/')
model_dir_prep = os.path.join(model_dir ,'GFC/Preprocessing/')
try:
os.makedirs(model_dir_no_prep)
os.makedirs(model_dir_prep)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
CV
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
CV_results = {}
for sbj in tqdm(np.arange(9)+1):
results_exp = {}
best_acc = -np.inf
best_std = -np.inf
best_preprocessing_steps = []
best_model = None
db.load_subject(sbj) #Load subject
X, y = db.get_data() #Load data of all motor imagery tasks
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
for ICA_flag in ['ICA', 'no_ICA']:
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
for slap_flag in ['surface_laplacian', 'no_surface_laplacian']:
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
Xgk = GaussianKernel(sfreq=sfreq, f_bank=f_bank).fit_transform(Xslap) #GFC
classifier = LDA(solver='lsqr', shrinkage='auto')
hyparams = {}
cv = StratifiedKFold(n_splits=10)
scores = {'acc':'accuracy' ,'precision':make_scorer(precision_score, average='macro') ,'recall':make_scorer(recall_score, average='macro'), 'f1_score':make_scorer(f1_score, average='macro') ,'auc':make_scorer(roc_auc_score, needs_proba=True, multi_class='ovr')}
grid_search = GridSearchCV(classifier, hyparams, cv=cv, verbose=0, scoring=scores,
refit='acc', error_score='raise', n_jobs=-1, return_train_score=True)
grid_search.fit(Xgk, y)
_, test_metrics, train_metrics, _, _ = grid_search_info(grid_search.cv_results_, ['acc', 'precision', 'recall', 'f1_score', 'auc'])
results_exp[ICA_flag+'-'+slap_flag] = {'test_metrics':test_metrics, 'train_metrics':train_metrics}
if test_metrics[0][0] > best_acc:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
elif test_metrics[0][0] == best_acc:
if test_metrics[1][0] < best_std:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
else:
pass
if (ICA_flag == 'no_ICA') and (slap_flag == 'no_surface_laplacian'):
dump(grid_search.best_estimator_, open(model_dir_no_prep + 'subject_'+str(sbj)+'.p', 'wb'))
results_exp['best_steps'] = best_preprocessing_steps
CV_results['sbj'+str(sbj)] = results_exp
dump(best_model, open(model_dir_prep + 'subject_'+str(sbj)+'.p', 'wb'))
dump(CV_results, open(cross_val_dir + 'GFC.txt', 'wb'))
###Output
100%|██████████| 9/9 [14:50<00:00, 98.91s/it]
###Markdown
Test
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
###Output
_____no_output_____
###Markdown
No_preprocessing
###Code
no_prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_no_prep+'/subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #no-preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data() #Load data of all motor imagery tasks (test)
X_mi = bandps_filter(X[:,:len(EEG_channels),int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
Xgk = GaussianKernel(sfreq=sfreq, f_bank=f_bank).fit_transform(X_mi) #GFC
y_pred = model.predict(Xgk)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred, average='macro')
recall = recall_score(y_test, y_pred, average='macro')
f1 = f1_score(y_test, y_pred, average='macro')
auc = roc_auc_score(y_test, model.predict_proba(Xgk), multi_class='ovr')
no_prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [05:28<00:00, 36.46s/it]
###Markdown
Preprocessing
###Code
with open(cross_val_dir + 'GFC.txt', 'rb') as fcv:
cv_info = load(fcv) #Load CV info
prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_prep+'subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data() #Load data of all motor imagery tasks (test)
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
ICA_flag, slap_flag = cv_info['sbj'+str(sbj)]['best_steps']
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
Xgk = GaussianKernel(sfreq=sfreq, f_bank=f_bank).fit_transform(Xslap) #GFC
y_pred = model.predict(Xgk)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred, average='macro')
recall = recall_score(y_test, y_pred, average='macro')
f1 = f1_score(y_test, y_pred, average='macro')
auc = roc_auc_score(y_test, model.predict_proba(Xgk), multi_class='ovr')
prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [05:21<00:00, 35.70s/it]
###Markdown
Save Results
###Code
dump({'No_Preprocessing':no_prep_test_result ,'Preprocessing':prep_test_result}, open(test_dir + 'GFC.txt', 'wb'))
###Output
_____no_output_____
###Markdown
COH
###Code
model_dir_no_prep = os.path.join(model_dir ,'COH/No_Preprocessing/')
model_dir_prep = os.path.join(model_dir ,'COH/Preprocessing/')
try:
os.makedirs(model_dir_no_prep)
os.makedirs(model_dir_prep)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
CV
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
CV_results = {}
for sbj in tqdm(np.arange(9)+1):
results_exp = {}
best_acc = -np.inf
best_std = -np.inf
best_preprocessing_steps = []
best_model = None
db.load_subject(sbj) #Load subject
X, y = db.get_data() #Load data of all motor imagery tasks
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
for ICA_flag in ['ICA', 'no_ICA']:
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
for slap_flag in ['surface_laplacian', 'no_surface_laplacian']:
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
Xcoh = SpectralConnectivity(sfreq=sfreq, f_bank=f_bank, connectivity='coh', mode='wavelet', modeparams=3).fit_transform(Xslap) #COH
classifier = LDA(solver='lsqr', shrinkage='auto')
hyparams = {}
cv = StratifiedKFold(n_splits=10)
scores = {'acc':'accuracy' ,'precision':make_scorer(precision_score, average='macro') ,'recall':make_scorer(recall_score, average='macro'), 'f1_score':make_scorer(f1_score, average='macro') ,'auc':make_scorer(roc_auc_score, needs_proba=True, multi_class='ovr')}
grid_search = GridSearchCV(classifier, hyparams, cv=cv, verbose=0, scoring=scores,
refit='acc', error_score='raise', n_jobs=-1, return_train_score=True)
grid_search.fit(Xcoh, y)
_, test_metrics, train_metrics, _, _ = grid_search_info(grid_search.cv_results_, ['acc', 'precision', 'recall', 'f1_score', 'auc'])
results_exp[ICA_flag+'-'+slap_flag] = {'test_metrics':test_metrics, 'train_metrics':train_metrics}
if test_metrics[0][0] > best_acc:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
elif test_metrics[0][0] == best_acc:
if test_metrics[1][0] < best_std:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
else:
pass
if (ICA_flag == 'no_ICA') and (slap_flag == 'no_surface_laplacian'):
dump(grid_search.best_estimator_, open(model_dir_no_prep + 'subject_'+str(sbj)+'.p', 'wb'))
results_exp['best_steps'] = best_preprocessing_steps
CV_results['sbj'+str(sbj)] = results_exp
dump(best_model, open(model_dir_prep + 'subject_'+str(sbj)+'.p', 'wb'))
dump(CV_results, open(cross_val_dir + 'COH.txt', 'wb'))
###Output
100%|██████████| 9/9 [26:17<00:00, 175.25s/it]
###Markdown
Test
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
###Output
_____no_output_____
###Markdown
No_preprocessing
###Code
no_prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_no_prep+'/subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #no-preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data() #Load data of all motor imagery tasks (test)
X_mi = bandps_filter(X[:,:len(EEG_channels),int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
Xcoh = SpectralConnectivity(sfreq=sfreq, f_bank=f_bank, connectivity='coh', mode='wavelet', modeparams=3).fit_transform(X_mi) #COH
y_pred = model.predict(Xcoh)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred, average='macro')
recall = recall_score(y_test, y_pred, average='macro')
f1 = f1_score(y_test, y_pred, average='macro')
auc = roc_auc_score(y_test, model.predict_proba(Xcoh), multi_class='ovr')
no_prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [04:27<00:00, 29.70s/it]
###Markdown
Preprocessing
###Code
with open(cross_val_dir + 'COH.txt', 'rb') as fcv:
cv_info = load(fcv) #Load CV info
prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_prep+'subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data() #Load data of all motor imagery tasks (test)
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
ICA_flag, slap_flag = cv_info['sbj'+str(sbj)]['best_steps']
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
Xcoh = SpectralConnectivity(sfreq=sfreq, f_bank=f_bank, connectivity='coh', mode='wavelet', modeparams=3).fit_transform(Xslap) #COH
y_pred = model.predict(Xcoh)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred, average='macro')
recall = recall_score(y_test, y_pred, average='macro')
f1 = f1_score(y_test, y_pred, average='macro')
auc = roc_auc_score(y_test, model.predict_proba(Xcoh), multi_class='ovr')
prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [07:57<00:00, 53.08s/it]
###Markdown
Save Results
###Code
dump({'No_Preprocessing':no_prep_test_result ,'Preprocessing':prep_test_result}, open(test_dir + 'COH.txt', 'wb'))
###Output
_____no_output_____
###Markdown
CSP
###Code
model_dir_no_prep = os.path.join(model_dir ,'CSP/No_Preprocessing/')
model_dir_prep = os.path.join(model_dir ,'CSP/Preprocessing/')
try:
os.makedirs(model_dir_no_prep)
os.makedirs(model_dir_prep)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
CV
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
CV_results = {}
for sbj in tqdm(np.arange(9)+1):
results_exp = {}
best_acc = -np.inf
best_std = -np.inf
best_preprocessing_steps = []
best_model = None
db.load_subject(sbj) #Load subject
X, y = db.get_data() #Load data of all motor imagery tasks
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
for ICA_flag in ['ICA', 'no_ICA']:
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
for slap_flag in ['surface_laplacian', 'no_surface_laplacian']:
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
steps = [('CSP', Window_band_CSP_eppoch(fs=sfreq, f_frec=f_bank, vtw=np.array([[0,3]]), ncomp=12, reg='shrinkage')),
('flat',flatt()),
('cla', LDA(solver='lsqr', shrinkage='auto'))]
classifier = Pipeline(steps)
hyparams = {}
cv = StratifiedKFold(n_splits=10)
scores = {'acc':'accuracy' ,'precision':make_scorer(precision_score, average='macro') ,'recall':make_scorer(recall_score, average='macro'), 'f1_score':make_scorer(f1_score, average='macro') ,'auc':make_scorer(roc_auc_score, needs_proba=True, multi_class='ovr')}
grid_search = GridSearchCV(classifier, hyparams, cv=cv, verbose=0, scoring=scores,
refit='acc', error_score='raise', n_jobs=-1, return_train_score=True)
grid_search.fit(Xslap, y)
_, test_metrics, train_metrics, _, _ = grid_search_info(grid_search.cv_results_, ['acc', 'precision', 'recall', 'f1_score', 'auc'])
results_exp[ICA_flag+'-'+slap_flag] = {'test_metrics':test_metrics, 'train_metrics':train_metrics}
if test_metrics[0][0] > best_acc:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
elif test_metrics[0][0] == best_acc:
if test_metrics[1][0] < best_std:
best_acc = test_metrics[0][0]
best_std = test_metrics[1][0]
best_preprocessing_steps = [ICA_flag, slap_flag]
best_model = grid_search.best_estimator_
else:
pass
if (ICA_flag == 'no_ICA') and (slap_flag == 'no_surface_laplacian'):
dump(grid_search.best_estimator_, open(model_dir_no_prep + 'subject_'+str(sbj)+'.p', 'wb'))
results_exp['best_steps'] = best_preprocessing_steps
CV_results['sbj'+str(sbj)] = results_exp
dump(best_model, open(model_dir_prep + 'subject_'+str(sbj)+'.p', 'wb'))
dump(CV_results, open(cross_val_dir + 'CSP.txt', 'wb'))
###Output
100%|██████████| 9/9 [39:38<00:00, 264.28s/it]
###Markdown
Test
###Code
db = loaddb.BCI_Competition_IV.Dataset_2a() #Database Initializer
EEG_channels = db.metadata['channels'][:-3] #EEG channels
sfreq = db.metadata['sampling_rate'] #sample frequency
ch_types = ['eeg']*len(EEG_channels) #type of each channel
montage = make_standard_montage(db.metadata['montage']) #Montage object
info = create_info(EEG_channels, sfreq, ch_types).set_montage(montage)
bandps_filter = flt.GenericButterBand(f0=1, f1=45, N=5) #Butterworth bandpass filter
f_bank = np.array([[8,12],[12,15],[15,20],[18,40]]) #mu, beta low, beta medium, beta high
###Output
_____no_output_____
###Markdown
No_preprocessing
###Code
no_prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_no_prep+'/subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #no-preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data() #Load data of all motor imagery tasks (test)
X_mi = bandps_filter(X[:,:len(EEG_channels),int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
y_pred = model.predict(X_mi)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred, average='macro')
recall = recall_score(y_test, y_pred, average='macro')
f1 = f1_score(y_test, y_pred, average='macro')
auc = roc_auc_score(y_test, model.predict_proba(X_mi), multi_class='ovr')
no_prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [00:44<00:00, 4.90s/it]
###Markdown
Preprocessing
###Code
with open(cross_val_dir + 'CSP.txt', 'rb') as fcv:
cv_info = load(fcv) #Load CV info
prep_test_result = []
for sbj in tqdm(np.arange(9)+1):
with open(model_dir_prep+'subject_'+str(sbj)+'.p', 'rb') as fmodel:
model = load(fmodel) #preprocessing model of subject
db.load_subject(sbj, mode='evaluation') #Load subject in evaluation mode
X, y_test = db.get_data() #Load data of all motor imagery tasks (test)
X_mi = bandps_filter(X[:,:,int(3*sfreq):int(6*sfreq)], fs=sfreq) #filter 1-45 Hz -> motor Imagery Interval
ICA_flag, slap_flag = cv_info['sbj'+str(sbj)]['best_steps']
if ICA_flag == 'ICA':
Xica = ICA(X_mi, list(np.arange(len(EEG_channels))), [-3,-2,-1]) #Remove ocular artifacts ICA
else:
Xica = X_mi[:,:len(EEG_channels),:]
if slap_flag == 'surface_laplacian':
EpochsXica = EpochsArray(Xica, info)
EpochsXsl = compute_current_source_density(EpochsXica, stiffness=3)
Xslap = EpochsXsl.get_data()
else:
Xslap = Xica
y_pred = model.predict(Xslap)
acc = accuracy_score(y_test ,y_pred)
precision = precision_score(y_test, y_pred, average='macro')
recall = recall_score(y_test, y_pred, average='macro')
f1 = f1_score(y_test, y_pred, average='macro')
auc = roc_auc_score(y_test, model.predict_proba(Xslap), multi_class='ovr')
prep_test_result.append([acc , precision, recall, f1, auc])
###Output
100%|██████████| 9/9 [06:39<00:00, 44.38s/it]
###Markdown
Save Results
###Code
dump({'No_Preprocessing':no_prep_test_result ,'Preprocessing':prep_test_result}, open(test_dir + 'CSP.txt', 'wb'))
###Output
_____no_output_____
###Markdown
Read results
Form for latex table
###Code
evaluation_mode = ['CV/', 'test/']
type_preprocessing = ['No_Preprocessing', 'Preprocessing']
metrics = ['acc', 'precision', 'recall', 'f1', 'auc']
type_representation = ['COH', 'GFC', 'CSP']
n_subjects = 9
results_row = []
for id_eval_mode, eval_mode in enumerate(evaluation_mode):
for id_type_prep, type_prep in enumerate(type_preprocessing):
results_col = []
for id_m, m in enumerate(metrics):
for type_rep in type_representation:
with open(parent_dir + eval_mode + 'Multi-Class/' + type_rep + '.txt', 'rb') as feval:
eval_info = load(feval) #Load CV info
metric_sbj = np.zeros(n_subjects)
for sbj in np.arange(n_subjects)+1:
if eval_mode == 'CV/':
if type_prep == 'No_Preprocessing':
metric_sbj[sbj-1] = np.round(eval_info['sbj'+str(sbj)]['no_ICA'+'-'+'no_surface_laplacian']['test_metrics'][0][id_m]*100,1)
else:
best_steps = eval_info['sbj'+str(sbj)]['best_steps']
metric_sbj[sbj-1] = np.round(eval_info['sbj'+str(sbj)][best_steps[0]+'-'+best_steps[1]]['test_metrics'][0][id_m]*100,1)
else:
if type_prep == 'No_Preprocessing':
metric_sbj[sbj-1] = np.round(eval_info['No_Preprocessing'][sbj-1][id_m]*100,1)
else:
metric_sbj[sbj-1] = np.round(eval_info['Preprocessing'][sbj-1][id_m]*100,1)
results_col.append(str(np.round(metric_sbj.mean(),1))+' \pm '+str(np.round(metric_sbj.std(),1)))
results_row.append(results_col)
for j in range(len(results_row[0])):
for i in range(len(results_row)):
print('&$'+results_row[i][j]+'$',end='')
print(r'\\')
###Output
&$51.7 \pm 10.1$&$59.4 \pm 14.0$&$52.2 \pm 9.6$&$53.1 \pm 11.6$\\
&$63.1 \pm 12.7$&$66.3 \pm 14.3$&$61.9 \pm 10.7$&$61.7 \pm 11.2$\\
&$60.8 \pm 15.2$&$67.8 \pm 15.1$&$59.2 \pm 13.5$&$62.2 \pm 17.0$\\
&$53.1 \pm 11.0$&$61.0 \pm 14.5$&$54.0 \pm 8.3$&$54.4 \pm 11.3$\\
&$65.2 \pm 13.1$&$67.6 \pm 14.9$&$63.3 \pm 10.8$&$64.5 \pm 10.8$\\
&$62.6 \pm 15.8$&$69.8 \pm 14.9$&$61.7 \pm 14.3$&$64.7 \pm 16.7$\\
&$51.6 \pm 10.1$&$59.5 \pm 14.1$&$52.3 \pm 9.5$&$53.2 \pm 11.5$\\
&$63.1 \pm 12.7$&$66.3 \pm 14.3$&$62.1 \pm 10.6$&$61.7 \pm 11.1$\\
&$60.9 \pm 15.2$&$67.8 \pm 15.0$&$59.4 \pm 13.4$&$62.4 \pm 17.0$\\
&$50.4 \pm 10.3$&$58.6 \pm 14.3$&$51.4 \pm 10.1$&$52.3 \pm 12.2$\\
&$62.5 \pm 13.0$&$65.6 \pm 14.8$&$61.3 \pm 11.0$&$60.5 \pm 11.4$\\
&$59.8 \pm 15.7$&$67.2 \pm 15.2$&$58.2 \pm 14.9$&$60.3 \pm 19.2$\\
&$76.0 \pm 9.6$&$81.0 \pm 10.9$&$76.6 \pm 8.5$&$79.0 \pm 9.3$\\
&$84.6 \pm 8.9$&$85.8 \pm 9.7$&$84.4 \pm 8.2$&$85.6 \pm 8.4$\\
&$82.9 \pm 11.1$&$86.4 \pm 9.7$&$83.4 \pm 10.8$&$86.1 \pm 9.9$\\
###Markdown
Graphs
Graph 1
###Code
evaluation_mode = ['CV/', 'test/']
type_representation = ['COH', 'GFC', 'CSP']
type_preprocessing = ['No_Preprocessing', 'Preprocessing']
metrics = ['acc', 'precision', 'recall', 'f1', 'auc']
subjects = np.arange(9)+1
xseed = np.array([-0.6, -0.3, 0, 0.3, 0.6])
markers = ['o', 'v', 's', 'D', 'p']
colors = ['b', 'm','r', 'c', 'k']
fig, axs = plt.subplots(len(evaluation_mode)*len(type_preprocessing), len(type_representation), figsize=(21,18), squeeze=False)
for id_eval_mode, eval_mode in zip([0,2], evaluation_mode):
for id_type_rep, type_rep in enumerate(type_representation):
with open(parent_dir + eval_mode + 'Multi-Class/' + type_rep + '.txt', 'rb') as feval:
eval_info = load(feval) #Load CV info
if eval_mode == 'CV/':
sbjs_order = np.argsort(read_results(eval_info, mode=eval_mode, type_exp='Preprocessing', subjects=subjects)[0,0,:])[::-1]
else:
sbjs_order = np.argsort(read_results(eval_info, mode=eval_mode, type_exp='Preprocessing', subjects=subjects)[0,:])[::-1]
for id_type_prep, type_prep in enumerate(type_preprocessing):
sbjs_metrics = read_results(eval_info, mode=eval_mode, type_exp=type_prep, subjects=subjects)
for id_m, m in enumerate(metrics):
if eval_mode == 'CV/':
axs[id_type_prep+id_eval_mode, id_type_rep].errorbar(np.arange(0, subjects.shape[0]*2.5, 2.5)+xseed[id_m], sbjs_metrics[0,id_m,sbjs_order], yerr=sbjs_metrics[1,id_m,sbjs_order], fmt=markers[id_m], color=colors[id_m], label=m)
else:
axs[id_type_prep+id_eval_mode, id_type_rep].stem(np.arange(0, subjects.shape[0]*2.5, 2.5)+xseed[id_m], sbjs_metrics[id_m,sbjs_order], linefmt='--'+colors[id_m], markerfmt=markers[id_m]+colors[id_m], label=m)
if eval_mode == 'CV/':
#axs[id_type_prep+id_eval_mode, id_type_rep].legend(loc='lower left', ncol=1)
axs[id_type_prep+id_eval_mode, id_type_rep].set_ylim([0, 110])
else:
#axs[id_type_prep+id_eval_mode, id_type_rep].legend(loc='upper right', ncol=1)
axs[id_type_prep+id_eval_mode, id_type_rep].set_ylim([0, 110])
axs[id_type_prep+id_eval_mode, id_type_rep].set_yticks(np.arange(0, 110, 10))
axs[id_type_prep+id_eval_mode, id_type_rep].set_xticks(np.arange(0, subjects.shape[0]*2.5, 2.5))
axs[id_type_prep+id_eval_mode, id_type_rep].set_xlim([-1, (subjects.shape[0]-1)*2.5+1])
axs[id_type_prep+id_eval_mode, id_type_rep].set_xticklabels(subjects[sbjs_order])
axs[0, 0].legend(loc='upper center', ncol=5)
axs[2, 0].legend(loc='upper center', ncol=5)
fig.tight_layout()
axs[0, 0].set_title('COH', fontfamily='serif', fontsize=52.5, weight=500)
axs[0, 1].set_title('GFC', fontfamily='serif', fontsize=52.5, weight=500)
axs[0, 2].set_title('CSP', fontfamily='serif', fontsize=52.5, weight=500)
axs[0, 0].set_ylabel('A', rotation=0, fontfamily='serif', fontsize=39.6, weight=1000, ha='right')
axs[1, 0].set_ylabel('B', rotation=0, fontfamily='serif', fontsize=39.6, weight=1000, ha='right')
axs[2, 0].set_ylabel('C', rotation=0, fontfamily='serif', fontsize=39.6, weight=1000, ha='right')
axs[3, 0].set_ylabel('D', rotation=0, fontfamily='serif', fontsize=39.6, weight=1000, ha='right')
fig.text(0.5, axs[-1,-1].get_position().y0 - 0.06, 'Subject', ha='center', fontfamily='serif', fontsize=54.5, weight=500)
plt.savefig(images_dir+'metrics-subjects-multi-class.pdf',format='pdf', bbox_inches='tight')
###Output
WARNING:matplotlib.legend:No handles with labels found to put in legend.
###Markdown
Graph 2
###Code
type_feat_extraction = ['COH', 'GFC', 'CSP']
type_preprocessing = ['No_Preprocessing', 'Preprocessing']
metrics = ['acc', 'precision', 'recall', 'f1', 'auc']
subjects = np.arange(9)+1
min = np.inf
max = -np.inf
xseed = np.array([-0.6, -0.3, 0, 0.3, 0.6])
markers = ['o', 'v', 's', 'D', 'p']
colors = ['b', 'm','r', 'c', 'k']
fig, axs = plt.subplots(len(type_feat_extraction), len(type_preprocessing), figsize=(12,10), squeeze=False, sharex=True, sharey=True)
with open(test_dir + 'GFC.txt', 'rb') as feval:
eval_info = load(feval) #Load test info
sbjs_order = np.argsort(read_results(eval_info, mode='test/', type_exp='Preprocessing', subjects=subjects)[0,:])[::-1]
for id_type_fte, type_fte in enumerate(type_feat_extraction):
with open(test_dir + type_fte + '.txt', 'rb') as feval:
eval_info = load(feval) #Load test info
for id_type_prep, type_prep in enumerate(type_preprocessing):
sbjs_metrics = read_results(eval_info, mode='test/', type_exp=type_prep, subjects=subjects)
if sbjs_metrics.min() < min:
min = sbjs_metrics.min()
if sbjs_metrics.max() > max:
max = sbjs_metrics.max()
for id_m, m in enumerate(metrics):
axs[id_type_fte, id_type_prep].stem(np.arange(0, subjects.shape[0]*2.5, 2.5)+xseed[id_m], sbjs_metrics[id_m,sbjs_order], linefmt='--'+colors[id_m], markerfmt=markers[id_m]+colors[id_m], label=m)
for j in range(axs.shape[1]):
axs[-1,j].set_xticks(np.arange(0, subjects.shape[0]*2.5, 2.5))
axs[-1,j].set_xticklabels(subjects[sbjs_order], fontsize=18)
axs[-1,j].set_xlim([-1, (subjects.shape[0]-1)*2.5+1])
#Be careful with the order
for i in range(axs.shape[0]):
axs[i,0].set_ylim([rounddown(min), roundup(max)])
for i in range(axs.shape[0]):
axs[i,0].set_yticks(np.arange(rounddown(min), roundup(max), 10, dtype=np.int))
axs[i,0].set_yticklabels(np.arange(rounddown(min), roundup(max), 10, dtype=np.int), fontsize=18)
axs[1,1].legend(loc='lower left', fontsize=16, ncol=2)
fig.tight_layout()
axs[0, 0].set_ylabel('COH', fontfamily='serif', fontsize=34.5, weight=500, rotation=0, ha='right')
axs[1, 0].set_ylabel('GFC', fontfamily='serif', fontsize=34.5, weight=500, rotation=0, ha='right')
axs[2, 0].set_ylabel('CSP', fontfamily='serif', fontsize=34.5, weight=500, rotation=0, ha='right')
axs[0, 0].set_title('A', fontfamily='serif', fontsize=28, weight=1000)
axs[0, 1].set_title('B', fontfamily='serif', fontsize=28, weight=1000)
fig.text(0.5, axs[-1,-1].get_position().y0 - 0.1, 'Subject', ha='center', fontfamily='serif', fontsize=34.5, weight=500)
plt.savefig(images_dir+'metrics-subjects-multi-class.pdf',format='pdf', bbox_inches='tight')
###Output
_____no_output_____ |
Edges_between_Countries.ipynb | ###Markdown
This notebook works on the Entities table from the Panama Papers.
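The dataframe df used below is assumed to hold that Entities table. A minimal loading sketch (the file name here is an assumption; point it at wherever the ICIJ Entities CSV lives locally):
###Code
import pandas as pd

# Hypothetical file name: replace with the actual path to the Panama Papers Entities CSV.
df = pd.read_csv('panama_papers_entities.csv', low_memory=False)
df.shape
###Output
_____no_output_____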
###Code
df.head()
###Output
_____no_output_____
###Markdown
Deciding which columns to choose
To do an initial exploration, I have only chosen jurisdiction_description as well as countries.
jurisdiction_description means 'the official power to make legal decisions and judgements.' With this, I assume that the company is registered in that particular country.
"countries" means where the company operates.
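As a quick first look at these two columns (a sketch, assuming df is the Entities table loaded above):
###Code
# Sketch: frequency of the two chosen columns, and how often they coincide.
print(df['jurisdiction_description'].value_counts().head(10))
print(df['countries'].value_counts().head(10))
# Share of entities whose jurisdiction matches the country they operate from
# (combined values such as 'Country A;Country B' are still present at this stage).
print((df['jurisdiction_description'] == df['countries']).mean())
###Output
_____no_output_____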
###Code
df_edges = df[['name','jurisdiction_description', 'countries', 'incorporation_date','inactivation_date', 'address']]
###Output
_____no_output_____
###Markdown
Check for Duplicate Records
###Code
# Rows that appear more than once (kept aside for inspection)
df_2 = df_edges[df_edges.duplicated(subset = ['name','jurisdiction_description', 'countries', 'incorporation_date','inactivation_date', 'address'],
                                    keep = False)]
# Keep only the first occurrence of each duplicated row
df_edges = df_edges.drop_duplicates(subset = ['name','jurisdiction_description', 'countries', 'incorporation_date','inactivation_date', 'address'],
                                    keep = 'first')
df_edges
df_edges = df_edges[['jurisdiction_description', 'countries']]
df_edges.jurisdiction_description.unique()
df_edges.countries.unique()
###Output
_____no_output_____
###Markdown
Identifying the weird values in the columns
From this, we identified that under jurisdiction_description we have weird values such as:
- 'Recorded in leaked files as "fund"'
- 'Undetermined'
Under countries, we have:
- A lot of values with the ';', which is a combination of countries
- 'Not identified'
- NaN
We will remove all these.
###Code
df_edges = df_edges.dropna()
df_edges = df_edges[ df_edges['jurisdiction_description'] != 'Undetermined']
df_edges = df_edges[ df_edges['jurisdiction_description'] != 'Recorded in leaked files as "fund"']
df_edges = df_edges[~df_edges['countries'].str.contains(';')]
df_edges = df_edges[ df_edges['countries'] != 'Not identified']
###Output
_____no_output_____
###Markdown
Grouping them into countries together
###Code
df_edges
df_edges_overall = df_edges.groupby(['jurisdiction_description', 'countries']).size().reset_index(name = 'Freq')
df_edges_overall.columns = ['target', 'source', 'weight']
#jurisdiction_description = target
#countries = source
df_edges_overall
###Output
_____no_output_____
###Markdown
It is interesting to note that there is a sizeable number of self-loops, e.g. Bahamas to Bahamas with 400 records.
###Code
df_edges_overall.to_csv('edgesbetweencountries.csv', index = False)
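# Illustrative check (an addition, not part of the original notebook): quantify the
# self-loops mentioned above, assuming 'source' and 'target' hold the country names.
self_loops = df_edges_overall[df_edges_overall['source'] == df_edges_overall['target']]
print(len(self_loops), 'self-loop edges covering', self_loops['weight'].sum(), 'records')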
###Output
_____no_output_____ |
Webscraping/Wikipedia/Wikipedia_YoutubeData_Webscraping.ipynb | ###Markdown
1. Scrape the details of most viewed videos on YouTube from Wikipedia: Url = https://en.wikipedia.org/wiki/List_of_most-viewed_YouTube_videos
You need to find following details:
A) Rank
B) Name
C) Artist
D) Upload date
E) Views
###Code
#Import the libraries needed for scraping and data handling
import pandas as pd
from selenium import webdriver

#Connect to web driver
driver=webdriver.Chrome(r"D://chromedriver.exe") #r converts string to raw string
#If not r, we can use executable_path = "C:/path name"
#Getting the website to driver
driver.get('https://en.wikipedia.org/wiki/List_of_most-viewed_YouTube_videos')
#When we run this line, automatically the webpage will be opened
#Creating the empty lists to store the scraped data
Rank=[]
Name=[]
Artist=[]
Views=[]
Upload_Date=[]
#As we need only the first 30 details, we will iterate over the first 30 rows only
#Scraping the details of the Rank of the video
rank=driver.find_elements_by_xpath("//table[@class='wikitable sortable jquery-tablesorter']/tbody/tr/td[1]")
for i in rank[:30]:
Rank.append(i.text)
#Scraping the details of the video name
video=driver.find_elements_by_xpath("//table[@class='wikitable sortable jquery-tablesorter']/tbody/tr/td[2]")
for i in video[:30]:
Name.append(i.text)
#Scraping the details of the Artist name
artist=driver.find_elements_by_xpath("//table[@class='wikitable sortable jquery-tablesorter']/tbody/tr/td[3]")
for i in artist[:30]:
Artist.append(i.text)
#Scraping the details of the views information
views=driver.find_elements_by_xpath("//table[@class='wikitable sortable jquery-tablesorter']/tbody/tr/td[4]")
for i in views[:30]:
Views.append(i.text)
#Scraping the details of the upload date
date=driver.find_elements_by_xpath("//table[@class='wikitable sortable jquery-tablesorter']/tbody/tr/td[5]")
for i in date[:30]:
Upload_Date.append(i.text)
#Checking the length of the data scraped
print(len(Rank),len(Name),len(Artist),len(Views),len(Upload_Date))
#Creating a dataframe for storing the scraped data
Yt_data=pd.DataFrame({})
Yt_data['Rank']=Rank
Yt_data['Video Name']=Name
Yt_data['Artist']=Artist
Yt_data['Views(Billions)']=Views
Yt_data['Upload Date']=Upload_Date
Yt_data
#Removing the stray numbers from videoname
new=Yt_data["Video Name"].str.split("[", n = 1, expand = True)
new
#Dropping the column with stray numbers
Yt_data.drop(columns=['Video Name'],axis=1,inplace=True)
#Inserting the name column
Yt_data.insert(1,"Video Name",new[0])
#Checking the data after removing the stray numbers
Yt_data
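#Optional extra step (an assumption, not part of the original task): convert the scraped
#view counts to numbers so the table can be sorted/analysed, and persist it to a CSV file.
Yt_data['Views(Billions)'] = pd.to_numeric(Yt_data['Views(Billions)'], errors='coerce')
Yt_data.to_csv('most_viewed_youtube_videos.csv', index=False)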
#Closing the driver
driver.close()
###Output
_____no_output_____ |
finetune/PyTorch/notebooks/BERT_Eval_GLUE.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. PyTorch Pretrained BERT on AzureML with GLUE DatasetIn this notebook, you will find the following contents:- Download GLUE dataset on the remote compute and store them in Azure storage- Speed-up fine-tuning BERT for GLUE dataset on AzureML GPU clusters PrerequisitesFollow instructions in BERT_pretraining.ipynb notebook for setting up AzureML
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize workspaceTo create or access an Azure ML Workspace, you will need to import the AML library and the following information:* A name for your workspace* Your subscription id* The resource group nameInitialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace) object from the existing workspace you created in the Prerequisites step or create a new one.
###Code
from azureml.core.workspace import Workspace
ws = Workspace.setup()
ws_details = ws.get_details()
print('Name:\t\t{}\nLocation:\t{}'
.format(ws_details['name'],
ws_details['location']))
###Output
_____no_output_____
###Markdown
Create an experimentCreate an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureexperiment) to track all the runs in your workspace for this distributed PyTorch tutorial. Download GLUE dataset on the remote computeBefore we start to fine-tune the pretained BERT model, we need to download the [GLUE data](https://gluebenchmark.com/tasks) by running the [script](https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e) and unpack it to an Azure Blob container. Define AzureML datastore to collect training datasetTo make data accessible for remote training, AML provides a convenient way to do so via a [Datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data). The datastore provides a mechanism for you to upload/download data to Azure Storage, and interact with it from your remote compute targets.Each workspace is associated with a default Azure Blob datastore named `'workspaceblobstore'`. In this work, we use this default datastore to collect the GLUE training dataset .
###Code
from azureml.core import Datastore
ds = ws.get_default_datastore()
###Output
_____no_output_____
###Markdown
Create a project directoryCreate a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on.
###Code
import os
import os.path as path
project_root = path.abspath(path.join(os.getcwd(),"../../../"))
###Output
_____no_output_____
###Markdown
Download GLUE dataset in BingBert/ directory
###Code
ds.upload(src_dir=os.path.join(project_root,'data','glue_data'), target_path='glue_data')
###Output
_____no_output_____
###Markdown
Create a folder named "bert-large-checkpoints" which contains the .pt bert checkpoint file against which you want to run your eval tasks. The following code will upload the folder to the datastore. The URL for the checkpoint is: https://bertonazuremlwestus2.blob.core.windows.net/public/models/bert_large_uncased_original/bert_encoder_epoch_200.pt
###Code
ds.upload(src_dir=os.path.join(project_root,'data','bert-large-checkpoints') , target_path='bert-large-checkpoints')
###Output
_____no_output_____
###Markdown
Uploading bert-large config file to datastore
###Code
ds.upload(src_dir=os.path.join(project_root,'pretrain','configs'), target_path='config')
###Output
_____no_output_____
###Markdown
**Remove the /data folder to avoid uploading a folder greater than 300MB.**
Fine-tuning BERT with Distributed Training
As our `GLUE` dataset is ready in Azure storage, we can start fine-tuning the model by exploiting the power of distributed training.
Create a GPU remote compute target
We need to create a GPU [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecturecompute-target) to perform the fine-tuning. In this example, we create an AmlCompute cluster as our training compute resource. This code creates a cluster for you if it does not already exist in your workspace.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
gpu_cluster_name = "bertcodetesting"
try:
gpu_compute_target = ComputeTarget(workspace=ws, name=gpu_cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC24', max_nodes=4)
# create the cluster
gpu_compute_target = ComputeTarget.create(ws, gpu_cluster_name, compute_config)
gpu_compute_target.wait_for_completion(show_output=True)
# Use the 'status' property to get a detailed status for the current cluster.
print(gpu_compute_target.status.serialize())
###Output
_____no_output_____
###Markdown
Create a PyTorch estimator for fine-tuning
Let us create a new PyTorch estimator to run the fine-tuning script `run_classifier.py`, which is already provided at [the original repository](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py). Please refer [here](https://github.com/huggingface/pytorch-pretrained-BERTfine-tuning-with-bert-running-the-examples) for more detail about the script. The original `run_classifier.py` script uses the PyTorch distributed launch utility to launch multiple processes across nodes and GPUs. We prepared a modified version [run_classifier_azureml.py](./run_classifier_azureml.py) so that we can launch it based on AzureML's built-in MPI backend.
To use AML's tracking and metrics capabilities, we need to add a small amount of AzureML code inside the training script. In `run_classifier_azureml.py`, we will log some metrics to our AML run. To do so, we will access the AML run object within the script:
```Python
from azureml.core.run import Run
run = Run.get_context()
```
Further within `run_classifier_azureml.py`, we log the learning rate, training loss and evaluation accuracy the model achieves as:
```Python
run.log('lr', np.float(args.learning_rate))
...
for step, batch in enumerate(tqdm(train_dataloader, desc="Iteration")):
    ...
    run.log('train_loss', np.float(loss))
...
result = {'eval_loss': eval_loss, 'eval_accuracy': eval_accuracy}
for key in sorted(result.keys()):
    run.log(key, str(result[key]))
```
The following code runs the GLUE RTE task against a bert-large checkpoint with the parameters used by Huggingface for fine-tuning:
- num_train_epochs = 3
- max_seq_length = 128
- train_batch_size = 8
- learning_rate = 2e-5
- grad_accumulation_step = 2
###Code
from azureml.train.dnn import PyTorch
from azureml.core.runconfig import RunConfiguration
from azureml.core.container_registry import ContainerRegistry
run_user_managed = RunConfiguration()
run_user_managed.environment.python.user_managed_dependencies = True
# Using a pre-defined public docker image published on AzureML
image_name = 'mcr.microsoft.com/azureml/bert:pretrain-openmpi3.1.2-cuda10.0-cudnn7-ubuntu16.04'
estimator = PyTorch(source_directory='../../../',
compute_target=gpu_compute_target,
#Docker image
use_docker=True,
custom_docker_image=image_name,
user_managed=True,
script_params = {
'--bert_model':'bert-large-uncased',
"--model_file_location": ds.path('bert-large-checkpoints/').as_mount(),
'--task_name': 'RTE',
'--data_dir': ds.path('glue_data/RTE/').as_mount(),
'--do_train' : '',
'--do_eval': '',
'--do_lower_case': '',
'--max_seq_length': 128,
'--train_batch_size': 8,
'--gradient_accumulation_steps': 2,
'--learning_rate': 2e-5,
'--num_train_epochs': 3.0,
'--output_dir': ds.path('output/').as_mount(),
'--model_file': 'bert_encoder_epoch_245.pt',
'--fp16': ""
},
entry_script='./finetune/run_classifier_azureml.py',
node_count=1,
process_count_per_node=4,
distributed_backend='mpi',
use_gpu=True)
# path to the Python environment in the custom Docker image
estimator._estimator_config.environment.python.interpreter_path = '/opt/miniconda/envs/amlbert/bin/python'
###Output
_____no_output_____
###Markdown
Submit and Monitor your run
###Code
from azureml.core import Experiment
experiment_name = 'bert-large-RTE'
experiment = Experiment(ws, name=experiment_name)
run = experiment.submit(estimator)
from azureml.widgets import RunDetails
RunDetails(run).show()
#run.cancel()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. PyTorch Pretrained BERT on AzureML with GLUE DatasetIn this notebook, you will find the following contents:- Download GLUE dataset on the remote compute and store them in Azure storage- Speed-up fine-tuning BERT for GLUE dataset on AzureML GPU clusters PrerequisitesFollow instructions in BERT_pretraining.ipynb notebook for setting up AzureML
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize workspaceTo create or access an Azure ML Workspace, you will need to import the AML library and the following information:* A name for your workspace* Your subscription id* The resource group nameInitialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace) object from the existing workspace you created in the Prerequisites step or create a new one.
###Code
from azureml.core.workspace import Workspace
ws = Workspace.setup()
ws_details = ws.get_details()
print('Name:\t\t{}\nLocation:\t{}'
.format(ws_details['name'],
ws_details['location']))
###Output
_____no_output_____
###Markdown
Create an experimentCreate an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureexperiment) to track all the runs in your workspace for this distributed PyTorch tutorial. Download GLUE dataset on the remote computeBefore we start to fine-tune the pretained BERT model, we need to download the [GLUE data](https://gluebenchmark.com/tasks) by running the [script](https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e) and unpack it to an Azure Blob container. Define AzureML datastore to collect training datasetTo make data accessible for remote training, AML provides a convenient way to do so via a [Datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data). The datastore provides a mechanism for you to upload/download data to Azure Storage, and interact with it from your remote compute targets.Each workspace is associated with a default Azure Blob datastore named `'workspaceblobstore'`. In this work, we use this default datastore to collect the GLUE training dataset .
###Code
from azureml.core import Datastore
ds = ws.get_default_datastore()
###Output
_____no_output_____
###Markdown
Create a project directoryCreate a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on.
###Code
import os
project_root = os.path.dirname((os.path.abspath('../../')))
#print(project_root)
###Output
_____no_output_____
###Markdown
Download GLUE dataset in BingBert/ directory
###Code
ds.upload(src_dir=os.path.join(project_root,'data','glue_data'), target_path='glue_data')
###Output
_____no_output_____
###Markdown
Create a folder named "bert-large-checkpoints" which contains the .pt bert checkpoint file against which you want to run your eval tasks. The following code will upload the folder to the datastore. The URL for the checkpoint is: https://bertonazuremlwestus2.blob.core.windows.net/public/models/bert_large_uncased_original/bert_encoder_epoch_200.pt
###Code
ds.upload(src_dir=os.path.join(project_root,'data','bert-large-checkpoints') , target_path='bert-large-checkpoints')
###Output
_____no_output_____
###Markdown
Uploading bert-large config file to datastore
###Code
ds.upload(src_dir=os.path.join(project_root,'pretrain','configs'), target_path='config')
###Output
_____no_output_____
###Markdown
**Remove the /data folder to avoid uploading a folder greater than 300MB.**
Fine-tuning BERT with Distributed Training
As our `GLUE` dataset is ready in Azure storage, we can start fine-tuning the model by exploiting the power of distributed training.
Create a GPU remote compute target
We need to create a GPU [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecturecompute-target) to perform the fine-tuning. In this example, we create an AmlCompute cluster as our training compute resource. This code creates a cluster for you if it does not already exist in your workspace.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
gpu_cluster_name = "bertcodetesting"
try:
gpu_compute_target = ComputeTarget(workspace=ws, name=gpu_cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC24', max_nodes=4)
# create the cluster
gpu_compute_target = ComputeTarget.create(ws, gpu_cluster_name, compute_config)
gpu_compute_target.wait_for_completion(show_output=True)
# Use the 'status' property to get a detailed status for the current cluster.
print(gpu_compute_target.status.serialize())
###Output
_____no_output_____
###Markdown
Create a PyTorch estimator for fine-tuning
Let us create a new PyTorch estimator to run the fine-tuning script `run_classifier.py`, which is already provided at [the original repository](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py). Please refer [here](https://github.com/huggingface/pytorch-pretrained-BERTfine-tuning-with-bert-running-the-examples) for more detail about the script. The original `run_classifier.py` script uses the PyTorch distributed launch utility to launch multiple processes across nodes and GPUs. We prepared a modified version [run_classifier_azureml.py](./run_classifier_azureml.py) so that we can launch it based on AzureML's built-in MPI backend.
To use AML's tracking and metrics capabilities, we need to add a small amount of AzureML code inside the training script. In `run_classifier_azureml.py`, we will log some metrics to our AML run. To do so, we will access the AML run object within the script:
```Python
from azureml.core.run import Run
run = Run.get_context()
```
Further within `run_classifier_azureml.py`, we log the learning rate, training loss and evaluation accuracy the model achieves as:
```Python
run.log('lr', np.float(args.learning_rate))
...
for step, batch in enumerate(tqdm(train_dataloader, desc="Iteration")):
    ...
    run.log('train_loss', np.float(loss))
...
result = {'eval_loss': eval_loss, 'eval_accuracy': eval_accuracy}
for key in sorted(result.keys()):
    run.log(key, str(result[key]))
```
The following code runs the GLUE RTE task against a bert-large checkpoint with the parameters used by Huggingface for fine-tuning:
- num_train_epochs = 3
- max_seq_length = 128
- train_batch_size = 8
- learning_rate = 2e-5
- grad_accumulation_step = 2
###Code
from azureml.train.dnn import PyTorch
from azureml.core.runconfig import RunConfiguration
from azureml.core.container_registry import ContainerRegistry
run_user_managed = RunConfiguration()
run_user_managed.environment.python.user_managed_dependencies = True
# Define custom Docker image info
image_name = 'bing/bertnew:0.0.4'
image_registry_details = ContainerRegistry()
image_registry_details.address = ""
image_registry_details.username = ""
image_registry_details.password = ""
estimator = PyTorch(source_directory='../../../',
compute_target=gpu_compute_target,
#Docker image
use_docker=True,
custom_docker_image=image_name,
image_registry_details=image_registry_details,
user_managed=True,
script_params = {
'--bert_model':'bert-large-uncased',
"--model_file_location": ds.path('bert-large-checkpoints/').as_mount(),
'--task_name': 'RTE',
'--data_dir': ds.path('glue_data/RTE/').as_mount(),
'--do_train' : '',
'--do_eval': '',
'--do_lower_case': '',
'--max_seq_length': 128,
'--train_batch_size': 8,
'--gradient_accumulation_steps': 2,
'--learning_rate': 2e-5,
'--num_train_epochs': 3.0,
'--output_dir': ds.path('output/').as_mount(),
'--model_file': 'bert_encoder_epoch_245.pt',
'--fp16': ""
},
entry_script='./finetune/run_classifier_azureml.py',
node_count=1,
process_count_per_node=4,
distributed_backend='mpi',
use_gpu=True)
# path to the Python environment in the custom Docker image
estimator._estimator_config.environment.python.interpreter_path = '/opt/miniconda/envs/amlbert/bin/python'
###Output
_____no_output_____
###Markdown
Submit and Monitor your run
###Code
from azureml.core import Experiment
experiment_name = 'bert-large-RTE'
experiment = Experiment(ws, name=experiment_name)
run = experiment.submit(estimator)
from azureml.widgets import RunDetails
RunDetails(run).show()
#run.cancel()
###Output
_____no_output_____ |
data_to_insights.ipynb | ###Markdown
Data Exploration FROM DATA TO INSIGHTS
Introduction
This notebook is designed so that it can be run in one go.
**NOTE**
Before you run the script fill in the URL for the constant URL_FILE
Python 3, conda and pip should be installed upfront.
###Code
!python --version
!conda --version
!pip --version
###Output
_____no_output_____
###Markdown
Install whatever packages that are needed
###Code
!pip install folium==0.12.1
!pip install matplotlib==3.4.3
!pip install numpy==1.21.2
!pip install pandas==1.3.2
!pip install requests==2.26.0
!pip install scikit-learn==0.24.2
import folium
import json
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import random
import requests
from IPython.display import display
from pathlib import Path
from sklearn.cluster import DBSCAN
from sklearn.cluster import AffinityPropagation
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import PCA
DATA_FILE = "global_cities_data_set.json"
URL_FILE = "<URL>"
START_FROM_SCRATCH = True
# Filters
REGION_FILTER = 'EUREG'
YEAR_LIST = [2018, 2019, 2020, 2021, 2022, 2023, 2024]
# Clustering hyper parameters
EPS_VALUE = 0.02
MIN_SAMPLES_VALUE = 50
N_CLUSTERS = 10
DTYPES_DICT = {
'year': np.int32,
'indicator_name': object,
'geography_iso': object,
'geography_country': object,
'geographyid': object,
'geographyname': object,
'value_unit': object,
'databank': object,
'value': np.float64
}
FILE_LIST = [
'Consumer spending by product',
'Population',
'Household numbers by income band'
]
def download_and_read_source_data():
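    # Download the raw JSON only when START_FROM_SCRATCH is True, then flatten the 'data' records into a DataFrame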
if START_FROM_SCRATCH:
r = requests.get(URL_FILE)
open(DATA_FILE, 'wb').write(r.content)
file_object = open(DATA_FILE, encoding='utf8')
data = json.load(file_object)
df = pd.json_normalize(data['data'])
print("df.shape: (all): ", df.shape)
# Make sure the year field is an integer
df.year = df.year.astype('int32')
file_object.close()
return df
###Output
_____no_output_____
###Markdown
Filtering
In the current setup it's only possible to visualize the data for the EU region.
The geographyid is unique for all countries except for the USA. Therefore a combined logical key named geography_region_key is created, consisting of geographyid and geographyname, which is 100% unique for the region.
year combined with geography_region_key is a primary key which can be used to merge data.
###Code
def filter_data(par_df, par_year):
par_df["geography_region_key"] = par_df["geographyid"] + "_" + par_df["geographyname"]
par_df = par_df[(par_df['databank'] == 'EUREG') & (par_df['year'] == par_year)]
print("par_df.shape: (" + REGION_FILTER + " & " + str(par_year) + "): ", par_df.shape)
return par_df
###Output
_____no_output_____
###Markdown
Indicators
The file provided hosts a number of different types of data as can be seen in the indicator_name field.
Some indicators belong together. For example Population per age range.
These indicator_groups are handled separately.
Singular indicators are written into separate files.
###Code
def split_indicators(par_data_dir_name, par_df_data):
#Some indicator are grouped
indicator_groups = [
'Household numbers by income band',
'Population',
'Consumer spending by product'
]
indicator_groups_strings = (
'Household numbers by income band',
'Population',
'Consumer spending by product'
)
other_indicators = []
for word in par_df_data.indicator_name.unique()[:]:
if not word.startswith(indicator_groups_strings):
other_indicators.append(word)
# Create separate files for indicators.
for indicator in other_indicators:
df_filtered = par_df_data[(par_df_data['indicator_name'] == indicator)]
filtered_file_name = \
par_data_dir_name + os.path.sep + indicator.replace(" ", "_"). \
replace(",", "_").replace("/", "_") + '.csv'
df_filtered.to_csv(filtered_file_name, sep=";", encoding="utf-8")
# Group some indicators into one file.
for indicator_group in indicator_groups:
df_filtered = par_df_data[(
par_df_data['indicator_name'].str.startswith(indicator_group))]
filtered_file_name = \
par_data_dir_name + os.path.sep + indicator_group + '.csv'
df_filtered.to_csv(filtered_file_name, sep=";", encoding="utf-8")
par_df_data.to_csv(par_data_dir_name + os.path.sep + "total_set.csv",
sep=";",
encoding="utf-8")
###Output
_____no_output_____
###Markdown
Indicator groups
Now process the indicator groups. Different bands of the same kind of data are put into one file for further processing.
As the value_unit might not be the same we can't compare the data in its original form.
For each band a ratio is calculated to indicate what proportion of total this band represents.
This makes it possible to compare the data no matter the country.
###Code
def generate_grouped_indicator_files(par_data_dir_name, par_file_item):
df_data = pd.read_csv(
par_data_dir_name + os.path.sep + par_file_item + ".csv",
sep=";",
encoding="utf8",
dtype=DTYPES_DICT)
print("shape: ", df_data.shape)
# Remove unwanted columns when grouping
df_sum = df_data.loc[:, ("geography_region_key", "year", "value")]
# Sum values
df_grouped = df_sum.groupby(by=['year', 'geography_region_key']).sum()
# Back to a data frame
df_sum = df_grouped.reset_index()
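    # Ratio = band value / total value for the same region and year, making bands comparable across regions and units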
def calculate_ratio(par_year, par_geography_region_key, par_value):
df_filtered_sum = df_sum[(df_sum['year'] == par_year) &
(df_sum['geography_region_key'] == par_geography_region_key)].sum()
return par_value / df_filtered_sum.values[2]
df_data['ratio'] = df_data.apply(
lambda row : calculate_ratio(
row['year'],
row['geography_region_key'],
row['value']), axis = 1)
df_data['ratio'].fillna(0, inplace=True)
print("shape: ", df_data.shape)
    df_data.to_csv(par_data_dir_name + os.path.sep + par_file_item + "_ext.csv",
                   sep=";",
                   encoding="utf8")
    print("End " + par_file_item)
def generate_rows_with_grouped_indicators(par_data_dir_name, par_file_item):
df_data = pd.read_csv(
par_data_dir_name + os.path.sep + par_file_item + "_ext.csv",
sep=";",
encoding="utf8")
print("shape: ", df_data.shape)
column_names = []
df_data_ext = pd.DataFrame()
indicator_names = df_data.indicator_name.unique()
for indicator_name in indicator_names:
df_select = df_data[df_data.indicator_name == indicator_name]
column_name = indicator_name. \
replace("resident", ""). \
replace("based", ""). \
replace("current", ""). \
replace("prices", ""). \
replace("(", ""). \
replace(")", ""). \
replace("Consumer spending by product / service - ", ""). \
replace("Household numbers by income band - ", ""). \
replace(",", ""). \
replace(" ", "_"). \
replace("-", "_"). \
replace("____", ""). \
replace("__", "_"). \
lower()
#print("column_name: ", column_name)
column_names.append(column_name)
df_select[column_name] = df_select['ratio']
df_select = df_select.loc[:, ("geographyid", "geography_region_key", "year", column_name)]
if (len(df_data_ext) == 0):
df_data_ext = df_select
else:
df_data_ext = df_data_ext.merge(
right=df_select,
on=["geographyid", "geography_region_key", "year"],
how="outer")
#print("Shape: ", df_data_ext.shape)
df_data_ext.fillna(0, inplace=True)
df_data_ext.to_csv(par_data_dir_name + os.path.sep + par_file_item + "_ext2.csv",
sep=";",
encoding="utf8")
print("End " + file_item)
###Output
_____no_output_____
###Markdown
Preprocess data
Download the data and preprocess it for each year.
###Code
def preprocess_data(par_data_dir_name, par_file_item, par_df_filtered):
print("Process :", par_file_item)
    split_indicators(par_data_dir_name, par_df_filtered)
    generate_grouped_indicator_files(par_data_dir_name, par_file_item)
    generate_rows_with_grouped_indicators(par_data_dir_name, par_file_item)
###Output
_____no_output_____
###Markdown
Data exploration
Now we have a set of different files, one file for each indicator(group). Let's look at the data in more detail.
Primary data points
Some of the data are the primary datapoints. These can be divided into grouped and non-grouped indicators.
Non-grouped indicators:
| indicator_name | indicator_type | value_unit | value_type | regions | comment | |
|-----------------------------------------------------------------------|-----------------|-----------------|------------|------------------|--------------------------------|---|
| Average_household_size | demographics | Persons | float | AFR, EUREG, GCFS | | |
| Births | demographics | Persons | float | AFR, GCFS | how to interpret? Aggregations | |
| CREA_house_price_index | housing | Index | float | AMREG | CAN | |
| Deaths | demographics | Persons | float | AFR, GCFS | how to interpret? | |
| Employment_-_Industry | employment | Persons | float | AFR, GCFS | not complete, how to interpret | |
| Employment_-_Transport__storage__information_&_communication_services | employment | Persons | float | AFR, GCFS | how to interpret, not complete | |
| Gross_domestic_product__real | gdp | currency | float | EUREG, AMREG | | |
| Homeownership_rate | housing | % | float | AMREG | USA | |
| Household_disposable_income__per_household__nominal | housing | currency | float | EUREG | | |
| Household_disposable_income__per_household__real | housing | currency | float | EUREG | | |
| Household_disposable_income__real | housing | currency | float | EUREG | | |
| Housing_permits_-_multi_family | housing | Housing permits | float | AMREG | USA | |
| Housing_permits_-_single_family | housing | Housing permits | float | AMREG | USA | |
| Housing_permits_-_total | housing | Housing permits | float | AMREG | USA | |
| Housing_starts | housing | null | float | AMREG | CAN, how to interpret? | |
| Housing_starts_-_multi_family | housing | Housing starts | float | AMREG | USA | |
| Housing_starts_-_single_family | housing | Housing starts | float | AMREG | USA | |
| Housing_starts_-_total | housing | Housing starts | float | AMREG | USA | |
| Income_from_employment__nominal | income | currency | float | AMREG | USA | |
| Income_from_rent__dividends_and_interest__nominal | income | currency | float | AMREG | USA | |
| Income_taxes__nominal | income | currency | float | AMREG | USA | |
| Labor_force | employment | Persons | float | AMREG | USA, CAN | |
| Labor_force_participation_rate | employment | % | float | AMREG | USA | |
| Labour_force_participation_rate | employment | % | float | AMREG | CAN | |
| Median_household_income__real | income | currency | float | AMREG | USA | |
| Net_migration_(including_statistical_adjustment) | demographics | Persons | float | AFR, GCFS | can be both negative and positive | |
| New_housing_price_index | housing | index | float | AMREG | CAN | |
| Personal_disposable_income__per_capita__real | income | currency | float | AMREG | USA, CAN | |
| Personal_disposable_income__per_household__real | income | currency | float | AMREG | USA, CAN | |
| Personal_income__per_capita__real | income | currency | float | AMREG | USA, CAN | |
| Personal_income__per_household__real | income | currency | float | AMREG | USA, CAN | |
| Proprietors_incomes__nominal | income | currency | float | AMREG | USA | |
| Residential_building_permits | housing | null | float | AMREG | CAN | |
| Social_security_payments__nominal | income | currency | float | AMREG | USA | |
| Total_households | housing | Households | float | All | | |
| Total_population | demographics | Persons | float | All | | |
| Unemployment_level | unemployment | Persons | float | AMREG | USA, CAN | |
| Unemployment_rate | unemployment | % | float | AMREG | USA, CAN | |
| Urban_Total_Population | demographics | Persons | float | All | | |
Grouped indicators
| indicator_name | indicator_type | value_unit | value_type | regions | comment |
|-----------------------------------|-----------------|-------------|------------|---------|-----------------------------------------------------|
| Population* | demographics | Persons | float | All | |
| Consumer spending by product* | spending | currency | float | All | value_unit contains : empty, null |
| Household numbers by income band* | income | Households | float | All | value contains float values very big and very small |
Secondary data points
There's a set of secondary data points that describe the primary data points in terms of a number of facets. For instance geographical region, year etc.
| indicator_name | value_unit | value_type | key | comment |
|-------------------|------------|------------|------|----------------------------|
| year | year | int | Key1 | |
| geography_iso | category | string | | ISO 3166-1 alpha-3 |
| geography_country | category | string | | |
| geographyid | category | string | Key2 | NUTS-2 region data (EUREG), No standards found for other regions |
| geographyname | category | string | Key3 | |
| databank | category | string | | |
Conclusion
The indicators that are available for all regions are limited. The rest is fragmented; the most detailed data is available for the AMREG region.
For the geographyid a standard applies based on the ISO 3166-1 alpha-3 standard, extended with a 2 or 3 digit code. In order to visualize the results of the clustering on a map, longitude and latitude data is needed per region. I've only been able to find this definition for the EUREG region, but not for the other regions. This is a drawback for now. This data should be available somehow, so it's not considered an impediment.
Assumptions made
Though it's possible to generate cluster data on a global level, it's not possible to visualize it. Therefore I've made the assumption that it's ok to take just the EUREG region so the results can be shown to the stakeholders on a map.
I will focus on data that is available on a global level, but filter to the EUREG region, so that whenever the geospatial data becomes available it's easy to visualize it for all regions of the world.
Clustering
Now that we have preprocessed the data, we can start the clustering.
###Code
def read_file(par_data_dir_name, par_file_name):
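    # Load one preprocessed indicator file; the key columns are dropped below so only the numeric ratio features feed the clustering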
X = pd.read_csv(par_data_dir_name + os.path.sep + par_file_name + '.csv',
sep=';',
encoding="utf8")
# Dropping irrelevant columns from the data
drop_columns = [
'Unnamed: 0',
'year',
'geography_region_key',
'geographyid'
]
X_stripped = X.drop(drop_columns, axis=1)
# Handling the missing values
X_stripped.fillna(0, inplace=True)
print("X.shape: ", X_stripped.shape)
return (X, X_stripped)
def do_PCA(par_X_normalized):
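    # Reduce the feature space to two principal components so the clusters can be visualized in 2-D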
pca = PCA(n_components=2)
par_X_normalized = par_X_normalized.dropna()
X_principal = pca.fit_transform(par_X_normalized)
X_principal = pd.DataFrame(X_principal)
X_principal.columns = ['P1', 'P2']
return X_principal
def init_algo():
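    # Agglomerative clustering with N_CLUSTERS is the active choice; DBSCAN and AffinityPropagation remain as commented alternatives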
#return DBSCAN(eps=EPS_VALUE, min_samples=MIN_SAMPLES_VALUE)
#return AffinityPropagation(random_state=None, max_iter=20)
return AgglomerativeClustering(n_clusters=N_CLUSTERS)
def get_labels(par_DBSCAN, par_X_principal):
db_default = par_DBSCAN.fit(par_X_principal)
labels = db_default.labels_
print("labels: ", labels.max())
return labels
def generate_colours():
'''Generate a set of random colours for the plot'''
colours = {}
for i in range(-1, 200):
r = random.random()
b = random.random()
g = random.random()
color = (r, g, b)
colours[i] = color
return colours
def do_plot(par_data_dir_name,
par_file_item,
par_labels,
par_X_principal,
colours):
cvec = [colours[label] for label in par_labels]
legend_list = []
label_list = []
for counter in range(0, par_labels.max()):
legend_item = plt.scatter(
par_X_principal['P1'],
par_X_principal['P2'],
color=colours[counter])
legend_list.append(legend_item)
label_item = "Label " + str(counter)
label_list.append(label_item)
# Plotting P1 on the X-Axis and P2 on the Y-Axis
# according to the colour vector defined
plt.figure(figsize=(9, 9))
plt.scatter(par_X_principal['P1'], par_X_principal['P2'], c=cvec)
# Building the legend
plt.legend(legend_list, label_list)
plt.savefig(par_data_dir_name + os.path.sep + par_file_item + '.png')
return plt
def run_algo(par_algo, par_X_principal):
db = par_algo.fit(par_X_principal)
return db
def do_clustering(par_data_dir_name, par_file_item):
X, X_stripped = read_file(par_data_dir_name, par_file_item + "_ext2")
X_principal = do_PCA(X_stripped)
algo = init_algo()
labels = get_labels(algo, X_principal)
result = run_algo(algo, X_principal)
plt = do_plot(par_data_dir_name,
par_file_item,
labels,
X_principal,
colours)
plt.show()
X['cluster'] = result.labels_.tolist()
X.to_csv(par_data_dir_name + os.path.sep + par_file_item + "_clusters.csv",
sep=";",
encoding="utf8")
###Output
_____no_output_____
###Markdown
Visualization
The plots show the different clusters but it's not clear to which regions the data points refer.
Therefore we will plot the cluster data on a map so it's clear where the actual clusters are.
###Code
COLOURS = [
'lightred',
'lightgreen',
'yellow',
'lightpurple',
'darkgrey',
'darkred',
'darkgreen',
'darkyellow',
'darkpurple',
'dodgerblue',
'red',
'blue',
'green',
'cyan',
'black',
'lightyellow',
'lightgrey',
'olive',
'purple',
'lime'
]
def get_coordinates(coordinates, item_no):
if coordinates == np.nan:
return None
try:
if item_no == 0:
return coordinates[0]
else:
return coordinates[1]
except Exception:
return None
def read_geo_data():
DATA_FILE = "nutspt_3.json"
file_object = open(DATA_FILE, encoding="UTF-8")
json_data = json.load(file_object)
df = pd.json_normalize(json_data['features'])
df['longitude'] = df.apply(
lambda row : get_coordinates(row['geometry.coordinates'], 0), axis = 1)
df['latitude'] = df.apply(
lambda row : get_coordinates(row['geometry.coordinates'], 1), axis = 1)
return df
def read_cluster_data(par_data_dir_name, par_file_name):
return pd.read_csv(
par_data_dir_name + os.path.sep + par_file_name + "_clusters.csv",
sep=";",
encoding="utf8")
def merge_data(par_df_cluster, par_df_geo):
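    # Attach the NUTS point geometry (longitude/latitude) to each clustered region by joining geographyid to properties.id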
df_cluster_merged = par_df_cluster.merge(par_df_geo,
left_on='geographyid',
right_on='properties.id',
how='left')
return df_cluster_merged.dropna()
def plot_map(par_data_dir_name, par_df_cluster, par_title, par_year):
# Initialize map and center on Munich
    folium_map = folium.Map(location=[48.130518, 11.5364172],
                            zoom_start=3,
                            width='75%',
                            height='75%')
title_html = '''
<h3 align="center" style="font-size:16px"><b>{} ({})</b></h3>
'''.format(par_title, par_year)
folium_map.get_root().html.add_child(folium.Element(title_html))
for index, row in par_df_cluster.iterrows():
colour = COLOURS[row.cluster]
folium.CircleMarker(
location=[row['latitude'], row['longitude']],
popup="<stong>" + str(row['properties.id']) + "</stong>",
tooltip=str(row.cluster),
color=colour,
).add_to(folium_map)
folium_map.save(par_data_dir_name + os.path.sep + par_title + ".html")
folium_map
return folium_map
def do_visualization(par_data_dir_name,
par_file_item,
par_df_geo_data,
par_map_list,
par_year):
df_cluster = read_cluster_data(par_data_dir_name, par_file_item)
df_merged = merge_data(df_cluster, par_df_geo_data)
print("df_merged.shape :", df_merged.shape)
cluster_map = plot_map(par_data_dir_name,
df_merged,
par_file_item,
par_year)
par_map_list.append(cluster_map)
display(cluster_map)
return par_map_list
###Output
_____no_output_____
###Markdown
main process
Loop through the different indicators and years and perform the clustering and rendering of the maps.
Render cluster maps
The maps are rendered per indicator per year
The maps are also saved as PNG files in the data directory.
Render maps
The maps are rendered per indicator per year
The maps are also saved as HTML files in the data directory.
It's possible to zoom in and out of an area.
If you hover over a data point it shows you the cluster it belongs to. This corresponds to the cluster number as can be found in the *_cluster.csv files in the data directory.
Clicking on a data point shows you the region of that data point. This corresponds to the geographyid in the *_cluster.csv files in the data directory.
###Code
# Download and filter base data set
df_data = download_and_read_source_data()
# Generate colour palette for cluster maps
colours = generate_colours()
# Initialize list of maps
map_list = []
# Retrieve geospatial data
df_geo_data = read_geo_data()
for file_item in FILE_LIST:
print("Process file: ", file_item)
for year in YEAR_LIST:
print(">>Process year: ", year)
data_dir_name = "data_" + str(year)
# Create a directory for derived data.
Path(data_dir_name).mkdir(parents=True, exist_ok=True)
df_filtered = filter_data(df_data, year)
preprocess_data(data_dir_name, file_item, df_filtered)
do_clustering(data_dir_name, file_item)
map_list = do_visualization(data_dir_name,
file_item,
df_geo_data,
map_list,
year)
print("End cell")
###Output
_____no_output_____ |
Course 1 - Natural Language Processing with Classification and Vector Spaces/Week 3/Final_Assignment.ipynb | ###Markdown
Assignment 3: Hello Vectors
Welcome to this week's programming assignment on exploring word vectors. In natural language processing, we represent each word as a vector consisting of numbers. The vector encodes the meaning of the word. These numbers (or weights) for each word are learned using various machine learning models, which we will explore in more detail later in this specialization. Rather than make you code the machine learning models from scratch, we will show you how to use them. In the real world, you can always load the trained word vectors, and you will almost never have to train them from scratch. In this assignment, you will:
- Predict analogies between words.
- Use PCA to reduce the dimensionality of the word embeddings and plot them in two dimensions.
- Compare word embeddings by using a similarity measure (the cosine similarity).
- Understand how these vector space models work.
1.0 Predict the Countries from Capitals
In the lectures, we have illustrated the word analogies by finding the capital of a country from the country. We have changed the problem a bit in this part of the assignment. You are asked to predict the **countries** that correspond to some **capitals**. You are playing trivia against some second grader who just took their geography test and knows all the capitals by heart. Thanks to NLP, you will be able to answer the questions properly. In other words, you will write a program that can give you the country by its capital. That way you are pretty sure you will win the trivia game. We will start by exploring the data set.
1.1 Importing the data
As usual, you start by importing some essential Python libraries and then load the dataset. The dataset will be loaded as a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/getting_started/dsintro.html), which is a very common method in data science. This may take a few minutes because of the large size of the data.
###Code
# Run this cell to import packages.
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from utils import get_vectors
data = pd.read_csv('capitals.txt', delimiter=' ')
data.columns = ['city1', 'country1', 'city2', 'country2']
# print first five elements in the DataFrame
data.head(5)
###Output
_____no_output_____
###Markdown
*** To Run This Code On Your Own Machine:
Note that because the original Google News word embedding dataset is about 3.64 gigabytes, the workspace is not able to handle the full file set. So we've downloaded the full dataset, extracted a sample of the words that we're going to analyze in this assignment, and saved it in a pickle file called `word_embeddings_capitals.p`.
If you want to download the full dataset on your own and choose your own set of word embeddings, please see the instructions and some helper code.
- Download the dataset from this [page](https://code.google.com/archive/p/word2vec/).
- Search in the page for 'GoogleNews-vectors-negative300.bin.gz' and click the link to download.
Copy-paste the code below and run it on your local machine after downloading the dataset to the same directory as the notebook.
```python
import nltk
from gensim.models import KeyedVectors

embeddings = KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin', binary = True)
f = open('capitals.txt', 'r').read()
set_words = set(nltk.word_tokenize(f))
select_words = words = ['king', 'queen', 'oil', 'gas', 'happy', 'sad', 'city', 'town', 'village', 'country', 'continent', 'petroleum', 'joyful']
for w in select_words:
    set_words.add(w)

def get_word_embeddings(embeddings):
    word_embeddings = {}
    for word in embeddings.vocab:
        if word in set_words:
            word_embeddings[word] = embeddings[word]
    return word_embeddings

# Testing your function
word_embeddings = get_word_embeddings(embeddings)
print(len(word_embeddings))
pickle.dump( word_embeddings, open( "word_embeddings_subset.p", "wb" ) )
```
*** Now we will load the word embeddings as a [Python dictionary](https://docs.python.org/3/tutorial/datastructures.htmldictionaries). As stated, these have already been obtained through a machine learning algorithm.
###Code
word_embeddings = pickle.load(open("word_embeddings_subset.p", "rb"))
len(word_embeddings) # there should be 243 words that will be used in this assignment
###Output
_____no_output_____
###Markdown
Each of the word embeddings is a 300-dimensional vector.
###Code
print("dimension: {}".format(word_embeddings['Spain'].shape[0]))
###Output
dimension: 300
###Markdown
Predict relationships among words
Now you will write a function that will use the word embeddings to predict relationships among words.
* The function will take as input three words.
* The first two are related to each other.
* It will predict a 4th word which is related to the third word in a similar manner as the two first words are related to each other.
* As an example, "Athens is to Greece as Bangkok is to ______"?
* You will write a program that is capable of finding the fourth word.
* We will give you a hint to show you how to compute this.
A similar analogy would be the following: You will implement a function that can tell you the capital of a country. You should use the same methodology shown in the figure above. To do this, you'll first compute the cosine similarity metric or the Euclidean distance.
1.2 Cosine Similarity
The cosine similarity function is:
$$\cos (\theta)=\frac{\mathbf{A} \cdot \mathbf{B}}{\|\mathbf{A}\|\|\mathbf{B}\|}=\frac{\sum_{i=1}^{n} A_{i} B_{i}}{\sqrt{\sum_{i=1}^{n} A_{i}^{2}} \sqrt{\sum_{i=1}^{n} B_{i}^{2}}}\tag{1}$$
$A$ and $B$ represent the word vectors and $A_i$ or $B_i$ represent index i of that vector.
* Note that if A and B are identical, you will get $cos(\theta) = 1$.
* Otherwise, if they are the total opposite, meaning, $A= -B$, then you would get $cos(\theta) = -1$.
* If you get $cos(\theta) =0$, that means that they are orthogonal (or perpendicular).
* Numbers between 0 and 1 indicate a similarity score.
* Numbers between -1 and 0 indicate a dissimilarity score.
**Instructions**: Implement a function that takes in two word vectors and computes the cosine distance.
Hints: Python's NumPy library adds support for linear algebra operations (e.g., dot product, vector norm ...). Use numpy.dot. Use numpy.linalg.norm.
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def cosine_similarity(A, B):
'''
Input:
A: a numpy array which corresponds to a word vector
B: A numpy array which corresponds to a word vector
Output:
cos: numerical number representing the cosine similarity between A and B.
'''
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
dot = np.dot(A, B)
norma = np.linalg.norm(A)
normb = np.linalg.norm(B)
cos = np.divide(dot, np.multiply(norma, normb))
### END CODE HERE ###
return cos
# feel free to try different words
king = word_embeddings['king']
queen = word_embeddings['queen']
cosine_similarity(king, queen)
###Output
_____no_output_____
###Markdown
**Expected Output**: $\approx$ 0.6510956

1.3 Euclidean distance

You will now implement a function that computes the similarity between two vectors using the Euclidean distance. Euclidean distance is defined as:

$$ \begin{aligned} d(\mathbf{A}, \mathbf{B})=d(\mathbf{B}, \mathbf{A}) &=\sqrt{\left(A_{1}-B_{1}\right)^{2}+\left(A_{2}-B_{2}\right)^{2}+\cdots+\left(A_{n}-B_{n}\right)^{2}} \\ &=\sqrt{\sum_{i=1}^{n}\left(A_{i}-B_{i}\right)^{2}} \end{aligned}$$

* $n$ is the number of elements in the vector
* $A$ and $B$ are the corresponding word vectors.
* The more similar the words, the more likely the Euclidean distance will be close to 0.

**Instructions**: Write a function that computes the Euclidean distance between two vectors.

Hints: Use numpy.linalg.norm.
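
As an optional sanity check (not required for the assignment), the explicit summation form of this formula and ``numpy.linalg.norm`` give the same result on arbitrary vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=300)
B = rng.normal(size=300)

d_sum = np.sqrt(np.sum((A - B) ** 2))  # explicit summation form of the formula
d_norm = np.linalg.norm(A - B)         # norm of the difference vector
print(np.isclose(d_sum, d_norm))       # True
```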
###Code
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def euclidean(A, B):
"""
Input:
A: a numpy array which corresponds to a word vector
B: A numpy array which corresponds to a word vector
Output:
d: numerical number representing the Euclidean distance between A and B.
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# euclidean distance
d = np.linalg.norm(A-B)
### END CODE HERE ###
return d
# Test your function
euclidean(king, queen)
###Output
_____no_output_____
###Markdown
**Expected Output:**2.4796925 1.4 Finding the country of each capitalNow, you will use the previous functions to compute similarities between vectors,and use these to find the capital cities of countries. You will write a function thattakes in three words, and the embeddings dictionary. Your task is to find thecapital cities. For example, given the following words: - 1: Athens 2: Greece 3: Baghdad,your task is to predict the country 4: Iraq.**Instructions**: 1. To predict the capital you might want to look at the *King - Man + Woman = Queen* example above, and implement that scheme into a mathematical function, using the word embeddings and a similarity function.2. Iterate over the embeddings dictionary and compute the cosine similarity score between your vector and the current word embedding.3. You should add a check to make sure that the word you return is not any of the words that you fed into your function. Return the one with the highest score.
###Code
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_country(city1, country1, city2, embeddings):
"""
Input:
city1: a string (the capital city of country1)
country1: a string (the country of capital1)
city2: a string (the capital city of country2)
embeddings: a dictionary where the keys are words and values are their embeddings
Output:
countries: a dictionary with the most likely country and its similarity score
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# store the city1, country 1, and city 2 in a set called group
group = set((city1, country1, city2))
# get embeddings of city 1
city1_emb = embeddings[city1]
# get embedding of country 1
country1_emb = embeddings[country1]
# get embedding of city 2
city2_emb = embeddings[city2]
# get embedding of country 2 (it's a combination of the embeddings of country 1, city 1 and city 2)
# Remember: King - Man + Woman = Queen
vec = np.add(np.subtract(country1_emb, city1_emb), city2_emb)
    # Initialize the similarity to -1 (it will be replaced by similarities that are closer to +1)
similarity = -1
# initialize country to an empty string
country = ''
# loop through all words in the embeddings dictionary
for word in embeddings.keys():
# first check that the word is not already in the 'group'
if word not in group:
# get the word embedding
word_emb = embeddings[word]
# calculate cosine similarity between embedding of country 2 and the word in the embeddings dictionary
cur_similarity = cosine_similarity(vec, word_emb)
# if the cosine similarity is more similar than the previously best similarity...
if cur_similarity > similarity:
# update the similarity to the new, better similarity
similarity = cur_similarity
# store the country as a tuple, which contains the word and the similarity
country = (word, similarity)
### END CODE HERE ###
return country
# Testing your function, note to make it more robust you can return the 5 most similar words.
get_country('Athens', 'Greece', 'Cairo', word_embeddings)
###Output
_____no_output_____
###Markdown
**Expected Output:** ('Egypt', 0.7626821)

1.5 Model Accuracy

Now you will test your new function on the dataset and check the accuracy of the model:

$$\text{Accuracy}=\frac{\text{Number of correct predictions}}{\text{Total number of predictions}}$$

**Instructions**: Write a program that can compute the accuracy on the dataset provided for you. You have to iterate over every row to get the corresponding words and feed them into your `get_country` function above.

Hints: Use pandas.DataFrame.iterrows.
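
For example (with made-up numbers, purely to illustrate the formula): if `get_country` returned the correct country for 120 of the 150 rows in the dataframe, the accuracy would be $120/150 = 0.8$.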
###Code
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_accuracy(word_embeddings, data):
'''
Input:
word_embeddings: a dictionary where the key is a word and the value is its embedding
data: a pandas dataframe containing all the country and capital city pairs
Output:
accuracy: the accuracy of the model
'''
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# initialize num correct to zero
num_correct = 0
# loop through the rows of the dataframe
for i, row in data.iterrows():
# get city1
city1 = row['city1']
# get country1
country1 = row['country1']
# get city2
city2 = row['city2']
# get country2
country2 = row['country2']
# use get_country to find the predicted country2
predicted_country2, _ = get_country(city1, country1, city2, word_embeddings)
# if the predicted country2 is the same as the actual country2...
if predicted_country2 == country2:
# increment the number of correct by 1
num_correct += 1
# get the number of rows in the data dataframe (length of dataframe)
m = len(data)
# calculate the accuracy by dividing the number correct by m
accuracy = num_correct / m
### END CODE HERE ###
return accuracy
###Output
_____no_output_____
###Markdown
**NOTE: The cell below takes about 30 SECONDS to run.**
###Code
accuracy = get_accuracy(word_embeddings, data)
print(f"Accuracy is {accuracy:.2f}")
###Output
Accuracy is 0.92
###Markdown
**Expected Output:** $\approx$ 0.92

3.0 Plotting the vectors using PCA

Now you will explore the distance between word vectors after reducing their dimension. The technique we will employ is known as [*principal component analysis* (PCA)](https://en.wikipedia.org/wiki/Principal_component_analysis). As we saw, we are working in a 300-dimensional space in this case. Although we can work with such vectors computationally, it is impossible to visualize results in such high-dimensional spaces.

You can think of PCA as a method that projects our vectors into a space of reduced dimension, while keeping the maximum information about the original vectors in their reduced counterparts. In this case, by *maximum information* we mean that the Euclidean distance between the original vectors and their projected siblings is minimal. Hence vectors that were originally close in the embeddings dictionary will produce lower-dimensional vectors that are still close to each other.

You will see that when you map out the words, similar words will be clustered next to each other. For example, the words 'sad', 'happy', 'joyful' all describe emotion and are supposed to be near each other when plotted. The words 'oil', 'gas', and 'petroleum' all describe natural resources. Words like 'city', 'village', 'town' could be seen as synonyms and describe a similar thing.

Before plotting the words, you need to first be able to reduce each word vector with PCA into 2 dimensions and then plot it. The steps to compute PCA are as follows:

1. Mean normalize the data
2. Compute the covariance matrix of your data ($\Sigma$).
3. Compute the eigenvectors and the eigenvalues of your covariance matrix
4. Multiply the first K eigenvectors by your normalized data.

The transformation should look something like the following:

**Instructions**: You will write a program that takes in a data set where each row corresponds to a word vector.

* The word vectors are of dimension 300.
* Use PCA to change the 300 dimensions to `n_components` dimensions.
* The new matrix should be of dimension `m, n_components`.
* First de-mean the data.
* Get the eigenvalues using `linalg.eigh`. Use `eigh` rather than `eig` since R is symmetric. The performance gain when using `eigh` instead of `eig` is substantial.
* Sort the eigenvectors and eigenvalues by decreasing order of the eigenvalues.
* Get a subset of the eigenvectors (choose how many principal components you want to use using `n_components`).
* Return the new transformation of the data by multiplying the eigenvectors with the original data.

Hints:

* Use numpy.mean(a, axis=None): If you set axis = 0, you take the mean for each column. If you set axis = 1, you take the mean for each row. Remember that each row is a word vector, and the number of columns are the number of dimensions in a word vector.
* Use numpy.cov(m, rowvar=True). This calculates the covariance matrix. By default rowvar is True. From the documentation: "If rowvar is True (default), then each row represents a variable, with observations in the columns." In our case, each row is a word vector observation, and each column is a feature (variable).
* Use numpy.linalg.eigh(a, UPLO='L').
* numpy.argsort sorts the values in an array from smallest to largest, then returns the indices from this sort.
* In order to reverse the order of a list, you can use: x[::-1].
* To apply the sorted indices to eigenvalues, you can use this format x[indices_sorted]. When applying the sorted indices to eigenvectors, note that each column represents an eigenvector. In order to preserve the rows but sort on the columns, you can use this format x[:,indices_sorted].
* To transform the data using a subset of the most relevant principal components, take the matrix multiplication of the eigenvectors with the original data. The data is of shape (n_observations, n_features). The subset of eigenvectors are in a matrix of shape (n_features, n_components). To multiply these together, take the transposes of both the eigenvectors (n_components, n_features) and the data (n_features, n_observations). The product of these two has dimensions (n_components, n_observations). Take its transpose to get the shape (n_observations, n_components).
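
If scikit-learn happens to be available in your environment (this is an optional sanity check, not part of the graded code), you can cross-check your `compute_pca` implementation against `sklearn.decomposition.PCA`. Keep in mind that the sign of each principal component is arbitrary, so compare absolute values:

```python
import numpy as np
from sklearn.decomposition import PCA

X_check = np.random.rand(10, 300)                      # 10 fake "word vectors"
X_sklearn = PCA(n_components=2).fit_transform(X_check)
print(X_sklearn.shape)                                 # (10, 2)

# Once compute_pca below is implemented, the two results should match up to sign:
# X_mine = compute_pca(X_check, n_components=2)
# print(np.allclose(np.abs(X_mine), np.abs(X_sklearn)))
```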
###Code
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def compute_pca(X, n_components=2):
"""
Input:
X: of dimension (m,n) where each row corresponds to a word vector
n_components: Number of components you want to keep.
Output:
X_reduced: data transformed in 2 dims/columns + regenerated original data
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# mean center the data
X_demeaned = X - np.mean(X, axis=0)
# calculate the covariance matrix
covariance_matrix = np.cov(X_demeaned.T, rowvar=True)
# calculate eigenvectors & eigenvalues of the covariance matrix
eigen_vals, eigen_vecs = np.linalg.eigh(covariance_matrix, UPLO='L')
# sort eigenvalue in increasing order (get the indices from the sort)
idx_sorted = np.argsort(eigen_vals)
# reverse the order so that it's from highest to lowest.
idx_sorted_decreasing = idx_sorted[::-1]
# sort the eigen values by idx_sorted_decreasing
eigen_vals_sorted = eigen_vals[idx_sorted_decreasing]
# sort eigenvectors using the idx_sorted_decreasing indices
eigen_vecs_sorted = eigen_vecs[:, idx_sorted_decreasing]
# select the first n eigenvectors (n is desired dimension
# of rescaled data array, or dims_rescaled_data)
eigen_vecs_subset = eigen_vecs_sorted[:, :n_components]
# transform the data by multiplying the transpose of the eigenvectors
# with the transpose of the de-meaned data
# Then take the transpose of that product.
X_reduced = np.dot(eigen_vecs_subset.T, X_demeaned.T).T
### END CODE HERE ###
return X_reduced
# Testing your function
np.random.seed(1)
X = np.random.rand(3, 10)
X_reduced = compute_pca(X, n_components=2)
print("Your original matrix was " + str(X.shape) + " and it became:")
print(X_reduced)
###Output
Your original matrix was (3, 10) and it became:
[[ 0.43437323 0.49820384]
[ 0.42077249 -0.50351448]
[-0.85514571 0.00531064]]
###Markdown
**Expected Output:**

Your original matrix was: (3, 10) and it became:

0.43437323  0.49820384
0.42077249  -0.50351448
-0.85514571  0.00531064

Now you will use your pca function to plot a few words we have chosen for you. You will see that similar words tend to be clustered near each other. Sometimes, even antonyms tend to be clustered near each other. Antonyms describe the same thing but just tend to be on the other end of the scale. They are usually found in the same location of a sentence, have the same parts of speech, and thus when learning the word vectors, you end up getting similar weights. In the next week we will go over how you learn them, but for now let's just enjoy using them.

**Instructions:** Run the cell below.
###Code
words = ['oil', 'gas', 'happy', 'sad', 'city', 'town',
'village', 'country', 'continent', 'petroleum', 'joyful']
# given a list of words and the embeddings, it returns a matrix with all the embeddings
X = get_vectors(word_embeddings, words)
print('You have 11 words each of 300 dimensions thus X.shape is:', X.shape)
# We have done the plotting for you. Just run this cell.
result = compute_pca(X, 2)
plt.scatter(result[:, 0], result[:, 1])
for i, word in enumerate(words):
plt.annotate(word, xy=(result[i, 0] - 0.05, result[i, 1] + 0.1))
plt.show()
###Output
_____no_output_____ |
notebooks/HeatmapDemo.ipynb | ###Markdown
Listing 1. Visualizing the Heatmap of a large data table with ProgressiVis
###Code
from progressivis import Scheduler
from progressivis.io import CSVLoader
from progressivis.stats import Histogram2D, Min, Max
from progressivis.datasets import get_dataset
from progressivis.vis import Heatmap
s = Scheduler.default = Scheduler()
URLS = [f"https://s3.amazonaws.com/nyc-tlc/trip+data/yellow_tripdata_2015-0{n}.csv" for n in range(1,7)]
csv_module = CSVLoader(URLS, index_col=False, skipinitialspace=True,
usecols=['pickup_longitude', 'pickup_latitude']) # load many compressed CSV files
min_module = Min() # computes the min value of each column
min_module.input.table = csv_module.output.result
max_module = Max() # computes the max value of each column
max_module.input.table = csv_module.output.result
histogram2d = Histogram2D('pickup_longitude', # compute a 2d histogram
'pickup_latitude',
xbins=256, ybins=256)
histogram2d.input.table = csv_module.output.result
histogram2d.input.min = min_module.output.result
histogram2d.input.max = max_module.output.result
heatmap=Heatmap() # compute the Heatmap
heatmap.input.array = histogram2d.output.result
###Output
_____no_output_____
###Markdown
**NB:** the results will appear below after running all cells:
###Code
import ipywidgets as ipw
from IPython.display import display
wg = None
async def _after_run(m, run_number):
    # Called after each run of the Heatmap module: fetch the latest image and show it
    global wg
    img = m.get_image_bin()
    if img is None:
        return
    if wg is None:
        # First image: create the widget and display it once
        wg = ipw.Image(value=img, width=512, height=512)
        display(wg)
    else:
        # Later images: update the existing widget in place
        wg.value = img
heatmap.after_run_proc = _after_run
await s.start()
###Output
_____no_output_____ |
cpu-optimized-transformer/notebooks/optimize_transformers_in_production_for_cpu.ipynb | ###Markdown
Optimize transformers in production on CPU 🛠️ 📝 Note: This notebook covers everything you need to know about optimizing a particular transformer model for production on CPU. The knowledge used here comes from the HuggingFace documentation, the ONNX documentation, and chapter 8, "Making Transformers Efficient in Production", of the book "Natural Language Processing with Transformers". Used materials:- Model used in this notebook: [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)- Dataset for benchmarking: [SST2 GLUE dataset](https://huggingface.co/datasets/glue) 🤗 HuggingFace provides a wide range of models and datasets that can improve a lot of applications. Be sure to check their website: https://huggingface.co/.
###Code
import seaborn as sns
from shutil import rmtree
from datasets import load_dataset, load_metric
import time
import torch.nn as nn
import torch
from transformers.convert_graph_to_onnx import convert
from torch.quantization import quantize_dynamic as torch_quantize_dynamic
import os
from psutil import cpu_count
from tqdm import tqdm
from pathlib import Path
import numpy as np
from transformers import (
AutoModelForSequenceClassification,
AutoTokenizer,
pipeline
)
from onnxruntime import (GraphOptimizationLevel, InferenceSession, SessionOptions)
from onnxruntime.quantization import quantize_dynamic as onnx_quantize_dynamic, QuantType
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Defining some standard variables
###Code
task = "sentiment-analysis"
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
padding = "max_length"
labels_mapping = {"negative": 0, "positive": 1 }
accuracy_score = load_metric('accuracy')
tokenizer = AutoTokenizer.from_pretrained(model_name)
sentiment_dataset = load_dataset("glue", "sst2")
columns = ["name", "average_latency", "std_latency", "accuracy", "size"]
benchmark_results_df = pd.DataFrame(columns=columns)
###Output
Reusing dataset glue (C:\Users\Thomas\.cache\huggingface\datasets\glue\sst2\1.0.0\dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|██████████| 3/3 [00:00<00:00, 332.90it/s]
###Markdown
Utility functions
###Code
def label2id(label):
return labels_mapping.get(label, None)
def create_model_for_provider(model_path, provider="CPUExecutionProvider"):
    # Create an ONNX Runtime inference session with full graph optimizations,
    # a single intra-op thread, and the requested execution provider.
    options = SessionOptions()
    options.intra_op_num_threads = 1
    options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
    session = InferenceSession(str(model_path), options, providers=[provider])
    session.disable_fallback()
    return session
class Benchmarker:
def __init__(self, name, pipeline, dataset=None) -> None:
self.name = name
self.pipeline = pipeline
self.dataset = dataset
    def measure_latency(self, input_data) -> dict:
        latencies = list()
        # Warm-up runs (not timed) so that lazy initialisation and caching do not skew the results
        for _ in range(100):
            self.pipeline(input_data)
        # Timed runs, collected in milliseconds
        for _ in range(1000):
            start_time = time.perf_counter()
            self.pipeline(input_data)
            end_time = time.perf_counter()
            latencies.append((end_time - start_time)*1000)
        latencies = np.array(latencies)
        return {"average_latency": np.mean(latencies), "std_latency": np.std(latencies)}
    def compute_accuracy(self, dataset=None) -> float:
        if dataset is None:
            dataset = self.dataset
        predictions, labels = [], []
        for sample in tqdm(dataset):
            prediction = self.pipeline(sample["sentence"])[0]["label"]
            predictions.append(label2id(prediction.lower()))
            labels.append(sample["label"])
        return accuracy_score.compute(predictions=predictions, references=labels).get("accuracy")
def compute_size(self):
state_dict = self.pipeline.model.state_dict()
tmp_path = Path("model.pt")
torch.save(state_dict, tmp_path)
size_mb = Path(tmp_path).stat().st_size / (1024 * 1024)
tmp_path.unlink()
return size_mb
def run_full_benchmark(self, input_data, dataset=None):
result = {"name": self.name}
result.update(self.measure_latency(input_data))
result["accuracy"] = self.compute_accuracy(dataset)
result["size"] = self.compute_size()
return result
def print_results(self, benchmark_report):
print(f"BENCHMARK REPORT".center(40, "-"))
print(f"Name {benchmark_report['name']}")
print(f"Latency: {benchmark_report['average_latency']:.2f} ms")
print(f"Accuracy on dataset: {benchmark_report['accuracy'] * 100:.2f}%")
print(f"Size: {benchmark_report['size']:.2f} MB")
print(f"".center(40, "-"))
###Output
_____no_output_____
###Markdown
Baseline model
###Code
classifier = pipeline(task=task, model=model_name)
benchmarker = Benchmarker(f"baseline-torch", classifier, sentiment_dataset["validation"])
benchmark_report = benchmarker.run_full_benchmark("I like you!")
benchmarker.print_results(benchmark_report)
benchmark_results_df = benchmark_results_df.append(benchmark_report, ignore_index=True)
###Output
100%|██████████| 872/872 [00:17<00:00, 49.28it/s]
###Markdown
Baseline model (quantization)
###Code
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = (AutoModelForSequenceClassification.from_pretrained(model_name).to("cpu"))
model_quantized = torch_quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
classifier = pipeline(task=task, model=model_quantized, tokenizer=tokenizer)
benchmarker = Benchmarker(f"baseline-torch-quant", classifier, sentiment_dataset["validation"])
benchmark_report = benchmarker.run_full_benchmark("I like you!")
benchmarker.print_results(benchmark_report)
benchmark_results_df = benchmark_results_df.append(benchmark_report, ignore_index=True)
###Output
100%|██████████| 872/872 [00:11<00:00, 74.88it/s]
###Markdown
Baseline model (ONNX)
###Code
os.environ["OMP_NUM_THREADS"] = f"{cpu_count()}"
os.environ["OMP_WAIT_POLICY"] = "ACTIVE"
onnx_convert_framework = "pt"
onnx_convert_opset_version = 13
onnx_save_path = "onnx/"
onnx_model_path = Path(f"{onnx_save_path}baseline-ort.onnx")
if os.path.exists(onnx_save_path):
rmtree(onnx_save_path)
convert(framework=onnx_convert_framework, model=model_name, tokenizer=tokenizer, output=onnx_model_path, opset=onnx_convert_opset_version, pipeline_name=task)
from scipy.special import softmax
class OnnxPipeline:
def __init__(self, model, tokenizer):
self.model = model
self.tokenizer = tokenizer
def __call__(self, query):
model_inputs = self.tokenizer(query, return_tensors="pt")
inputs_onnx = {k: v.cpu().detach().numpy()
for k, v in model_inputs.items()}
logits = self.model.run(None, inputs_onnx)[0][0, :]
probs = softmax(logits)
pred_idx = np.argmax(probs).item()
return [{"label": pred_idx, "score": probs[pred_idx]}]
class OnnxBenchmarker(Benchmarker):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def compute_size(self):
size_mb = Path(f"onnx/{self.name}.onnx").stat().st_size / (1024 * 1024)
return size_mb
    def compute_accuracy(self, dataset=None):
        """This overrides the Benchmarker.compute_accuracy() method"""
        if dataset is None:
            dataset = self.dataset
        predictions, labels = [], []
        for sample in tqdm(dataset):
            prediction = self.pipeline(sample["sentence"])[0]["label"]
            predictions.append(prediction)
            labels.append(sample["label"])
        return accuracy_score.compute(predictions=predictions, references=labels).get("accuracy")
onnx_model = create_model_for_provider(onnx_model_path)
classifier = OnnxPipeline(onnx_model, tokenizer)
benchmarker = OnnxBenchmarker(f"baseline-ort", classifier, sentiment_dataset["validation"])
benchmark_report = benchmarker.run_full_benchmark("I like you!")
benchmarker.print_results(benchmark_report)
benchmark_results_df = benchmark_results_df.append(benchmark_report, ignore_index=True)
###Output
100%|██████████| 872/872 [00:19<00:00, 45.81it/s]
###Markdown
Baseline model (ONNX + quantization)
###Code
model_input = onnx_model_path
model_output = f"{onnx_save_path}baseline-ort-quant.onnx"
onnx_quantize_dynamic(model_input, model_output, weight_type=QuantType.QInt8)
onnx_quantized_model = create_model_for_provider(model_output)
classifier = OnnxPipeline(onnx_quantized_model, tokenizer)
benchmarker = OnnxBenchmarker(f"baseline-ort-quant", classifier, sentiment_dataset["validation"])
benchmark_report = benchmarker.run_full_benchmark("I like you!")
benchmarker.print_results(benchmark_report)
benchmark_results_df = benchmark_results_df.append(benchmark_report, ignore_index=True)
###Output
100%|██████████| 872/872 [00:09<00:00, 88.93it/s]
###Markdown
Wrap-up
###Code
graph = sns.barplot(x="average_latency", y="name", data=benchmark_results_df, order=benchmark_results_df.sort_values('average_latency', ascending=False)["name"], orient="h")
graph.set_title("Comparison of the average latency of the models")
graph.set_ylabel("Model")
graph.set_xlabel("Average latency in ms")
plt.show()
graph = sns.barplot(x="size", y="name", data=benchmark_results_df, order=benchmark_results_df.sort_values('size', ascending=False)["name"], orient="h")
graph.set_title("Comparison of the size of the models")
graph.set_ylabel("Model")
graph.set_xlabel("Model size in MB")
plt.show()
benchmark_results_df
###Output
_____no_output_____ |
DCGAN/Flowers.ipynb | ###Markdown
Setup
###Code
%matplotlib qt
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_gan as tfgan
import numpy as np
import os, sys
from tqdm.notebook import tqdm
from pathlib import Path
sys.path.append( os.path.abspath('..') )
import utils
Path('Flowers').mkdir(exist_ok=True)
os.chdir('Flowers')
###Output
_____no_output_____
###Markdown
Loading the preprocessed Flowers dataset. Make sure to execute the code found at `Other/Preprocessing.ipynb` beforehand in order to create this data.
###Code
dataset = tf.data.Dataset.from_tensor_slices(np.load(os.path.join('..', '..', 'flowers.npy')))
dataset = dataset.map(lambda img: (tf.cast(img, tf.float32) - 127.5) / 127.5)
NUM_IMAGES = int(dataset.cardinality())
###Output
_____no_output_____
###Markdown
1 Models 1.1 Architecture In these experiments, the use of Batch Normalization has been shown to produce artifacts.
###Code
def generator_model_transp_conv(latent_dims):
return tf.keras.Sequential([
tf.keras.layers.Dense(6*6*512, input_shape=(latent_dims,)),
tf.keras.layers.LeakyReLU(0.2),
# tf.keras.layers.BatchNormalization(),
tf.keras.layers.Reshape((6, 6, 512)),
# Output Shape: 6x6x512
tf.keras.layers.Conv2DTranspose(256, kernel_size=2, strides=2, padding='same'),
tf.keras.layers.LeakyReLU(0.2),
# tf.keras.layers.BatchNormalization(),
# Output Shape: 12x12x256
tf.keras.layers.Conv2DTranspose(128, kernel_size=2, strides=2, padding='same'),
tf.keras.layers.LeakyReLU(0.2),
# tf.keras.layers.BatchNormalization(),
# Output Shape: 24x24x128
tf.keras.layers.Conv2DTranspose(64, kernel_size=2, strides=2, padding='same'),
tf.keras.layers.LeakyReLU(0.2),
# tf.keras.layers.BatchNormalization(),
# Output Shape: 48x48x64
tf.keras.layers.Conv2D(3, kernel_size=1, strides=1, padding='same', activation='tanh')
# Output Shape: 48x48x3
])
def generator_model_upsample(latent_dims, interpolation):
return tf.keras.Sequential([
tf.keras.layers.Dense(6*6*512, input_shape=(latent_dims,)),
tf.keras.layers.LeakyReLU(0.2),
# tf.keras.layers.BatchNormalization(),
tf.keras.layers.Reshape((6, 6, 512)),
# Output Shape: 6x6x512
tf.keras.layers.UpSampling2D(size=2, interpolation=interpolation),
tf.keras.layers.Conv2D(256, kernel_size=3, strides=1, padding='same'),
tf.keras.layers.LeakyReLU(0.2),
# tf.keras.layers.BatchNormalization(),
# Output Shape: 12x12x256
tf.keras.layers.UpSampling2D(size=2, interpolation=interpolation),
tf.keras.layers.Conv2D(128, kernel_size=3, strides=1, padding='same'),
tf.keras.layers.LeakyReLU(0.2),
# tf.keras.layers.BatchNormalization(),
# Output Shape: 24x24x128
tf.keras.layers.UpSampling2D(size=2, interpolation=interpolation),
tf.keras.layers.Conv2D(64, kernel_size=3, strides=1, padding='same'),
tf.keras.layers.LeakyReLU(0.2),
# tf.keras.layers.BatchNormalization(),
# Output Shape: 48x48x64
tf.keras.layers.Conv2D(3, kernel_size=1, strides=1, padding='same', activation='tanh')
# Output Shape: 48x48x3
])
def discriminator_model():
return tf.keras.Sequential([
tf.keras.layers.Conv2D(64, kernel_size=1, strides=2, padding='same', input_shape=(48,48,3)),
tf.keras.layers.LeakyReLU(0.2),
tf.keras.layers.Dropout(0.3),
# Output Shape: 48x48x64
tf.keras.layers.Conv2D(128, kernel_size=3, strides=2, padding='same'),
tf.keras.layers.LeakyReLU(0.2),
tf.keras.layers.Dropout(0.3),
# Output Shape: 24x24x128
tf.keras.layers.Conv2D(256, kernel_size=3, strides=2, padding='same'),
tf.keras.layers.LeakyReLU(0.2),
tf.keras.layers.Dropout(0.3),
# Output Shape: 12x12x256
tf.keras.layers.Conv2D(512, kernel_size=3, strides=2, padding='same'),
tf.keras.layers.LeakyReLU(0.2),
tf.keras.layers.Dropout(0.3),
# Output Shape: 6x6x512
tf.keras.layers.Conv2D(512, kernel_size=6, strides=1, padding='same'),
tf.keras.layers.LeakyReLU(0.2),
tf.keras.layers.Dropout(0.3),
# Output Shape: 1x1x512
tf.keras.layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
1.2 Losses The binary cross entropy (BCE) between $y$ and $\hat{y}$ is calculated as:$$ \mathrm{BCE}(y, \hat{y}) = - y \log\left(\hat{y}\right) - (1-y) \log\left(1 - \hat{y}\right)$$
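
As a quick numerical illustration (not needed for training, and assuming NumPy and TensorFlow are available as imported above): since `BinaryCrossentropy(from_logits=True)` first passes the raw discriminator output through a sigmoid, the formula above can be checked by hand on a toy logit.

```python
import numpy as np
import tensorflow as tf

y_true = np.array([[1.0]])   # label for a "real" image
logit = np.array([[0.3]])    # a toy raw discriminator output (a logit)

keras_bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
loss_keras = keras_bce(y_true, logit).numpy()

# By hand: squash the logit with a sigmoid, then apply the BCE formula above
y_hat = 1.0 / (1.0 + np.exp(-logit))
loss_manual = (-y_true * np.log(y_hat) - (1 - y_true) * np.log(1 - y_hat)).item()

print(loss_keras, loss_manual)  # the two values agree up to floating-point error
```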
###Code
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
###Output
_____no_output_____
###Markdown
The generator tries to maximize the chance of the discriminator being wrong. This is equivalent to trying to minimize the following loss function:$$ J^{(G)} = -\log\bigl(D\bigl(G(z)\bigr)\bigr)$$
###Code
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
###Output
_____no_output_____
###Markdown
The discriminator tries to correctly classify real data as real and fake data as fake. This is equivalent to minimizing the following loss function:$$ J^{(D)} = -\log\bigr(D(x)\bigl) - \log\bigl(1 - D\bigl(G(z)\bigr)\bigr)$$Here we scale down the loss by a factor of $\;0.5$ and apply a one sided label smoothing of $\:0.9$
###Code
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(0.9*tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
return 0.5 * (real_loss + fake_loss)
###Output
_____no_output_____
###Markdown
2 Training 2.1 Main functions
###Code
def discriminator_train_step(generator, discriminator, images, latent_dims):
noise = tf.random.normal([images.shape[0], latent_dims])
with tf.GradientTape() as disc_tape:
generated_imgs = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_imgs, training=True)
loss_D = discriminator_loss(real_output, fake_output)
grads_D = disc_tape.gradient(loss_D, discriminator.trainable_variables)
discriminator.optimizer.apply_gradients(zip(grads_D, discriminator.trainable_variables))
def generator_train_step(generator, discriminator, batch_size, latent_dims):
noise = tf.random.normal([batch_size, latent_dims])
with tf.GradientTape() as gen_tape:
generated_imgs = generator(noise, training=True)
fake_output = discriminator(generated_imgs, training=True)
loss_G = generator_loss(fake_output)
grads_G = gen_tape.gradient(loss_G, generator.trainable_variables)
generator.optimizer.apply_gradients(zip(grads_G, generator.trainable_variables))
def train(generator, discriminator, dataset, epochs, batch_size, callbacks=None):
latent_dims = generator.input_shape[1]
num_batches = int(1 + (NUM_IMAGES - 1) // batch_size)
generator_step = tf.function(generator_train_step)
discriminator_step = tf.function(discriminator_train_step)
for epoch in tqdm(range(epochs)):
for c in callbacks:
c.on_epoch_begin(epoch=epoch + 1, generator=generator, discriminator=discriminator)
for batch in tqdm(dataset, leave=False, total=num_batches):
discriminator_step(generator, discriminator, batch, latent_dims)
generator_step(generator, discriminator, batch_size, latent_dims)
for c in callbacks:
c.on_epoch_end(epoch=epoch + 1, generator=generator, discriminator=discriminator)
###Output
_____no_output_____
###Markdown
2.2 Hyperparameter Testing These were the hyperparameters tested for the final document. Training all of the configurations in one run may take a long time; consider commenting out some options to run the tests individually.
###Code
BATCH_SIZE = 24
LATENT_DIMS = 128
hparams_list = [
{'upsample': 'TrpConv', 'epochs': 20},
{'upsample': 'bilinear', 'epochs': 20},
{'upsample': 'nearest', 'epochs': 100}
]
for hparams in hparams_list:
dirname = '{}'.format(hparams['upsample'].upper())
Path(dirname).mkdir(exist_ok=True)
## Models
    if hparams['upsample'] == 'TrpConv':
generator = generator_model_transp_conv(LATENT_DIMS)
else:
generator = generator_model_upsample(LATENT_DIMS, hparams['upsample'])
generator.optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.0)
discriminator = discriminator_model()
discriminator.optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.0)
## Callbacks
timer = utils.callback.TimerCallback()
save_samples = utils.callback.SaveSamplesCallback(
path_format=os.path.join(dirname, 'epoch-{}'),
inputs=tf.random.normal((8*8, LATENT_DIMS)),
n_cols=8,
savefig_kwargs={'bbox_inches': 'tight', 'pad_inches': 0, 'dpi': 256},
grid_params={'border':2, 'pad':2, 'pad_value':0.0},
transform_samples=lambda samples: (1 + samples) * 0.5
)
## Train and save results
train(
generator,
discriminator,
dataset=dataset.shuffle(NUM_IMAGES).batch(BATCH_SIZE),
epochs=hparams['epochs'],
batch_size=BATCH_SIZE,
callbacks=[timer, save_samples]
)
generator.save (os.path.join(dirname, 'generator.h5' ), overwrite=True, save_format='h5')
discriminator.save(os.path.join(dirname, 'discriminator.h5'), overwrite=True, save_format='h5')
###Output
_____no_output_____ |
2-coincidence-detection.ipynb | ###Markdown
Coincidence detection and sound localisation

This notebook shows some Python + Brian code to simulate spiking neural networks solving a sound localisation task using coincidence detection. [Brian](https://briansimulator.org) is a Python-based spiking neural network simulator package. It is simple to install and has extensive documentation. I recommend it for this course and more generally. It is entirely coincidental that I'm one of the authors.

The model

In this notebook, which leads in to the exercise for the first half of the tutorial, we'll construct a highly simplified model of sound localisation carried out by coincidence detection and delay lines. This model was proposed by [Jeffress (1948)]() and is explained in more detail [here](http://www.scholarpedia.org/article/Jeffress_model). The basic idea is that you have an incoming auditory signal arriving from some angle $\theta$, so it arrives at one ear earlier than the other. We call the difference in arrival time the *interaural time difference* or ITD. In this notebook, the signal will be a sine wave at frequency $f$ and so this time difference is ambiguous and becomes an *interaural phase difference* (IPD). The two are related by $\mathrm{IPD}=2\pi f\cdot\mathrm{ITD}$.

In Jeffress' model the brain tries to infer the ITD by compensating with multiple neural delay lines. Each circle is a coincidence detector neuron, and the signal travels along the lines at a fixed speed. So, the signal from the left ear reaches the leftmost coincidence detector neuron first, and the rightmost neuron last. The signal from the right ear is the opposite, it reaches the rightmost neuron first and the leftmost neuron last. For each neuron there is a *best ITD* where the neural delays exactly compensate the acoustic delays (ITD). Since coincidence detector neurons fire more frequently if they are receiving inputs that are more similar, you can estimate the ITD of the sound by which neuron is firing at the highest rate (you estimate that the ITD of the sound is the best ITD of that neuron).

Coding it up

In the rest of this notebook, we'll turn this idea into code. First of all, we import the plotting libraries and the Brian simulator package. Note that I'm doing some funky stuff here to make it work locally and also in Google Colab, etc. You can ignore the ``prefs.codegen...`` line for the moment. It makes it run faster for small models but slower for fast models, so it's handy for demos.
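
For concreteness, the relation $\mathrm{IPD}=2\pi f\cdot\mathrm{ITD}$ is just arithmetic; the example values below are made up purely to illustrate the conversion:

```python
import numpy as np

f = 50.0      # sound frequency in Hz (example value)
itd = 5e-3    # interaural time difference in seconds (example value)

ipd = 2 * np.pi * f * itd     # interaural phase difference in radians
print(ipd, np.degrees(ipd))   # ~1.571 rad, i.e. ~90 degrees
```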
###Code
try:
import ipywidgets as widgets
except ImportError:
widgets = None
try:
import brian2
except ImportError:
!pip install brian2
%matplotlib inline
from brian2 import *
import matplotlib.gridspec as gridspec
prefs.codegen.target = 'numpy'
###Output
_____no_output_____
###Markdown
Input signal

Now we set up the input signal model. We'll have the two ears receive two sine waves with different phase delays: ear 0 will have no delay, and ear 1 will have a delay of ``ipd``. Then, we'll have the neurons generate spikes as a Poisson process with firing rate ``rate_max*0.5*(1+sin(theta))``. We model this in Brian by having a spike threshold condition ``rand()<rate*dt`` where ``rand()`` is a uniform random number in ``[0, 1]`` and ``dt`` is the simulation time step. We can take a look at what this model looks like by running the cell below.
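
The ``rand()<rate*dt`` condition is just inhomogeneous Poisson spike generation: in each time step of length ``dt`` a spike is emitted with probability ``rate*dt``. Here is a plain NumPy sketch of the same idea, independent of Brian and with made-up parameter values:

```python
import numpy as np

dt = 1e-3                                           # time step in seconds
t = np.arange(0, 1, dt)                             # 1 second of time
rate = 100 * 0.5 * (1 + np.sin(2 * np.pi * 3 * t))  # time-varying rate in Hz

spikes = np.random.rand(len(t)) < rate * dt         # one Bernoulli draw per step
print(spikes.sum(), "spikes in 1 s (expected about", int(rate.mean()), ")")
```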
###Code
def input_signal(rate_max_Hz=100, ipd_deg=90, f_Hz=3):
# We can't use values with units in the widgets, so we add the units first
rate_max = rate_max_Hz*Hz
ipd = (pi/180)*ipd_deg
f = f_Hz*Hz
# These are the equations governing the ear neurons. Take a look at the
# Brian documentation for an explanation, but the only thing you might
# find non-obvious hopefully is the ": 1" and ": Hz" which tells Brian
# what the units of the variable being defined are (1 means dimensionless).
# Also note that the variable "i" is defined in Brian to be the index of
# the neuron, so for neuron 0 it will be 0 and for neuron 1 it will be 1,
# allowing us to make the input signal different for the two ears.
eqs_ears = '''
theta = 2*pi*f*t + i*ipd : 1
rate = rate_max*0.5*(1+sin(theta)) : Hz
'''
# Create a group of 2 neurons with these equations, that fires a spike
# according to a Poisson process with the given time-varying rate. We
# use a dt of 1ms to speed up the simulation for interactivity, but later
# we'll use a better default of 0.1ms.
ears = NeuronGroup(2, eqs_ears, threshold='rand()<rate*dt', dt=1*ms)
# Record the spikes and values of the rate as we run the simulation
M_spike = SpikeMonitor(ears)
M_state = StateMonitor(ears, 'rate', record=True)
# Run the simulation for 1 second
run(1*second)
# Now plot the results. I won't explain in detail because it's mostly
# just fiddly matplotlib stuff to make it look nice.
trains = M_spike.spike_trains()
fig = figure(figsize=(4, 2), dpi=200)
gs = gridspec.GridSpec(2, 1, hspace=0, height_ratios=[1, .3])
ax = subplot(gs[0])
plot(M_state.t/ms, M_state.rate[0]/Hz, label='Left ear')
plot(M_state.t/ms, M_state.rate[1]/Hz, label='Right ear')
legend(loc='upper right')
gca().set_frame_on(False)
ylabel('Rate')
yticks([])
xticks([])
ylim(-10, 210)
subplot(gs[1], sharex=ax)
plot(trains[0]/ms, [0]*len(trains[0]), '|')
plot(trains[1]/ms, [1]*len(trains[1]), '|')
ylim(-1, 2)
gca().set_frame_on(False)
xlabel('Time')
ylabel('Spikes')
yticks([])
xticks([])
tight_layout()
if widgets is not None:
widgets.interact(input_signal,
rate_max_Hz=widgets.IntSlider(min=10, max=200, value=100, step=10, continuous_update=False),
ipd_deg=widgets.IntSlider(min=0, max=360, value=90, step=10, continuous_update=False),
f_Hz=widgets.FloatSlider(min=0, max=10, value=3, step=.1, continuous_update=False),
);
else:
input_signal()
###Output
_____no_output_____
###Markdown
Coincidence detectors

Now we're going to set up the coincidence detector neurons. We'll use $N$ neurons with best delays equally distributed between 0 and $\mathrm{ITD}_\mathrm{max}=1/f$. The coincidence detector neurons are standard LIF neurons like we've seen before, but we store a copy of their best IPD and best ITD. Next, we create synapses from the ear neurons to the coincidence detector neurons where the synaptic delay from ear 0 is 0, and from ear 1 is the best ITD of that neuron. We use a small time constant $\tau$ to get strong coincidence detection, and plot the results.
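
To see concretely what "best delays equally distributed between 0 and $1/f$" means, here is a NumPy-only evaluation of the same expressions that appear in the Brian equations below:

```python
import numpy as np

N_cd = 100
f = 50.0                                             # Hz
best_ipd = 2 * np.pi * np.arange(N_cd) / (N_cd - 1)  # 0 ... 2*pi radians
best_itd = best_ipd / (2 * np.pi * f)                # 0 ... 1/f = 20 ms
print(best_itd[0], best_itd[-1])                     # 0.0 and 0.02 seconds
```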
###Code
def localise(rate_max_Hz=400, ipd_deg=200, f_Hz=50, w=0.5, tau_ms=1, N_cd=100, duration=1*second):
rate_max = rate_max_Hz*Hz
ipd = (pi/180)*ipd_deg
f = f_Hz*Hz
tau = tau_ms*ms
itd = ipd/(2*pi*f)
# One difference from before is that we handle edge effects here, by making sure the signal
# is the same on both sides by padding with rate=0.5*rate_max at the beginning/end. The
# code for this is a bit clever/tricky and not essential to understand immediately.
eqs_ears = '''
theta = 2*pi*f*t + i*ipd : 1
signal_is_on = int(t<duration-itd)*int(i==0)+int(t>itd)*int(i==1) : 1
rate = rate_max*0.5*(1+signal_is_on*sin(theta)) : Hz
'''
ears = NeuronGroup(2, eqs_ears, threshold='rand()<rate*dt')
# Standard LIF neuron but with added best IPD and best ITD that depends on the neuron
# index (equally distributed in the possible range).
eqs_cd = '''
dv/dt = -v/tau : 1
best_ipd = 2*pi*i/(N_cd-1) : 1
best_itd = best_ipd/(2*pi*f) : second
'''
cd = NeuronGroup(N_cd, eqs_cd, threshold='v>1', reset='v=0', method='exact')
# Synapses from the ears to the coincidence detector neurons. If a presynaptic neuron
# fires, the postsynaptic v value is increased by w.
S = Synapses(ears, cd, on_pre='v += w')
# All presynaptic neurons connected to all postsynaptic neurons
S.connect(p=1)
# Delays are 0 by default, so we set the delays for where the presynaptic neuron has
# index 1 (the right ear) to be the best_itd of the post-synaptic neuron.
S.delay['i==1'] = 'best_itd'
M = SpikeMonitor(cd)
run(duration)
# We take as our estimate the mean best IPD of all neurons with
# the maximum spike count
i = max(M.count)
I = M.count==i
ipd_est = mean(cd.best_ipd[I])
figure(figsize=(6, 4), dpi=100)
plot(cd.best_ipd[I]*180/pi, M.count[I], 'or')
plot(cd.best_ipd*(180/pi), M.count, '.k')
axvline(ipd_deg, ls='--', c='b', label='True IPD')
axvline(ipd_est*180/pi, ls='--', c='r', label='Estimated IPD')
xlabel('IPD (deg)')
ylabel('Spike count')
legend(loc='lower right')
tight_layout()
if widgets is not None:
widgets.interact(localise,
rate_max_Hz=widgets.IntSlider(min=10, max=1000, value=400, step=10, continuous_update=False),
ipd_deg=widgets.IntSlider(min=0, max=360, value=90, step=10, continuous_update=False),
f_Hz=widgets.IntSlider(min=0, max=200, value=50, step=5, continuous_update=False),
w=widgets.FloatSlider(min=.1, max=1, value=.5, step=.1, continuous_update=False),
tau_ms=widgets.FloatSlider(min=.1, max=10, value=1, step=.1, continuous_update=False),
N_cd=widgets.IntSlider(min=10, max=1000, value=100, step=10, continuous_update=False),
duration=widgets.fixed(1*second),
);
else:
localise()
###Output
_____no_output_____
###Markdown
Evaluating performance

How well does this model perform? Let's try it out. We'll run it once for each IPD from 0 to 360 degrees in steps of 10 degrees, plot the estimated IPDs and errors as a function of IPD, and compute the mean error. We'll separate out the functions a bit. The first function computes the input signal and returns the spike times. The second function estimates the IPD from it. This is partly for the exercise (below) but also just to make sure we're not accidentally using information about the true answer when computing our estimate.
###Code
rate_max = 400*Hz
f = 50*Hz
duration = 1*second
w = 0.5
tau = 1*ms
N_cd = 100
# This one generates an input signal and returns a pair (i, t) of
# arrays with i the spike indices (0 or 1 as there are 2 neurons)
# and t the corresponding spike times.
def generate_input_signal(ipd):
itd = ipd/(2*pi*f)
eqs_ears = '''
theta = 2*pi*f*t + i*ipd : 1
signal_is_on = int(t<duration-itd)*int(i==0)+int(t>itd)*int(i==1) : 1
rate = rate_max*0.5*(1+signal_is_on*sin(theta)) : Hz
'''
ears = NeuronGroup(2, eqs_ears, threshold='rand()<rate*dt')
M = SpikeMonitor(ears)
run(duration)
return M.i, M.t
# This one performs the localisation from before, using just those
# arrays returned by the previous function, which we convert into
# a group of neurons in Brian using SpikeGeneratorGroup
def localise_from_input(i, t):
ears = SpikeGeneratorGroup(2, i, t)
eqs_cd = '''
dv/dt = -v/tau : 1
best_ipd = 2*pi*i/(N_cd-1) : 1
best_itd = best_ipd/(2*pi*f) : second
'''
cd = NeuronGroup(N_cd, eqs_cd, threshold='v>1', reset='v=0', method='exact')
S = Synapses(ears, cd, on_pre='v += w')
S.connect(p=1)
S.delay['i==1'] = 'best_itd'
M = SpikeMonitor(cd)
run(duration)
i = max(M.count)
I = M.count==i
ipd_est = mean(cd.best_ipd[I])
return ipd_est
def generate_and_localise(ipd, localiser):
i, t = generate_input_signal(ipd)
return localiser(i, t)
def generate_results(localiser):
ipds = arange(0, 360, 10)*pi/180
ipds_est = array([generate_and_localise(ipd, localiser) for ipd in ipds])
return ipds, ipds_est
# This will take a minute or so to run.
ipds, ipds_est = generate_results(localise_from_input)
# Mean error should be calculated in a circular fashion
# Giving 359 degrees when the answer is 0 is 1 degree not 359
# So compute +-360 deg and take the minimum
def compute_errors(ipds, ipds_est):
ipds_est_circ = array([ipds_est, ipds_est+2*pi, ipds_est-2*pi])
abs_errors_circ = abs(ipds[newaxis, :]-ipds_est_circ)
abs_errors_deg = amin(abs_errors_circ, axis=0)*180/pi
return abs_errors_deg
def plot_results(ipds, ipds_est):
abs_errors_deg = compute_errors(ipds, ipds_est)
figure(figsize=(8, 4), dpi=100)
subplot(121)
plot(ipds*180/pi, ipds_est*180/pi, '.k')
plot([0, 360], [0, 360], '--g')
xlabel('True IPD (deg)')
ylabel('Estimated IPD (deg)')
subplot(122)
plot(ipds*180/pi, abs_errors_deg, '.k')
mean_abs_error_deg = mean(abs_errors_deg)
axhline(mean_abs_error_deg, ls='--', c='b', label=f'Mean error = {int(mean_abs_error_deg)}')
xlabel('True IPD (deg)')
ylabel('Absolute error (deg)')
legend(loc='best')
tight_layout();
plot_results(ipds, ipds_est)
###Output
_____no_output_____
###Markdown
Exercise

Can you do better than the network above? Limit yourself to only 100 neurons, and use the ``generate_input_signal`` function from above to generate the input data, but otherwise feel free to do whatever you like. Some ideas:

* Optimise the parameters ``tau``, ``w``
* Use a different neuron model
* Use a different method to estimate the IPD given the set of coincidence detector neuron counts

After you've had a go, you might be interested in our related paper [Goodman et al. (2013)](https://elifesciences.org/articles/01312).

One solution (will be split into a separate notebook when finalised)

In this solution, I simply smooth the spike counts with a Savitzky-Golay filter with a large window and use the smoothed spike counts instead of the originals. It slightly improves the error from around 20-30 degrees to around 10-20. You could imagine this could easily be implemented with a second layer of neurons with local connectivity.
###Code
from scipy.signal import savgol_filter
N_cd = 100
tau = 1*ms
w = 0.5
# This function is just the same as the given solution except that we smooth
# the spike counts before estimating
def localise_from_input_smoothed(i, t, true_ipd=None, window_length=51, do_plot=False):
ears = SpikeGeneratorGroup(2, i, t)
eqs_cd = '''
dv/dt = -v/tau : 1
best_ipd = 2*pi*i/N_cd : 1
best_itd = best_ipd/(2*pi*f) : second
'''
cd = NeuronGroup(N_cd, eqs_cd, threshold='v>1', reset='v=0', method='exact')
S = Synapses(ears, cd, on_pre='v += w')
S.connect(p=1)
S.delay['i==1'] = 'best_itd'
M = SpikeMonitor(cd)
run(duration)
# Smooth the spike counts.
bipd = cd.best_ipd[:]
count = M.count[:]
smoothed_count = savgol_filter(count, window_length, 2, mode='wrap')
ipd_est = bipd[argmax(smoothed_count)]
if do_plot:
figure(figsize=(6, 4), dpi=100)
plot(bipd*180/pi, count, '.k')
plot(bipd*180/pi, smoothed_count, '-g', label='Smoothed')
if true_ipd is not None:
axvline(true_ipd*(180/pi), ls='--', c='b', label='True IPD')
axvline(ipd_est*180/pi, ls='--', c='r', label='Estimated IPD')
xlim(-30, 390)
xlabel('IPD (deg)')
ylabel('Spike count')
legend(loc='lower right')
tight_layout()
return ipd_est
i, t = generate_input_signal(pi)
localise_from_input_smoothed(i, t, true_ipd=pi, do_plot=True);
# This will take a minute or so to run.
ipds, ipds_est = generate_results(localise_from_input_smoothed)
plot_results(ipds, ipds_est)
###Output
_____no_output_____
###Markdown
Coincidence detection and sound localisationThis notebook shows some Python + Brian code to simulate spiking neural networks solving a sound localisation task using coincidence detection.[Brian](https://briansimulator.org) is a Python-based spiking neural network simulator package. It is simple to install and has extensive documentation. I recommend it for this course and more generally. It is entirely coincidental that I'm one of the authors. The modelIn this notebook, which leads in to the exercise for the first half of the tutorial, we'll construct a highly simplified model of sound localisation carried out by coincidence detection and delay lines. This model was proposed by [Jeffress (1948)]() and is explained in more detail [here](http://www.scholarpedia.org/article/Jeffress_model).The basic idea is that you have an incoming auditory signal arriving from some angle $\theta$, so it arrives at one ear earlier than the other:We call the difference in arrival time the *interaural time difference* or ITD. In this notebook, the signal will be a sine wave at frequency $f$ and so this time difference is ambiguous and becomes an *interaural phase difference* (IPD). The two are related by $\mathrm{IPD}=2\pi f\cdot\mathrm{ITD}$.In Jeffress' model the brain tries to infer the ITD by compensating with multiple neural delay lines:Each circle is a coincidence detector neuron, and the signal travels along the lines at a fixed speed. So, the signal from the left ear reaches the leftmost coincidence detector neuron first, and the rightmost neuron last. The signal from the right ear is the opposite, it reaches the rightmost neuron first and the leftmost neuron last. For each neuron there is a *best ITD* where the neural delays exactly compensate the acoustic delays (ITD). Since coincidence detector neurons fire more frequently if they are receiving inputs that are more similar, you can estimate the ITD of the sound by which neuron is firing at the highest rate (you estimate that the ITD of the sound is the best ITD of that neuron). Coding it upIn the rest of this notebook, we'll turn this idea into code. First of all, we import the plotting libraries and the Brian simulator package. Note that I'm doing some funky stuff here to make it work locally and also in Google Colab, etc.You can ignore the ``prefs.codegen...`` line for the moment. It makes it run faster for small models but slower for fast models, so it's handy for demos.
###Code
try:
import ipywidgets as widgets
except ImportError:
widgets = None
try:
import brian2
except ImportError:
!pip install brian2
%matplotlib inline
from brian2 import *
import matplotlib.gridspec as gridspec
prefs.codegen.target = 'numpy'
###Output
_____no_output_____
###Markdown
Input signalNow we set up the input signal model. We'll have the two ears receive two sine waves with different phase delays, ear 0 will have no delay, and ear 1 will have a delay of ``ipd``. Then, we'll have the neurons generate spikes as a Poisson process with firing rate ``rate_max*0.5*(1+sin(theta))``. We model this in Brian by having a spike threshold condition ``rand()<rate*dt`` where ``rand()`` is a uniform random number in ``[0, 1]`` and ``dt`` is the simulation time step. We can take a look at what this model looks like by running the cell below.
###Code
def input_signal(rate_max_Hz=100, ipd_deg=90, f_Hz=3):
# We can't use values with units in the widgets, so we add the units first
rate_max = rate_max_Hz*Hz
ipd = (pi/180)*ipd_deg
f = f_Hz*Hz
# These are the equations governing the ear neurons. Take a look at the
# Brian documentation for an explanation, but the only thing you might
# find non-obvious hopefully is the ": 1" and ": Hz" which tells Brian
# what the units of the variable being defined are (1 means dimensionless).
# Also note that the variable "i" is defined in Brian to be the index of
# the neuron, so for neuron 0 it will be 0 and for neuron 1 it will be 1,
# allowing us to make the input signal different for the two ears.
eqs_ears = '''
theta = 2*pi*f*t + i*ipd : 1
rate = rate_max*0.5*(1+sin(theta)) : Hz
'''
# Create a group of 2 neurons with these equations, that fires a spike
# according to a Poisson process with the given time-varying rate. We
# use a dt of 1ms to speed up the simulation for interactivity, but later
# we'll use a better default of 0.1ms.
ears = NeuronGroup(2, eqs_ears, threshold='rand()<rate*dt', dt=1*ms)
# Record the spikes and values of the rate as we run the simulation
M_spike = SpikeMonitor(ears)
M_state = StateMonitor(ears, 'rate', record=True)
# Run the simulation for 1 second
run(1*second)
# Now plot the results. I won't explain in detail because it's mostly
# just fiddly matplotlib stuff to make it look nice.
trains = M_spike.spike_trains()
fig = figure(figsize=(4, 2), dpi=200)
gs = gridspec.GridSpec(2, 1, hspace=0, height_ratios=[1, .3])
ax = subplot(gs[0])
plot(M_state.t/ms, M_state.rate[0]/Hz, label='Left ear')
plot(M_state.t/ms, M_state.rate[1]/Hz, label='Right ear')
legend(loc='upper right')
gca().set_frame_on(False)
ylabel('Rate')
yticks([])
xticks([])
ylim(-10, 210)
subplot(gs[1], sharex=ax)
plot(trains[0]/ms, [0]*len(trains[0]), '|')
plot(trains[1]/ms, [1]*len(trains[1]), '|')
ylim(-1, 2)
gca().set_frame_on(False)
xlabel('Time')
ylabel('Spikes')
yticks([])
xticks([])
tight_layout()
if widgets is not None:
widgets.interact(input_signal,
rate_max_Hz=widgets.IntSlider(min=10, max=200, value=100, step=10, continuous_update=False),
ipd_deg=widgets.IntSlider(min=0, max=360, value=90, step=10, continuous_update=False),
f_Hz=widgets.FloatSlider(min=0, max=10, value=3, step=.1, continuous_update=False),
);
else:
input_signal()
###Output
_____no_output_____
###Markdown
Coincidence detectorsNow we're going to set up the coincidence detector neurons. We'll use $N$ neurons with best delays equally distributed between 0 and $\mathrm{ITD}_\mathrm{max}=1/f$. The coincidence detector neurons are standard LIF neurons like we've seen before, but we store a copy of their best IPD and best ITD. Next, we create synapses from the ear neurons to the coincidence detector neurons where the synaptic delay from ear 0 is 0, and from ear 1 is the best ITD of that neuron. We use a small time constant $\tau$ to get strong coincidence detection, and plot the results.
###Code
def localise(rate_max_Hz=400, ipd_deg=200, f_Hz=50, w=0.5, tau_ms=1, N_cd=100, duration=1*second):
rate_max = rate_max_Hz*Hz
ipd = (pi/180)*ipd_deg
f = f_Hz*Hz
tau = tau_ms*ms
itd = ipd/(2*pi*f)
# One difference from before is that we handle edge effects here, by making sure the signal
# is the same on both sides by padding with rate=0.5*rate_max at the beginning/end. The
# code for this is a bit clever/tricky and not essential to understand immediately.
eqs_ears = '''
theta = 2*pi*f*t + i*ipd : 1
signal_is_on = int(t<duration-itd)*int(i==0)+int(t>itd)*int(i==1) : 1
rate = rate_max*0.5*(1+signal_is_on*sin(theta)) : Hz
'''
ears = NeuronGroup(2, eqs_ears, threshold='rand()<rate*dt')
# Standard LIF neuron but with added best IPD and best ITD that depends on the neuron
# index (equally distributed in the possible range).
eqs_cd = '''
dv/dt = -v/tau : 1
best_ipd = 2*pi*i/(N_cd-1) : 1
best_itd = best_ipd/(2*pi*f) : second
'''
cd = NeuronGroup(N_cd, eqs_cd, threshold='v>1', reset='v=0', method='exact')
# Synapses from the ears to the coincidence detector neurons. If a presynaptic neuron
# fires, the postsynaptic v value is increased by w.
S = Synapses(ears, cd, on_pre='v += w')
# All presynaptic neurons connected to all postsynaptic neurons
S.connect(p=1)
# Delays are 0 by default, so we set the delays for where the presynaptic neuron has
# index 1 (the right ear) to be the best_itd of the post-synaptic neuron.
S.delay['i==1'] = 'best_itd'
M = SpikeMonitor(cd)
run(duration)
# We take as our estimate the mean best IPD of all neurons with
# the maximum spike count
i = max(M.count)
I = M.count==i
ipd_est = mean(cd.best_ipd[I])
figure(figsize=(6, 4), dpi=100)
plot(cd.best_ipd[I]*180/pi, M.count[I], 'or')
plot(cd.best_ipd*(180/pi), M.count, '.k')
axvline(ipd_deg, ls='--', c='b', label='True IPD')
axvline(ipd_est*180/pi, ls='--', c='r', label='Estimated IPD')
xlabel('IPD (deg)')
ylabel('Spike count')
legend(loc='lower right')
tight_layout()
if widgets is not None:
widgets.interact(localise,
rate_max_Hz=widgets.IntSlider(min=10, max=1000, value=400, step=10, continuous_update=False),
ipd_deg=widgets.IntSlider(min=0, max=360, value=90, step=10, continuous_update=False),
f_Hz=widgets.IntSlider(min=0, max=200, value=50, step=5, continuous_update=False),
w=widgets.FloatSlider(min=.1, max=1, value=.5, step=.1, continuous_update=False),
tau_ms=widgets.FloatSlider(min=.1, max=10, value=1, step=.1, continuous_update=False),
N_cd=widgets.IntSlider(min=10, max=1000, value=100, step=10, continuous_update=False),
duration=widgets.fixed(1*second),
);
else:
localise()
###Output
_____no_output_____
###Markdown
Evaluating performance. How well does this model perform? Let's try it out. We'll run it once for each IPD from 0 to 360 degrees in steps of 10 degrees, and plot the estimated IPDs and errors as a function of IPD, and compute the mean error. We'll separate out the functions a bit. The first function computes the input signal and returns the spike times. The second function estimates the IPD from it. This is partly for the exercise (below) but also just to make sure we're not accidentally using information about the true answer when computing our estimate.
###Code
rate_max = 400*Hz
f = 50*Hz
duration = 1*second
w = 0.5
tau = 1*ms
N_cd = 100
# This one generates an input signal and returns a pair (i, t) of
# arrays with i the spike indices (0 or 1 as there are 2 neurons)
# and t the corresponding spike times.
def generate_input_signal(ipd):
itd = ipd/(2*pi*f)
eqs_ears = '''
theta = 2*pi*f*t + i*ipd : 1
signal_is_on = int(t<duration-itd)*int(i==0)+int(t>itd)*int(i==1) : 1
rate = rate_max*0.5*(1+signal_is_on*sin(theta)) : Hz
'''
ears = NeuronGroup(2, eqs_ears, threshold='rand()<rate*dt')
M = SpikeMonitor(ears)
run(duration)
return M.i, M.t
# This one performs the localisation from before, using just those
# arrays returned by the previous function, which we convert into
# a group of neurons in Brian using SpikeGeneratorGroup
def localise_from_input(i, t):
ears = SpikeGeneratorGroup(2, i, t)
eqs_cd = '''
dv/dt = -v/tau : 1
best_ipd = 2*pi*i/(N_cd-1) : 1
best_itd = best_ipd/(2*pi*f) : second
'''
cd = NeuronGroup(N_cd, eqs_cd, threshold='v>1', reset='v=0', method='exact')
S = Synapses(ears, cd, on_pre='v += w')
S.connect(p=1)
S.delay['i==1'] = 'best_itd'
M = SpikeMonitor(cd)
run(duration)
i = max(M.count)
I = M.count==i
ipd_est = mean(cd.best_ipd[I])
return ipd_est
def generate_and_localise(ipd, localiser):
i, t = generate_input_signal(ipd)
return localiser(i, t)
def generate_results(localiser):
ipds = arange(0, 360, 10)*pi/180
ipds_est = array([generate_and_localise(ipd, localiser) for ipd in ipds])
return ipds, ipds_est
# This will take a minute or so to run.
ipds, ipds_est = generate_results(localise_from_input)
# Mean error should be calculated in a circular fashion
# Giving 359 degrees when the answer is 0 is 1 degree not 359
# So compute +-360 deg and take the minimum
def compute_errors(ipds, ipds_est):
ipds_est_circ = array([ipds_est, ipds_est+2*pi, ipds_est-2*pi])
abs_errors_circ = abs(ipds[newaxis, :]-ipds_est_circ)
abs_errors_deg = amin(abs_errors_circ, axis=0)*180/pi
return abs_errors_deg
def plot_results(ipds, ipds_est):
abs_errors_deg = compute_errors(ipds, ipds_est)
figure(figsize=(8, 4), dpi=100)
subplot(121)
plot(ipds*180/pi, ipds_est*180/pi, '.k')
plot([0, 360], [0, 360], '--g')
xlabel('True IPD (deg)')
ylabel('Estimated IPD (deg)')
subplot(122)
plot(ipds*180/pi, abs_errors_deg, '.k')
mean_abs_error_deg = mean(abs_errors_deg)
axhline(mean_abs_error_deg, ls='--', c='b', label=f'Mean error = {int(mean_abs_error_deg)}')
xlabel('True IPD (deg)')
ylabel('Absolute error (deg)')
legend(loc='best')
tight_layout();
plot_results(ipds, ipds_est)
###Output
_____no_output_____ |
PyMC Done.ipynb | ###Markdown
The first step in any data analysis is acquiring and munging the data. Our starting data set can be found here: http://jakecoltman.com in the pyData post. It is designed to be roughly similar to the output from DCM's path to conversion. Download the file and transform it into something with the columns: id, lifetime, age, male, event, search, brand, where lifetime is the total time that we observed someone not convert for and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted into ints. It is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
###Code
# imports needed for this notebook (assumed: the pm.* calls below use the PyMC2 API, i.e. `import pymc`)
import numpy as np
import pandas as pd
import pymc as pm
import matplotlib.pyplot as plt
from datetime import datetime

running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = output[1:]
for row in output:
if len(row) == 6:
row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]
output = output[1:-1]
def convert_to_days(dt):
day_diff = dt / np.timedelta64(1, 'D')
if day_diff == 0:
return 23.0
else:
return day_diff
df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"])
df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"])
df["lifetime"] = df["lifetime"].apply(convert_to_days)
df["male"] = df["male"].astype(int)
df["search"] = df["search"].astype(int)
df["brand"] = df["brand"].astype(int)
df["age"] = df["age"].astype(int)
df["event"] = df["event"].astype(int)
df = df.drop('advert_time', 1)
df = df.drop('conversion_time', 1)
df = df.set_index("id")
df = df.dropna(thresh=2)
df.median()
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
N = 2500
##Generate some random data
lifetime = pm.rweibull( 2, 5, size = N )
birth = pm.runiform(0, 10, N)
censor = ((birth + lifetime) >= 10)
lifetime_ = lifetime.copy()
lifetime_[censor] = 10 - birth[censor]
alpha = pm.Uniform('alpha', 0, 20)
beta = pm.Uniform('beta', 0, 20)
@pm.observed
def survival(value=lifetime_, alpha = alpha, beta = beta ):
    return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(50000, 30000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
###Output
_____no_output_____
###Markdown
Problems: 1 - Try to fit your data from section 1 2 - Use the results to plot the distribution of the median. Note that the median of a Weibull distribution is: $$\beta(\log 2)^{1/\alpha}$$
###Code
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000)
def weibull_median(alpha, beta):
    return beta * ((np.log(2)) ** ( 1 / alpha))
plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
###Output
_____no_output_____
###Markdown
Problems: 4 - Try adjusting the number of samples for burn-in and thinning 5 - Try adjusting the prior and see how it affects the estimate
###Code
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 3000, thin = 20)
pm.Matplot.plot(mcmc)
#Solution to Q5
## Adjusting the priors impacts the overall result
## If we give a looser, less informative prior then we end up with a broader, shorter distribution
## If we give much more informative priors, then we get a tighter, taller distribution
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
## Note the narrowing of the prior
alpha = pm.Normal("alpha", 1.7, 10000)
beta = pm.Normal("beta", 18.5, 10000)
####Uncomment this to see the result of looser priors
## Note this ends up pretty much the same as we're already very loose
#alpha = pm.Uniform("alpha", 0, 30)
#beta = pm.Uniform("beta", 0, 30)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 5000, thin = 20)
pm.Matplot.plot(mcmc)
#plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
###Output
[-----------------100%-----------------] 10000 of 10000 complete in 18.4 secPlotting beta
Plotting alpha
###Markdown
Problems: 7 - Try testing whether the median is greater than different values
###Code
medians = [weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))]
testing_value = 14.9
number_of_greater_samples = sum([x >= testing_value for x in medians])
100 * (number_of_greater_samples / len(medians))
###Output
_____no_output_____
###Markdown
If we want to look at covariates, we need a new approach. We'll use Cox proportional hazards, a very popular regression model. To fit it in Python we use the lifelines module: http://lifelines.readthedocs.io/en/latest/
###Code
### Fit a Cox proportional hazards model (see the sketch below)
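# --- Added sketch (not in the original notebook) ---------------------------
# A minimal Cox proportional hazards fit with lifelines, assuming the `df`
# built above (columns: lifetime, age, male, event, search, brand) and that
# lifelines is installed.
from lifelines import CoxPHFitter

cph = CoxPHFitter()
cph.fit(df[['lifetime', 'age', 'male', 'event', 'search', 'brand']],
        duration_col='lifetime', event_col='event')
cph.print_summary()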
###Output
_____no_output_____
###Markdown
Once we've fit the data, we need to do something useful with it. Try to do the following things: 1 - Plot the baseline survival function 2 - Predict the functions for a particular set of features 3 - Plot the survival function for two different sets of features 4 - For your results in part 3 calculate how much more likely a death event is for one than the other for a given period of time
###Code
#### Plot baseline hazard function
#### Predict
#### Plot survival functions for different covariates
#### Plot some odds
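# --- Added sketch (not in the original notebook) ---------------------------
# Assumes the `cph` model fitted in the sketch above. The feature profiles and
# the time horizon t are hypothetical values chosen only for illustration.
# 1 - baseline survival function
cph.baseline_survival_.plot(title='Baseline survival')

# 2 - predicted survival functions for two particular sets of features
profiles = pd.DataFrame({'age': [30, 60], 'male': [1, 0],
                         'search': [1, 0], 'brand': [0, 1]})
surv = cph.predict_survival_function(profiles)

# 3 - plot the two survival functions
surv.plot(title='Predicted survival for two profiles')

# 4 - how much more likely a death (conversion) event is for one profile than the other by time t
t = 10
p_event = 1 - surv.loc[surv.index <= t].iloc[-1]
print('Relative likelihood of an event by t = 10 days:', p_event.iloc[0] / p_event.iloc[1])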
###Output
_____no_output_____
###Markdown
Model selection. Difficult to do with classic tools (here). Problem: 1 - Calculate the BMA coefficient values 2 - Try running with different priors
###Code
#### BMA Coefficient values
#### Different priors
###Output
_____no_output_____ |
.ipynb_checkpoints/gbr_best_75624-checkpoint.ipynb | ###Markdown
Problem: in places a 6 ends up in the ID column instead of the real ID!!!
###Code
k=18
for number in range(1):
k=round(k+2,2)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
test_data = pd.read_csv('test.csv')
train_data = pd.read_csv('train.csv')
    # Functions for cleaning and preparing the data
mean_year = np.round(train_data.loc[train_data['HouseYear'] <= 2020, 'HouseYear'].mean())
mean_healthcare = np.round(train_data["Healthcare_1"].mean())
mean_square_for_max = train_data.loc[(train_data['Rooms'] <= train_data.loc[(train_data['Square'] > 300), 'Rooms'].mean()), 'Square'].mean()
mean_square_for_big_ls = train_data.loc[train_data['LifeSquare'] > 250, 'Square'].mean()
mean_life_squae_for_max = train_data.loc[train_data['Square'] >= mean_square_for_big_ls, 'LifeSquare'].mean()
mean_square_Kitchen=train_data.KitchenSquare.mean()
def clean_year(df, mean_year):
df.loc[df['HouseYear'] > 2020, 'HouseYear'] = mean_year
def clean_life_square(df, koef_S_LS):
df.loc[(df['LifeSquare'] < 10) | (df['LifeSquare'].isnull()), 'LifeSquare'] = df['Square']*0.85
df.loc[df['LifeSquare'] > 250, 'LifeSquare'] = mean_life_squae_for_max
def clean_square(df, mean_square_for_max):
df.loc[(df['Square'] > 300), 'Square'] = mean_square_for_max
def clean_healthcare_1(df, mean_healthcare):
df.loc[df['Healthcare_1'].isnull(), 'Healthcare_1'] = mean_healthcare
def clean_rooms(df):
df.loc[(df['Rooms'] < 1) & (df['LifeSquare'] < 30), 'Rooms'] = 1
df.loc[(df['Rooms'] < 1) & (df['LifeSquare'] > 30) & (df['LifeSquare'] < 45), 'Rooms'] = 2
df.loc[(df['Rooms'] < 1) & (df['LifeSquare'] > 45) & (df['LifeSquare'] < 60), 'Rooms'] = 3
df.loc[(df['Rooms'] < 1) & (df['LifeSquare'] > 60) & (df['LifeSquare'] < 75), 'Rooms'] = 4
df.loc[(df['Rooms'] < 1) & (df['LifeSquare'] > 70), 'Rooms'] = 6
df.loc[(df['Rooms'] > 10), 'Rooms'] = 2
def KitchenSquare(df, mean_square_for_max):
df.loc[(df['KitchenSquare'] < 3) | (df['KitchenSquare'].isnull()), 'KitchenSquare'] = 6
        # fix: assign only to the KitchenSquare column; the old line (df.loc[mask] = 6)
        # overwrote whole rows with 6, which caused the "6 instead of the real ID" problem noted above
        df.loc[(df['KitchenSquare'] > 24), 'KitchenSquare'] = 6
def prepare_data(df, mean_year=mean_year, mean_healthcare=mean_healthcare, mean_square_for_max=mean_square_for_max, mean_life_squae_for_max=mean_life_squae_for_max):
clean_year(df, mean_year)
clean_life_square(df, mean_life_squae_for_max)
clean_healthcare_1(df, mean_healthcare)
clean_rooms(df)
clean_square(df, mean_square_for_max)
KitchenSquare(df, mean_square_for_max)
prepare_data(train_data)
prepare_data(test_data)
X = pd.get_dummies(train_data)
X.drop("Price", axis=1, inplace=True)
X.drop("Id", axis=1, inplace=True)
y = train_data.Price
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.16, random_state=42)
    # retrain and evaluate the model
from sklearn.ensemble import GradientBoostingRegressor
final_model = GradientBoostingRegressor(n_estimators=200, max_depth=5, random_state=42
)
# min_samples_split=5, subsample=0.5 , min_samples_leaf=4
final_model.fit(X_train, y_train)
y_pred_gbr = final_model.predict(X_valid)
y_pred_train_gbr = final_model.predict(X_train)
print('r2: ', r2_score(y_valid, y_pred_gbr),', k: ',k)
    # Predict prices for the test data and export them to a file
X_test = pd.get_dummies(test_data)
X_test.drop("Id", axis=1, inplace=True)
test_data["Price"] = final_model.predict(X_test)
    # export to a file
test_data.loc[:, ['Id', 'Price']].to_csv('best_gbr_04.csv', index=False)
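# Added sanity check: the note at the top flags rows where a 6 ended up in Id.
# With the corrected KitchenSquare step (which now only touches that column),
# no row should consist entirely of 6s. Price is dropped because the test
# prices were just overwritten by the model predictions.
print('train rows that are entirely 6s:', (train_data.drop('Price', axis=1) == 6).all(axis=1).sum())
print('test rows that are entirely 6s: ', (test_data.drop('Price', axis=1) == 6).all(axis=1).sum())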
X_train.KitchenSquare.value_counts()
###Output
_____no_output_____
###Markdown
0.7621056876187297 - test_size=0.16 - n_estimators=200, max_depth=5, random_state=42 (0.75339); r2: 0.770945819723227, k: 20 - KSM
###Code
# k=1000
# for number in range(200):
# X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.16, random_state=42)
# # переобучение и оценка модели
# from sklearn.ensemble import GradientBoostingRegressor
# final_model = GradientBoostingRegressor(n_estimators=200, max_depth=5, random_state=42
# )
# # min_samples_split=5, subsample=0.5 , min_samples_leaf=4
# final_model.fit(X_train, y_train)
# y_pred_gbr = final_model.predict(X_valid)
# y_pred_train_gbr = final_model.predict(X_train)
# print('r2: ', r2_score(y_valid, y_pred),', n_estimators: ',k)
from sklearn.metrics import r2_score as r2, mean_absolute_error as mae, mean_squared_error as mse
import seaborn as sns
def evaluate_preds(true_values, pred_values):
print("R2:\t" + str(round(r2(true_values, pred_values), 9)) + "\n" +
"MAE:\t" + str(round(mae(true_values, pred_values), 9)) + "\n" +
"MSE:\t" + str(round(mse(true_values, pred_values), 9)))
plt.figure(figsize=(10,10))
sns.scatterplot(x=pred_values, y=true_values)
plt.xlabel('Predicted values')
plt.ylabel('True values')
plt.title('True vs Predicted values')
plt.show()
# y_train_preds = final_model.predict(X_train)
# evaluate_preds(y_train, y_train_preds)
###Output
_____no_output_____ |
Notebooks/Model Back End.ipynb | ###Markdown
 Master in Big Data Solutions Final Project - Smart Tour. Pablo Dellacassa, Santiago Borgnino & Muhannad Shahada. Content-based recommender system using a dot-product score and applying Dijkstra's algorithm to find the shortest path
###Code
# Content-based recommender system scoring items by a dot product with the user profile
def function(input1, input2):
import pandas as pd
import numpy as np
model_data=pd.read_csv('model_data.csv')
model_data_content=pd.read_csv('model_data_content.csv')
df_user=model_data_content[(model_data_content['name']==input1) | (model_data_content['name']==input2)].set_index('name')
df_user.drop('Unnamed: 0', axis=1, inplace=True)
userProfile=df_user.transpose().sum(axis=1)
userProfile.to_frame().transpose()
model_data_content2=model_data_content.set_index('name')
recommendationTable = ((model_data_content2*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable=recommendationTable.sort_values(ascending=False).head(5).to_frame()
recommendationTable.reset_index(inplace=True)
recommendationTable.columns=['name', 'dotProduct']
recommended_itinerary=[]
for i in recommendationTable['name'].head(5):
recommended_itinerary.append(i)
graph={recommended_itinerary[0]:{recommended_itinerary[1]:6, recommended_itinerary[2]:2},
recommended_itinerary[1]:{recommended_itinerary[0]:6,recommended_itinerary[2]:2, recommended_itinerary[3]:1, recommended_itinerary[4]:2},
recommended_itinerary[2]:{recommended_itinerary[0]:2,recommended_itinerary[1]:2, recommended_itinerary[3]:2,recommended_itinerary[4]:3},
recommended_itinerary[3]:{recommended_itinerary[1]:1, recommended_itinerary[2]:2,recommended_itinerary[4]:3},
recommended_itinerary[4]:{recommended_itinerary[2]:3,recommended_itinerary[1]:2,recommended_itinerary[3]:3}}
newlist=list()
for i in graph.keys():
newlist.append(i)
return graph, newlist
graph, places=function('LA SAGRADA FAMILIA', 'PARK GÜELL')
#Dijkstra algorithm function
def dijkstra(graph,src,dest,visited=[],distances={},predecessors={}):
""" calculates a shortest path tree routed in src
"""
# a few sanity checks
if src not in graph:
raise TypeError('The root of the shortest path tree cannot be found')
if dest not in graph:
raise TypeError('The target of the shortest path cannot be found')
# ending condition
if src == dest:
# We build the shortest path and display it
path=[]
pred=dest
while pred != None:
path.append(pred)
pred=predecessors.get(pred,None)
# reverses the array, to display the path nicely
readable=path[0]
for index in range(1,len(path)): readable = path[index]+'--->'+readable
#prints it
print('shortest path - array: '+str(path))
print("path: "+readable+", cost="+str(distances[dest]))
else:
# if it is the initial run, initializes the cost
if not visited:
distances[src]=0
# visit the neighbors
for neighbor in graph[src] :
if neighbor not in visited:
new_distance = distances[src] + graph[src][neighbor]
if new_distance < distances.get(neighbor,float('inf')):
distances[neighbor] = new_distance
predecessors[neighbor] = src
# mark as visited
visited.append(src)
# now that all neighbors have been visited: recurse
# select the non visited node with lowest distance 'x'
        # run Dijkstra with src='x'
unvisited={}
for k in graph:
if k not in visited:
unvisited[k] = distances.get(k,float('inf'))
x=min(unvisited, key=unvisited.get)
dijkstra(graph,x,dest,visited,distances,predecessors)
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
#unittest.main()
dijkstra(graph,places[0], places[4])
###Output
shortest path - array: ['GÜELL PALACE', 'CASA BATLLÓ', 'LA SAGRADA FAMILIA']
path: LA SAGRADA FAMILIA--->CASA BATLLÓ--->GÜELL PALACE, cost=5
|
notebooks/chapter13_stochastic/01_markov.ipynb | ###Markdown
> This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python. 13.1. Simulating a discrete-time Markov chain 1. Let's import NumPy and matplotlib.
###Code
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
2. We consider a population that cannot comprise more than $N=100$ individuals. We also define birth and death rates.
###Code
N = 100 # maximum population size
a = .5/N # birth rate
b = .5/N # death rate
###Output
_____no_output_____
###Markdown
3. We will simulate a Markov chain on the finite space $\{0, 1, \ldots, N\}$. Each state represents a population size. The vector $x$ will contain the population size at each time step. We set the initial state to $x_0=25$, i.e. there are 25 individuals in the population at initialization time.
###Code
nsteps = 1000
x = np.zeros(nsteps)
x[0] = 25
###Output
_____no_output_____
###Markdown
4. We now simulate our chain. At each time step $t$, there is a new birth with probability $a \cdot x_t$, and independently, there is a new death with probability $b \cdot x_t$. These probabilities are proportional to the size of the population at that time. If the population size reaches $0$ or $N$, the evolution stops.
###Code
for t in range(nsteps - 1):
if 0 < x[t] < N-1:
# Is there a birth?
birth = np.random.rand() <= a*x[t]
# Is there a death?
death = np.random.rand() <= b*x[t]
# We update the population size.
x[t+1] = x[t] + 1*birth - 1*death
# The evolution stops if we reach $0$ or $N$.
else:
x[t+1] = x[t]
###Output
_____no_output_____
###Markdown
5. Let's look at the evolution of the population size.
###Code
plt.figure(figsize=(6,3));
plt.plot(x);
###Output
_____no_output_____
###Markdown
We see that, at every time, the population size can stay stable, increase by 1, or decrease by 1. 6. Now, we will simulate many independent trials of this Markov chain. We could run the previous simulation with a loop, but it would be very slow (two nested `for` loops). Instead, we *vectorize* the simulation by considering all independent trials at once. There is a single loop over time. At every time step, we update all trials simultaneously with vectorized operations on vectors. The vector `x` now contains the population size of all trials, at a particular time. At initialization time, the population sizes are set to random numbers between $0$ and $N$.
###Code
ntrials = 100
x = np.random.randint(size=ntrials,
low=0, high=N)
###Output
_____no_output_____
###Markdown
7. We define a function that performs the simulation. At every time step, we find the trials that undergo births and deaths by generating random vectors, and we update the population sizes with vector operations.
###Code
def simulate(x, nsteps):
"""Run the simulation."""
for _ in range(nsteps - 1):
# Which trials to update?
upd = (0 < x) & (x < N-1)
# In which trials do births occur?
birth = 1*(np.random.rand(ntrials) <= a*x)
# In which trials do deaths occur?
death = 1*(np.random.rand(ntrials) <= b*x)
# We update the population size for all trials.
x[upd] += birth[upd] - death[upd]
###Output
_____no_output_____
###Markdown
8. Now, we will look at the histograms of the population size at different times. These histograms represent the probability distribution of the Markov chain, estimated with independent trials (Monte Carlo method).
###Code
bins = np.linspace(0, N, 25);
plt.figure(figsize=(12,3));
nsteps_list = [10, 1000, 10000]
for i, nsteps in enumerate(nsteps_list):
plt.subplot(1, len(nsteps_list), i + 1);
simulate(x, nsteps)
plt.hist(x, bins=bins);
plt.xlabel("Population size");
if i == 0:
plt.ylabel("Histogram");
plt.title("{0:d} time steps".format(nsteps));
###Output
_____no_output_____ |
gym_qiskit-game/envs/qiskit-game-env.ipynb | ###Markdown
**AlphaZero-like algorithm for the Qiskit game** We need to define an environment that is able to interact with OpenAI Gym. It must necessarily contain init, reset, render and close. We also introduce the methods step, calc_reward and others that are instrumental.
###Code
from qiskit import *
import re
import numpy as np
import math
from copy import copy, deepcopy
import gym
from gym import spaces
from fastai.text import *
class QiskitGameEnv(gym.Env):
'''
The game starts in state |+>|->|+>|->|+>|-> and the objective of each player is to measure as many 0 or 1 as possible
'''
metadata = {'render.modes': ['human']}
def __init__(self):
self.max_turns = 10
self.qubits = 6
self.turn = True
self.objective = 1 #Assume 1 means that we want to measure 1
self.adversary_objective = -self.objective
self.temperature = .001
#self.P = P # Is there a difference between a given state and the environment? - > probably yes. Correct this
#self.v = v
self.gates = ['H','X','Z','CX','CZ','M']
self.simulator = Aer.get_backend('qasm_simulator')
self.circuit = QuantumCircuit(self.qubits,self.qubits) #self.qubit qubits and self.qubit bits to store the result
self.viewer = None
self.step_count = 0
self.measured = []
self.action_space = [self.gates, spaces.Discrete(self.qubits), spaces.Discrete(self.qubits)]
#first we indicate the gate, then the first qubit to which it applies, and latter the second qubit it is applied to.
self.seed()
def step(self, action):
# The format of action is (gate, qubit1, qubit2)
self.step_count += 1
if action[0] not in self.gates:
raise Exception('Not valid gate!')
if action[1] in self.measured:
raise Exception('Already measured qubit!')
if action[1] not in self.measured:
if action[0] == 'H':
self.circuit.h(action[1]) #apply Hadamard to qubit action[1]
elif action[0] == 'M':
self.circuit.measure(action[1],action[1]) #measures qubit in action[1] and saves the result to bit action[1]
            self.measured += [action[1]] # This qubit was measured (needed so the double-measurement check and the game loop work)
elif action[0] == 'X':
self.circuit.x(action[1]) #apply X to qubit action[1]
elif action[0] == 'Z':
self.circuit.z(action[1]) #apply Z to qubit action[1]
elif action[0] == 'CX':
self.circuit.cx(action[1],action[2]) #apply CX from qubit action[2] to qubit action[1]
elif action[0] == 'CZ':
self.circuit.cz(action[1],action[2]) #apply CZ from qubit action[2] to qubit action[1]
def reset(self):
self.step_count = 0
self.circuit = QuantumCircuit(self.qubits,self.qubits)
for qubit in range(0,self.qubits):
if qubit % 2 == 1:
self.circuit.x(qubit) #Apply X on the odd qubits
self.circuit.h(qubit) #Apply H on all qubits
self.measured = [] # This is a list of what qubits has been measured
return self.circuit # The initial state is |+>|->|+>|->|+>|->
def render(self, mode='human'):
return self.circuit.draw()
    def calc_reward(self): # I need to play around with partial measurements to see what happens: if nothing has been measured the output is 0
circ = deepcopy(self.circuit) #To run a simulation we use a copy
# Use Aer's qasm_simulator
backend_sim = Aer.get_backend('qasm_simulator')
# Execute the circuit on the qasm simulator.
# We've set the number of repeats of the circuit
# to be 1024, which is the default.
job_sim = execute(circ, backend_sim, shots=1024)
# Grab the results from the job.
result_sim = job_sim.result()
counts = result_sim.get_counts(circ)
self.counts = counts
reward = 0
        measured = self.measured_qubits() # takes no arguments; it reads self.circuit itself
for key in counts.keys():
counter = circ.n_qubits - 1 # Due to the structure of the representation of qubits, we start by the biggest and go to the lowest
key_reward = 0
for digit in str(key):
if counter in measured:
key_reward += 2*int(digit)-1
counter -= 1
reward += key_reward * counts[key]
if reward == 0:
reward = 2*(float(np.random.rand(1))-0.5)
reward *= .001
self.reward = reward
del circ
def count_letter(string,letter):
count = 0
for any_letter in string:
if any_letter == letter:
count += 1
return count
#def next_state(self,action):
def measured_qubits(self):
measured_qubits = { qarg for (inst, qargs, cargs) in self.circuit.data for qarg in qargs if inst.name == 'measure' }
list_measured_qubits = []
for qubit in measured_qubits:
list_measured_qubits += [qubit.index]
return list_measured_qubits
def close(self):
return
###Output
_____no_output_____
###Markdown
Next we want to define Node(), Edge() and the Monte Carlo Tree Search, MCTS(). Node can calculate the reward, calc_reward, and if it has never been previously expanded, expand the children, calc_children.
###Code
class Node():
    def __init__(self,circuit,parent=None,previous_edge = None, value=0,children = None):
'''
circuit: class QuantumCircuit
parent: class Node
children: list of Nodes
'''
if type(circuit) != QuantumCircuit:
raise Exception('Not a circuit!')
#super().__init__()
self.value = value
#self.probabilities = probabilities
        self.children = children if children is not None else [] # list of (node, edge) tuples; avoids a shared mutable default
self.parent = parent
self.circuit = circuit
self.one_qubit_gates = ['H','X','Z','M']
self.two_qubit_gates = ['CX','CZ']
self.gates = self.one_qubit_gates + self.two_qubit_gates
self.measured = self.measured_qubits(self.circuit) #calculates self.measured
self.reward = None
self.previous_edge = previous_edge
def calc_reward(self):
'''
Calculates the reward of a circuit simulating it once
'''
measured = self.measured_qubits(self.circuit) #Figure out which qubits have been measured
circ = deepcopy(self.circuit) #To run an experiment we use a copy
# Use Aer's qasm_simulator
backend_sim = Aer.get_backend('qasm_simulator')
# Execute the circuit on the qasm simulator.
# We've set the number of repeats of the circuit
# to be 1024, which is the default.
job_sim = execute(circ, backend_sim, shots=1024)
# Grab the results from the job.
result_sim = job_sim.result()
counts = result_sim.get_counts(circ)
self.counts = counts #----
#print('counts: ',counts)
reward = 0
for key in counts.keys():
counter = circ.n_qubits - 1 # Due to the structure of the representation of qubits, we start by the biggest and go to the lowest
key_reward = 0
for digit in str(key):
if counter in self.measured:
key_reward += 2*int(digit)-1
counter -= 1
reward += key_reward * counts[key]
if reward == 0:
reward = 2*(float(np.random.rand(1))-0.5)
reward *= .001
self.reward = reward
del circ
def calc_children(self):
'''
Expands one layer of the MCTS
'''
self.children = []
number_of_qubits = self.circuit.n_qubits
for gate in self.one_qubit_gates:
for qubit1 in range(number_of_qubits):
if qubit1 not in self.measured:
new_circuit = deepcopy(self.circuit) #'''Need to clone it'''
                    if gate == 'H':
                        new_circuit.h(qubit1)
                    elif gate == 'M':
                        new_circuit.measure(qubit1,qubit1)
                    elif gate == 'X':
                        new_circuit.x(qubit1)
                    elif gate == 'Z':
                        new_circuit.z(qubit1)
qubit2 = -1
new_node = Node(new_circuit, parent = self)
new_edge = Edge(self,(gate,qubit1,qubit2),new_node,P=1,N=0) # Important to call the NN to calculate P!!!
new_node.previous_edge = new_edge
self.children += [(new_node,new_edge)]
for gate in self.two_qubit_gates:
for qubit1 in range(number_of_qubits):
for qubit2 in range(number_of_qubits):
if (qubit1 != qubit2) and (qubit1 not in self.measured) and (qubit2 not in self.measured):
new_circuit = deepcopy(self.circuit)
                        if gate == 'CX':
                            new_circuit.cx(qubit1,qubit2)
                        elif gate == 'CZ':
                            new_circuit.cz(qubit1,qubit2)
new_node = Node(new_circuit, parent = self)
new_edge = Edge(self,(gate,qubit1,qubit2),new_node,P=1,N=0) # Important to call the NN to calculate P!!!
new_node.previous_edge = new_edge
self.children += [(new_node,new_edge)]
# edge = Edge(circuit,(gate,qubit1,qubit2),P,N+1)
def measured_qubits(self,circuit):
measured_qubits = { qarg for (inst, qargs, cargs) in circuit.data for qarg in qargs if inst.name == 'measure' }
list_measured_qubits = []
for qubit in measured_qubits:
list_measured_qubits += [qubit.index]
return list_measured_qubits
###Output
_____no_output_____
###Markdown
Edge should be able to calculate $Q$ and $U$. The key here is that for each edge we should feed the number of times we have explored this edge, $N$, and $P$, calculated by the Neural Network initially.
###Code
class Edge():
def __init__(self, state, action, next_state, P = 1, N = 0):
'''
state: class Node
action: tuple (string, number, possible number)
'''
#print('creating one new edge')
if type(state) != Node:
#print(type(state))
raise Exception('Not a node!')
self.circuit = state.circuit
self.state = state
self.N = N
self.P = P # Set P = None once we define P_function()
self.action = action
self.next_state = next_state
#print(next_state.circuit) #
#self.Q_function()
self.Q = 0
self.U_function()
def Q_function(self):
#next_state = deepcopy(self.next_state)
sum_values = self.sum_of_values(self.next_state)
self.Q = sum_values / (self.N)
def U_function(self): #Need P that comes from the NN
self.U = self.P / (1+self.N)
def sum_of_values(self,state): #Returns \sum V(s') such that s,a eventually reaches s'
'''
state: class Node
'''
if type(state) != Node:
raise Exception('state is not a node for sum_of_values')
list_of_states = [state]
value = state.value
for s in state.children:
if s[0] not in list_of_states:
value_add = self.sum_of_values(s[0])
value += value_add
list_of_states += [s[0]]
return value
'''
def P_function(self,NN):
#Calculates the probability that in state self.circuit one takes self.action
'''
###Output
_____no_output_____
###Markdown
Finally, we want to define the MCTS. We need functions to select a new node to which one wants to move. One would also like to have a rollout policy, a backup and a calc_reward.
###Code
class MCTS():
def __init__(self,root_node,n_iterations=1,depth=5,temperature=.001):
self.root_node = root_node
self.n_iterations = n_iterations
self.depth = depth
self.temperature = temperature
def play(self):
for i in range(0,self.n_iterations):
node = self.root_node
#Expand
for j in range(0,self.depth):
print('expanded ',j,' times')
node, _ = self.select(node)
#Rollout
self.rollout(node)
#Backup
self.backup(node)
children = self.root_node.children
options = []
edges = []
for child in children:
options += [child[0]]
edges += [child[1]]
action_probabilities = []
sumN = sum((edge.N) for edge in edges)
action_probabilities += [(float(edge.N)/sumN)**(1/self.temperature) for edge in edges]
return action_probabilities, edges, options
def select(self,node): # np.radom.choice(action , number_of_items_to_pick = 1, P)
#print(node.circuit)
if node.children == []:
print('calculating children')
node.calc_children()
children = node.children
options = []
edges = []
for child in children:
options += [child[0]]
edges += [child[1]]
probabilities = []
for edge in edges:
probabilities += [edge.P]
        index = np.argmax([edge.Q+edge.U for edge in edges]) # build a list (argmax over a generator does not work); since at the beginning all scores tie, index = 0
#print(index)
new_node, new_edge = children[index]
new_edge.N += 1
new_edge.Q_function()
new_edge.U_function()
#print(new_node.circuit)
return new_node, new_edge
def rollout(self,rollout_node):
node = deepcopy(rollout_node)
for i in range(0,5):
if node.children == []:
node.calc_children()
children = node.children
options = []
edges = []
for child in children:
options += [child[0]]
edges += [child[1]]
probabilities = []
            probabilities += (edge.P for edge in edges) # += with a generator extends the list, so this works
#for edge in edges:
# probabilities += [edge.P]
'''To eliminate when the NN predicts probabilities, if needed'''
#print('probabilities',probabilities)
suma = sum(probabilities)
probabilities = [probability/suma for probability in probabilities] # Ensure that probabilities are norm-1 normalized
#print(probabilities)
#print(node.children)
indices = np.arange(len(node.children))
#print(indices)
            index = int(np.random.choice(indices, 1, p=probabilities)) # use the p= keyword; the third positional argument of np.random.choice is `replace`
#print(index)
node, edge = node.children[index]
#print(node,edge)
node.calc_reward()
#print('rollout node reward', node.reward)
#print(node.circuit)
rollout_node.value += node.reward
del node
def backup(self,node):
while node.previous_edge != None:
previous_edge = node.previous_edge
#print('Q value before', previous_edge.Q)
previous_edge.Q_function()
#print('Q value after', previous_edge.Q)
node = previous_edge.state
###Output
_____no_output_____
###Markdown
To define the neural network we need to format the input and output. Each training example is a tuple (state, action_probabilities, success_probability). That is, given a state, the network should predict the action probabilities and a success probability (a minimal network sketch is appended at the end of the following code cell).
###Code
def play_once():
data_DL =[] # Data used to train the NN
game = QiskitGameEnv() #Environment
game.reset() #Initialize environment
    while len(game.measured) < 6: # game.measured is updated inside step() whenever a qubit is measured
node = deepcopy(Node(game.circuit))
MonteCarlo = MCTS(node) #Create the MonteCarlo game
#Run MonteCarlo
action_probabilities, edges, nodes = MonteCarlo.play()
#Saving the data
data_DL +=[(game.circuit,action_probabilities, edges, nodes)]
#Choose next step
indices = np.arange(len(edges))
#print(action_probabilities)
        probs = np.array(action_probabilities)
        probs = probs / probs.sum() # normalise so they can be used as probabilities
        index = int(np.random.choice(indices, 1, p=probs)) # p= keyword; the third positional argument is `replace`
edge = edges[index]
action = edge.action
game.step(action)
del node
print(game.circuit)
if game.step_count > 5:
break
return None
#return data_DL
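# --- Added sketch (not in the original notebook) ----------------------------
# A minimal policy/value network that could consume the (state, action
# probabilities) pairs collected in data_DL. Everything below is an assumption
# for illustration: the circuit/state is encoded as a fixed-length feature
# vector of size `state_dim`, and `n_actions` is the number of legal moves
# (both are hypothetical placeholders).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyValueNet(nn.Module):
    def __init__(self, state_dim=64, n_actions=84, hidden=128):
        super().__init__()
        self.body = nn.Linear(state_dim, hidden)
        self.policy_head = nn.Linear(hidden, n_actions)  # would feed Edge.P
        self.value_head = nn.Linear(hidden, 1)           # would seed Node.value

    def forward(self, x):
        h = torch.relu(self.body(x))
        p = F.softmax(self.policy_head(h), dim=-1)  # action probabilities
        v = torch.tanh(self.value_head(h))          # success estimate in [-1, 1]
        return p, v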
###Output
_____no_output_____
###Markdown
The main algorithm, still to be completed. Perhaps a Transformer would work as the DL architecture to predict steps.
###Code
if __name__ == '__main__':
game = QiskitGameEnv()
# Initialize the NN -> Recall to modify the P_function() in Edge()
    for j in range(0,100):
data_dL = []
for i in range(0,100):
data_dL += play_once()
# train the NN with data_dL
# Collect some statistics
###Output
_____no_output_____ |
v0.12.2/examples/notebooks/generated/robust_models_0.ipynb | ###Markdown
Robust Linear Models
###Code
%matplotlib inline
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
###Output
_____no_output_____
###Markdown
Estimation. Load data:
###Code
data = sm.datasets.stackloss.load(as_pandas=False)
data.exog = sm.add_constant(data.exog)
###Output
_____no_output_____
###Markdown
Huber's T norm with the (default) median absolute deviation scaling
###Code
huber_t = sm.RLM(data.endog, data.exog, M=sm.robust.norms.HuberT())
hub_results = huber_t.fit()
print(hub_results.params)
print(hub_results.bse)
print(hub_results.summary(yname='y',
xname=['var_%d' % i for i in range(len(hub_results.params))]))
###Output
[-41.02649835 0.82938433 0.92606597 -0.12784672]
[9.79189854 0.11100521 0.30293016 0.12864961]
Robust linear Model Regression Results
==============================================================================
Dep. Variable: y No. Observations: 21
Model: RLM Df Residuals: 17
Method: IRLS Df Model: 3
Norm: HuberT
Scale Est.: mad
Cov Type: H1
Date: Tue, 02 Feb 2021
Time: 06:52:45
No. Iterations: 19
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
var_0 -41.0265 9.792 -4.190 0.000 -60.218 -21.835
var_1 0.8294 0.111 7.472 0.000 0.612 1.047
var_2 0.9261 0.303 3.057 0.002 0.332 1.520
var_3 -0.1278 0.129 -0.994 0.320 -0.380 0.124
==============================================================================
If the model instance has been used for another fit with different fit parameters, then the fit options might not be the correct ones anymore .
###Markdown
Huber's T norm with 'H2' covariance matrix
###Code
hub_results2 = huber_t.fit(cov="H2")
print(hub_results2.params)
print(hub_results2.bse)
###Output
[-41.02649835 0.82938433 0.92606597 -0.12784672]
[9.08950419 0.11945975 0.32235497 0.11796313]
###Markdown
Andrew's Wave norm with Huber's Proposal 2 scaling and 'H3' covariance matrix
###Code
andrew_mod = sm.RLM(data.endog, data.exog, M=sm.robust.norms.AndrewWave())
andrew_results = andrew_mod.fit(scale_est=sm.robust.scale.HuberScale(), cov="H3")
print('Parameters: ', andrew_results.params)
###Output
Parameters: [-40.8817957 0.79276138 1.04857556 -0.13360865]
###Markdown
See ``help(sm.RLM.fit)`` for more options and ``module sm.robust.scale`` for scale options. Comparing OLS and RLM. Artificial data with outliers:
###Code
nsample = 50
x1 = np.linspace(0, 20, nsample)
X = np.column_stack((x1, (x1-5)**2))
X = sm.add_constant(X)
sig = 0.3 # smaller error variance makes OLS<->RLM contrast bigger
beta = [5, 0.5, -0.0]
y_true2 = np.dot(X, beta)
y2 = y_true2 + sig*1. * np.random.normal(size=nsample)
y2[[39,41,43,45,48]] -= 5 # add some outliers (10% of nsample)
###Output
_____no_output_____
###Markdown
Example 1: quadratic function with linear truth. Note that the quadratic term in OLS regression will capture outlier effects.
###Code
res = sm.OLS(y2, X).fit()
print(res.params)
print(res.bse)
print(res.predict())
###Output
[ 4.9399875 0.54301113 -0.01461384]
[0.44996437 0.06946843 0.00614688]
[ 4.57464158 4.85349246 5.12747409 5.39658648 5.66082961 5.92020349
6.17470812 6.4243435 6.66910964 6.90900652 7.14403415 7.37419253
7.59948167 7.81990155 8.03545218 8.24613356 8.45194569 8.65288857
8.84896221 9.04016659 9.22650172 9.4079676 9.58456423 9.75629162
9.92314975 10.08513863 10.24225826 10.39450864 10.54188977 10.68440166
10.82204429 10.95481767 11.0827218 11.20575668 11.32392231 11.4372187
11.54564583 11.64920371 11.74789234 11.84171172 11.93066185 12.01474273
12.09395437 12.16829675 12.23776988 12.30237376 12.36210839 12.41697377
12.4669699 12.51209678]
###Markdown
Estimate RLM:
###Code
resrlm = sm.RLM(y2, X).fit()
print(resrlm.params)
print(resrlm.bse)
###Output
[ 4.84780243e+00 5.33332553e-01 -4.59650485e-03]
[0.15278692 0.02358824 0.00208719]
###Markdown
Draw a plot to compare OLS estimates to the robust estimates:
###Code
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot(x1, y2, 'o',label="data")
ax.plot(x1, y_true2, 'b-', label="True")
prstd, iv_l, iv_u = wls_prediction_std(res)
ax.plot(x1, res.fittedvalues, 'r-', label="OLS")
ax.plot(x1, iv_u, 'r--')
ax.plot(x1, iv_l, 'r--')
ax.plot(x1, resrlm.fittedvalues, 'g.-', label="RLM")
ax.legend(loc="best")
###Output
_____no_output_____
###Markdown
Example 2: linear function with linear truth. Fit a new OLS model using only the linear term and the constant:
###Code
X2 = X[:,[0,1]]
res2 = sm.OLS(y2, X2).fit()
print(res2.params)
print(res2.bse)
###Output
[5.52901458 0.39687276]
[0.3933935 0.03389637]
###Markdown
Estimate RLM:
###Code
resrlm2 = sm.RLM(y2, X2).fit()
print(resrlm2.params)
print(resrlm2.bse)
###Output
[5.01652461 0.48945402]
[0.12230441 0.01053824]
###Markdown
Draw a plot to compare OLS estimates to the robust estimates:
###Code
prstd, iv_l, iv_u = wls_prediction_std(res2)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x1, y2, 'o', label="data")
ax.plot(x1, y_true2, 'b-', label="True")
ax.plot(x1, res2.fittedvalues, 'r-', label="OLS")
ax.plot(x1, iv_u, 'r--')
ax.plot(x1, iv_l, 'r--')
ax.plot(x1, resrlm2.fittedvalues, 'g.-', label="RLM")
legend = ax.legend(loc="best")
###Output
_____no_output_____ |
pandas/.ipynb_checkpoints/Gabarito - Pandas 07-checkpoint.ipynb | ###Markdown
Adding Columns, Modifying Columns and Values. Let's grab our dataframe again
###Code
import pandas as pd
# importing the files
vendas_df = pd.read_csv(r'Contoso - Vendas - 2017.csv', sep=';')
produtos_df = pd.read_csv(r'Contoso - Cadastro Produtos.csv', sep=';')
lojas_df = pd.read_csv(r'Contoso - Lojas.csv', sep=';')
clientes_df = pd.read_csv(r'Contoso - Clientes.csv', sep=';')
# keeping only the columns we want
clientes_df = clientes_df[['ID Cliente', 'E-mail']]
produtos_df = produtos_df[['ID Produto', 'Nome do Produto']]
lojas_df = lojas_df[['ID Loja', 'Nome da Loja']]
# merging and renaming the dataframes
vendas_df = vendas_df.merge(produtos_df, on='ID Produto')
vendas_df = vendas_df.merge(lojas_df, on='ID Loja')
vendas_df = vendas_df.merge(clientes_df, on='ID Cliente').rename(columns={'E-mail': 'E-mail do Cliente'})
display(vendas_df)
vendas_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 980642 entries, 0 to 980641
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Numero da Venda 980642 non-null int64
1 Data da Venda 980642 non-null object
2 Data do Envio 980642 non-null object
3 ID Canal 980642 non-null int64
4 ID Loja 980642 non-null int64
5 ID Produto 980642 non-null int64
6 ID Promocao 980642 non-null int64
7 ID Cliente 980642 non-null int64
8 Quantidade Vendida 980642 non-null int64
9 Quantidade Devolvida 980642 non-null int64
10 Nome do Produto 980642 non-null object
11 Nome da Loja 980642 non-null object
12 E-mail do Cliente 980642 non-null object
dtypes: int64(8), object(5)
memory usage: 104.7+ MB
###Markdown
Now, what if we want to add a column with the month, the day and the year of each sale (and not just the full date)?
###Code
# modifying an entire column
# parsing the day/month/year format (the text becomes a datetime)
vendas_df['Data da Venda'] = pd.to_datetime(vendas_df['Data da Venda'], format='%d/%m/%Y')
# adding a column with the year of the sale
vendas_df['Ano da Venda'] = vendas_df['Data da Venda'].dt.year
# adding a column with the month of the sale
vendas_df['Mes da Venda'] = vendas_df['Data da Venda'].dt.month
# adding a column with the day of the sale
vendas_df['Dia da Venda'] = vendas_df['Data da Venda'].dt.day
display(vendas_df)
vendas_df.info()
###Output
_____no_output_____
###Markdown
And now, if we want to modify one specific value, how do we do it? Let's import the products table again
###Code
novo_produtos_df = pd.read_csv(r'Contoso - Cadastro Produtos.csv', sep=';')
display(novo_produtos_df.head())
# note the .head() to grab only the first rows; this is very commonly used to get a quick view of what the data looks like
# to grab the last rows
display(novo_produtos_df.tail())
###Output
_____no_output_____
###Markdown
Before moving on to the next example, we need to talk about 2 methods: 1. loc - lets you grab a row by its index label. It raises an error if it cannot find the index. This is interesting mainly when the index carries relevant information instead of just the row number, or when we want to grab a specific row of the dataframe (instead of going from the start of the dataframe up to row 5, for example). We can also use it as loc[row_label, column_label] to access a specific value and modify it. 2. iloc - sees the dataframe as rows and columns and can grab a value using a row number and a column number. Note that it does not look at the value of the row or column index; only the position matters. Usage: iloc[row_number, column_number] - Seeing it in practice
###Code
# setting 'Nome do Produto' as the index
novo_produtos_df = novo_produtos_df.set_index('Nome do Produto')
display(novo_produtos_df.head())
# let's grab the price of the product Contoso Optical Wheel OEM PS/2 Mouse E60 Black
# using loc (loc selects by label)
print(novo_produtos_df.loc['Contoso Optical Wheel OEM PS/2 Mouse E60 Black', 'Preco Unitario'])
# using iloc (iloc selects by row and column position; the column positions start counting after the index = Nome do Produto)
print(novo_produtos_df.iloc[2, 5])
###Output
13
13
###Markdown
The company decided to raise the price of product ID 873 (Contoso Wireless Laser Mouse E50 Grey) to 23 reais. How do we change that in our data?
###Code
novo_produtos_df.loc['Contoso Wireless Laser Mouse E50 Grey', 'Preco Unitario'] = 23  # update the price column ('Preco Unitario'), not the ID
display(novo_produtos_df.head())
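# Quick check (added): confirm the new price landed in the 'Preco Unitario' column
print(novo_produtos_df.loc['Contoso Wireless Laser Mouse E50 Grey', 'Preco Unitario'])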
###Output
_____no_output_____ |
notebooks/07.01-Measuring-Return.ipynb | ###Markdown
*This notebook contains course material from [CBE40455](https://jckantor.github.io/CBE40455) by Jeffrey Kantor (jeff at nd.edu); the content is available [on Github](https://github.com/jckantor/CBE40455.git). The text is released under the [CC-BY-NC-ND-4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT).* Measuring Return. How much does one earn relative to the amount invested? This is the basic concept of return, and one of the fundamental measurements of financial performance. This notebook examines the different ways in which return can be measured. Pandas-datareader. As will be shown below, [pandas-datareader](https://github.com/pydata/pandas-datareader) provides a convenient means to access and manipulate financial data using the Pandas library. The pandas-datareader is normally imported separately from pandas. Typical installation is pip install pandas-datareader from a terminal window, or executing !pip install pandas-datareader in a Jupyter notebook cell. The Google Colab environment now includes pandas-datareader, so no separate installation is required. Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import datetime
import pandas as pd
import pandas_datareader as pdr
###Output
_____no_output_____
###Markdown
Where to get Price DataThis notebook uses the price of stocks and various commodity goods for the purpose of demonstrating returns. Price data is available from a number of sources. Here we demonstrate the process of obtaining price data on financial goods from [Yahoo Finance](http://finance.yahoo.com/) and downloading price data sets from [Quandl](http://www.quandl.com/). (UPDATE: [Look here for an alternative descripton of how to get live market data from Yahoo Finance](https://towardsdatascience.com/python-how-to-get-live-market-data-less-than-0-1-second-lag-c85ee280ed93).)The most comprehensive repositories of financial data are commercial enterprises. Some provide a free tier of service for limited use, typically 50 inquires a day or several hundred a month. Some require registration to access the free tier. These details are a constantly changing. A listing of free services is available from [awesome-quant](https://github.com/wilsonfreitas/awesome-quantdata-sources), but please note that details change quickly. [Another useful collection of stock price data using Python](https://towardsdatascience.com/how-to-get-stock-data-using-python-c0de1df17e75). Stock SymbolsStock price data is usually indexed and accessed by stock symbols. Stock symbols are unique identifiers for a stock, commodity, or other financial good on a specific exchanges. For example, [this is a list of symbols for the New York Stock Exchange (NYSE)](http://www.eoddata.com/symbols.aspx?AspxAutoDetectCookieSupport=1) The following function looks up details of stock symbol on yahoo finance..
###Code
# python libraray for accessing internet resources
import requests
def lookup_yahoo(symbol):
"""Return a list of all matches for a symbol on Yahoo Finance."""
url = f"http://d.yimg.com/autoc.finance.yahoo.com/autoc?query={symbol}®ion=1&lang=en"
return requests.get(url).json()["ResultSet"]["Result"]
lookup_yahoo("XOM")
def get_symbol(symbol):
"""Return exact match for a symbol."""
result = [r for r in lookup_yahoo(symbol) if symbol == r['symbol']]
return result[0] if len(result) > 0 else None
get_symbol('TSLA')
###Output
_____no_output_____
###Markdown
Yahoo Finance. [Yahoo Finance](http://finance.yahoo.com/) provides historical Open, High, Low, Close, and Volume data for quotes on traded securities. In addition, Yahoo Finance provides historical [Adjusted Close](http://marubozu.blogspot.com/2006/09/how-yahoo-calculates-adjusted-closing.html) price data that corrects for splits and dividend distributions. Adjusted Close is a useful tool for computing the return on long-term investments. The following cell demonstrates how to download historical Adjusted Close prices for a selected security into a pandas DataFrame.
###Code
symbol = 'TSLA'
# get symbol data
symbol_data = get_symbol(symbol)
assert symbol_data, f"Symbol {symbol} wasn't found."
# start and end of a three year interval that ends today
end = datetime.datetime.today().date()
start = end - datetime.timedelta(3*365)
# get stock price data
S = pdr.data.DataReader(symbol, "yahoo", start, end)['Adj Close']
# plot data
plt.figure(figsize=(10,4))
title = f"{symbol_data['name']} ({symbol_data['exchDisp']} {symbol_data['typeDisp']} {symbol_data['symbol']})"
S.plot(title=title)
plt.ylabel('Adjusted Close')
plt.grid()
###Output
_____no_output_____
###Markdown
Note that `S` is an example of a Pandas time series.
###Code
S
###Output
_____no_output_____
###Markdown
Pandas time series are indexed by datetime entries. There is a large collection of functions in Pandas for manipulating time series data.
###Code
S["2018"].plot()
###Output
_____no_output_____
###Markdown
Quandl. [Quandl](http://www.quandl.com/) is a searchable source of time-series data on a wide range of commodities, financials, and many other economic and social indicators. Data from Quandl can be downloaded as files in various formats, or accessed directly using the [Quandl API](http://www.quandl.com/help/api) or a software-specific package. Here we demonstrate the use of the [Quandl Python package](http://www.quandl.com/help/packagesPython). The first step is to execute a system command to check that the Quandl package has been installed. Here are examples of energy datasets. These were found by searching Quandl, then identifying the Quandl code used for accessing the dataset, a description, and the name of the field containing the desired price information.
###Code
%%capture
capture = !pip install quandl
code = 'CHRIS/MCX_CL1'
description = 'NYMEX Crude Oil Futures, Continuous Contract #1 (CL1) (Front Month)'
field = 'Close'
import quandl
end = datetime.datetime.today().date()
start = end - datetime.timedelta(5*365)
try:
S = quandl.get(code, collapse='daily', trim_start=start.isoformat(), trim_end=end.isoformat())[field]
plt.figure(figsize=(10,4))
S.plot()
plt.title(description)
plt.ylabel('Price $/bbl')
plt.grid()
except:
pass
###Output
_____no_output_____
###Markdown
Returns. The statistical properties of financial series are usually studied in terms of the change in prices. There are several reasons for this; key among them is that the changes can often be closely approximated as stationary random variables whereas prices are generally non-stationary sequences. A common model is $$S_{t} = R_{t} S_{t-1}$$ so, recursively, $$S_{t} = R_{t} R_{t-1} \cdots R_{1} S_{0}$$ The gross return $R_t$ is simply the ratio of the current price to the previous, i.e., $$R_t = \frac{S_t}{S_{t-1}}$$ $R_t$ will typically be a number close to one in value. The return is greater than one for an appreciating asset, or less than one for a declining asset. The Pandas timeseries `shift()` function is used to compute the ratio $\frac{S_t}{S_{t-1}}$. Shifting a timeseries 1 day forward, i.e., `shift(1)`, shifts $S_{t-1}$ to time $t$. That's why `R = S/S.shift(1)` provides the correct calculation for the quantities $R_t$.
###Code
print([S, S.shift(1)])
symbol = 'TSLA'
end = datetime.datetime.today().date()
start = end - datetime.timedelta(3*365)
# get stock price data
S = pdr.data.DataReader(symbol, "yahoo", start, end)['Adj Close']
R = S/S.shift(1)
# plot data
plt.figure(figsize=(10, 5))
plt.subplot(2, 1, 1)
S.plot(title=symbol)
plt.ylabel('Adjusted Close')
plt.grid()
plt.subplot(2, 1, 2)
R.plot()
plt.ylabel('Returns')
plt.grid()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Linear fractional or Arithmetic Returns. Perhaps the most common way of reporting returns is simply the fractional increase in value of an asset over a period, i.e., $$r^{lin}_t = \frac{S_t - S_{t-1}}{S_{t-1}} = \frac{S_t}{S_{t-1}} - 1 $$ Obviously $$r^{lin}_t = R_t - 1$$
###Code
symbol = 'TSLA'
end = datetime.datetime.today().date()
start = end - datetime.timedelta(3*365)
# get stock price data
S = pdr.data.DataReader(symbol, "yahoo", start, end)['Adj Close']
rlin = S/S.shift(1) - 1
# plot data
plt.figure(figsize=(10,5))
plt.subplot(2,1,1)
S.plot(title=symbol)
plt.ylabel('Adjusted Close')
plt.grid()
plt.subplot(2,1,2)
rlin.plot()
plt.title('Linear Returns (daily)')
plt.grid()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Linear returns don't tell the whole story. Suppose you put money in an asset that returns 10% interest in even numbered years, but loses 10% in odd numbered years. Is this a good investment for the long-haul? If we look at mean linear return\begin{align}\bar{r}^{lin} & = \frac{1}{T}\sum_{t=1}^{T} r^{lin}_t \\& = \frac{1}{T} (0.1 - 0.1 + 0.1 - 0.1 + \cdots) \\& = 0\end{align} we would conclude this asset, on average, offers zero return. What does a simulation show?
###Code
S = 100
log = [[0,S]]
r = 0.10
for k in range(1,101):
S = S + r*S
r = -r
log.append([k,S])
df = pd.DataFrame(log,columns = ['k','S'])
plt.plot(df['k'],df['S'])
plt.xlabel('Year')
plt.ylabel('Value')
###Output
_____no_output_____
###Markdown
Despite an average linear return of zero, what we observe over time is an asset declining in price. The reason is pretty obvious --- on average, the years in which the asset loses money have higher balances than years where the asset gains value. Consequently, the losses are somewhat greater than the gains which, over time, leads to a loss of value. Here's a real-world example of this phenomenon. For a three year period ending October 24, 2017, United States Steel (stock symbol 'X') offers an annualized linear return of 15.9%. Seems like a terrific investment opportunity, doesn't it? Would you be surprised to learn that the actual value of the stock fell 18.3% over that three-year period? What we can conclude from these examples is that average linear return, by itself, does not provide us with the information needed for long-term investing.
###Code
symbol = 'X'
end = datetime.datetime(2017, 10, 24)
start = end-datetime.timedelta(3*365)
# get stock price data
S = pdr.data.DataReader(symbol, "yahoo", start, end)['Adj Close']
rlin = S/S.shift(1) - 1
rlog = np.log(S/S.shift(1))
print('Three year return :', 100*(S[-1]-S[0])/S[0], '%')
# plot data
plt.figure(figsize=(10,5))
plt.subplot(2,1,1)
S.plot(title=symbol)
plt.ylabel('Adjusted Close')
plt.grid()
plt.subplot(2,1,2)
rlog.plot()
plt.title('Mean Log Returns (annualized) = {0:.2f}%'.format(100*252*rlog.mean()))
plt.grid()
plt.tight_layout()
###Output
Three year return : -18.27174276977313 %
###Markdown
Compounded Log Returns. Compounded, or log, returns are defined as $$r^{log}_{t} = \log R_t = \log \frac{S_{t}}{S_{t-1}}$$ The log returns have a very useful compounding property for aggregating price changes across time $$ \log \frac{S_{t+k}}{S_{t}} = r^{log}_{t+1} + r^{log}_{t+2} + \cdots + r^{log}_{t+k}$$ If the compounded returns are statistically independent and identically distributed, then this property provides a means to aggregate returns and develop statistical price projections.
###Code
symbol = 'TSLA'
end = datetime.datetime.today().date()
start = end - datetime.timedelta(3*365)
# get stock price data
S = pdr.data.DataReader(symbol, "yahoo", start, end)['Adj Close']
rlog = np.log(S/S.shift(1))
# plot data
plt.figure(figsize=(10,5))
plt.subplot(2,1,1)
S.plot(title=symbol)
plt.ylabel('Adjusted Close')
plt.grid()
plt.subplot(2,1,2)
rlog.plot()
plt.title('Log Returns (daily)')
plt.grid()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Volatility Drag and the Relationship between Linear and Log ReturnsFor long-term financial decision making, it's important to understand the relationship between $r_t^{log}$ and $r_t^{lin}$. Algebraically, the relationships are simple.$$r^{log}_t = \log \left(1+r^{lin}_t\right)$$$$r^{lin}_t = e^{r^{log}_t} - 1$$The linear return $r_t^{lin}$ is the fraction of value that is earned from an asset in a single period. It is a direct measure of earnings. Averaged over many periods, $\bar{r}^{lin}$ gives the average fractional earnings per period. If you care about consuming the earnings from an asset and not about growth in value, then $\bar{r}^{lin}$ is the quantity of interest to you.Log return $r_t^{log}$ is the rate of growth in value of an asset over a single period. When averaged over many periods, $\bar{r}^{log}$ measures the compounded rate of growth of value. If you care about the growth in value of an asset, then $\bar{r}^{log}$ is the quantity of interest to you.The compounded rate of growth $r_t^{log}$ is generally smaller than average linear return $\bar{r}^{lin}$ due to the effects of volatility. To see this, consider an asset that has a linear return of -50% in period 1, and +100% in period 2. The average linear return would be +25%, but the compounded growth in value would be 0%.A general formula for the relationship between $\bar{r}^{log}$ and $\bar{r}^{lin}$ is derived as follows:$$\begin{align*}\bar{r}^{log} & = \frac{1}{T}\sum_{t=1}^{T} r_t^{log} \\& = \frac{1}{T}\sum_{t=1}^{T} \log\left(1+r_t^{lin}\right) \\& = \frac{1}{T}\sum_{t=1}^{T} \left(\log(1) + r_t^{lin} - \frac{1}{2} (r_t^{lin})^2 + \cdots\right) \\& = \frac{1}{T}\sum_{t=1}^{T} r_t^{lin} - \frac{1}{2}\frac{1}{T}\sum_{t=1}^{T} (r_t^{lin})^2 + \cdots \\& = \bar{r}^{lin} - \frac{1}{2}\left(\frac{1}{T}\sum_{t=1}^{T} (r_t^{lin})^2\right) + \cdots \\& = \bar{r}^{lin} - \frac{1}{2}\left((\bar{r}^{lin})^2 + \frac{1}{T}\sum_{t=1}^{T} (r_t^{lin}-\bar{r}^{lin})^2\right) + \cdots\end{align*}$$For typical values of $\bar{r}^{lin}$ and long horizons $T$, this results in a formula$$\begin{align*}\bar{r}^{log} & \approx \bar{r}^{lin} - \frac{1}{2} \left(\sigma^{lin}\right)^2\end{align*}$$where $\sigma^{lin}$ is the standard deviation of linear returns, more commonly called the volatility.The difference $- \frac{1}{2} \left(\sigma^{lin}\right)^2$ is the _volatility drag_ imposed on the compounded growth in value of an asset due to volatility in linear returns. This can be significant and a source of confusion for many investors. It's indeed possible to have a positive average linear return, but negative compounded growth. To see this, consider a \$100 investment which earns 20% on even-numbered years, and loses 18% on odd-numbered years. The average linear return is 1%, and the average log return is -0.81%.
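A quick check of that last example (a minimal sketch assuming the same alternating +20%/-18% linear returns; nothing here comes from market data):

```python
import numpy as np

rlin = np.array([0.20, -0.18])   # linear returns in the two kinds of years
rlog = np.log(1 + rlin)          # corresponding log returns

print(rlin.mean())                          # 0.01 -> +1% average linear return
print(rlog.mean())                          # about -0.0081 -> -0.81% average log return
print(rlin.mean() - 0.5 * rlin.std()**2)    # volatility-drag approximation, close to the exact mean log return
```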
###Code
symbol = 'TSLA'
end = datetime.datetime.today().date()
start = end - datetime.timedelta(3*365)
# get stock price data
S = pdr.data.DataReader(symbol, "yahoo", start, end)['Adj Close']
rlin = (S - S.shift(1))/S.shift(1)
rlog = np.log(S/S.shift(1))
# plot data
plt.figure(figsize=(10,6))
plt.subplot(3,1,1)
S.plot(title=symbol)
plt.ylabel('Adjusted Close')
plt.grid()
plt.subplot(3,1,2)
rlin.plot()
plt.title('Linear Returns (daily)')
plt.grid()
plt.tight_layout()
plt.subplot(3,1,3)
rlog.plot()
plt.title('Log Returns (daily)')
plt.grid()
plt.tight_layout()
print("Mean Linear Return (rlin) = {0:.7f}".format(rlin.mean()))
print("Linear Volatility (sigma) = {0:.7f}".format(rlin.std()))
print("Volatility Drag -0.5*sigma**2 = {0:.7f}".format(-0.5*rlin.std()**2))
print("rlin - 0.5*vol = {0:.7f}\n".format(rlin.mean() - 0.5*rlin.std()**2))
print("Mean Log Return = {0:.7f}".format(rlog.mean()))
symbols = ['AAPL','MSFT','F','XOM','GE','X','TSLA','NIO']
end = datetime.datetime.today().date()
start = end - datetime.timedelta(3*365)
rlin = []
rlog = []
sigma = []
for symbol in symbols:
# get stock price data
S = pdr.data.DataReader(symbol, "yahoo", start, end)['Adj Close']
r = (S - S.shift(1))/S.shift(1)
rlin.append(r.mean())
rlog.append((np.log(S/S.shift(1))).mean())
sigma.append(r.std())
import seaborn as sns
N = len(symbols)
idx = np.arange(N)
width = 0.2
plt.figure(figsize=(12, 6))
p0 = plt.bar(2*idx - 1.25*width, rlin, width)
p1 = plt.bar(2*idx, -0.5*np.array(sigma)**2, width, bottom=rlin)
p2 = plt.bar(2*idx + 1.25*width, rlog, width)
for k in range(0,N):
plt.plot([2*k - 1.75*width, 2*k + 0.5*width], [rlin[k], rlin[k]], 'k', lw=1)
plt.plot([2*k - 0.5*width, 2*k + 1.75*width], [rlog[k], rlog[k]], 'k', lw=1)
plt.xticks(2*idx, symbols)
plt.legend((p0[0], p1[0], p2[0]), ('rlin', '0.5*sigma**2', 'rlog'))
plt.title('Components of Linear Return')
plt.ylim(1.1*np.array(plt.ylim()))
plt.grid()
###Output
_____no_output_____ |
lab2_jupyter_http-request.ipynb | ###Markdown
HTTP Requests in REstimated time needed: **30** minutes ObjectivesAfter completing this lab you will be able to:* Understand HTTP* Handle HTTP requests and responses using R Table of Contents Overview of HTTP The httr library Overview of HTTP When the **client** uses a web page, your browser sends an **HTTP** request to the **server** where the page is hosted. The server tries to find the desired **resource**, such as the home page (index.html).If your request is successful, the server will send the resource to the client in an **HTTP response**; this includes information like the type of the **resource**, the length of the **resource**, and other information.The figure below represents the process; the circle on the left represents the client, the circle on the right represents the Web server. The table under the Web server represents a list of resources stored in the web server. In this case an HTML file, png image, and txt file. The HTTP protocol allows you to send and receive information through the web including webpages, images, and other web resources. Uniform Resource Locator:URL Uniform resource locator (URL) is the most popular way to find resources on the web. We can break the URL into four parts. scheme: this is the protocol; for this lab it will always be http:// Internet address or Base URL: this is used to find the location; here are some examples: www.ibm.com and www.gitlab.com route: the location on the web server, for example: /images/IDSNlogo.png URL parameters: parameters included in a URL, for example: ?userid=1 You may also hear the term uniform resource identifier (URI); URLs are actually a subset of URIs. Another popular term is endpoint; this is the URL of an operation provided by a Web server. Request The process can be broken into the request and response process.The request using the GET method is partially illustrated below. In the start line we have the GET method, which is an HTTP method. Also the location of the resource /index.html and the HTTP version.The Request header passes additional information with an HTTP request: When an HTTP request is made, an HTTP method is sent; this tells the server what action to perform.A list of several HTTP methods is shown below. Response The figure below represents the response; the response start line contains the version number HTTP/1.0, a status code (200) meaning success, followed by a descriptive phrase (OK).The response header contains useful meta information.Finally, we have the response body containing the requested file, an HTML document. It should be noted that some requests have headers. Some status code examples are shown in the table below; the prefix indicates the class; these are shown in yellow, with actual status codes shown in white. Check out the following link for more descriptions. The httr library `httr` is an R library that allows you to build and send HTTP requests, as well as process HTTP responses easily. We can import the package as follows (may take less than a minute to import):
###Code
library(httr)
###Output
_____no_output_____
###Markdown
You can make a GET request via the method get to [www.ibm.com](http://www.ibm.com):
###Code
url<-'https://www.ibm.com/'
response<-GET(url)
response
###Output
_____no_output_____
###Markdown
We have the response object response, which has information about the response, like the status of the request. We can view the status code using the attribute status
###Code
response$status
###Output
_____no_output_____
###Markdown
You can also check the headers of the response
###Code
response_headers <- headers(response)
response_headers
###Output
_____no_output_____
###Markdown
We can obtain the date the request was sent using the key Date
###Code
response_headers['date']
###Output
_____no_output_____
###Markdown
Content-Type indicates the type of data:
###Code
response_headers['content-type']
###Output
_____no_output_____
###Markdown
To obtain the original request, you can view it via the response object:
###Code
response$request$headers
###Output
_____no_output_____
###Markdown
**Coding Exercise:** in the code cell below, find the content-length attribute in the response header
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
###Output
_____no_output_____
###Markdown
Click here for the solution:

```R
response_headers['content-length']
```

Now, let's get the content of the HTTP response
###Code
content(response)
###Output
_____no_output_____
###Markdown
which is the IBM home page (in fact, an HTML page, which you will learn about later in this course). You can load other types of data for non-text requests like images; consider the URL of the following image:
###Code
image_url<-'https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png'
###Output
_____no_output_____
###Markdown
We can make a get request:
###Code
image_response<-GET(image_url)
###Output
_____no_output_____
###Markdown
We can look at the response header:
###Code
image_headers <- headers(image_response)
###Output
_____no_output_____
###Markdown
We can see the 'Content-Type', which is an image
###Code
image_headers['content-type']
###Output
_____no_output_____
###Markdown
The image comes back as a response object that contains the image as a bytes-like object. As a result, we must save it using a file object. First, we specify the file path and name
###Code
image <- content(image_response, "raw")
writeBin(image, "logo.png")
###Output
_____no_output_____
###Markdown
Then you should be able to find `logo.png` in the file explorer on the left **Coding Exercise:** in the code cell below, find another image URL and use the above code to request and download the image
###Code
# Find another image URL you are interested, and download the image using above example
###Output
_____no_output_____
###Markdown
Get Request with URL Parameters You can also add URL parameters to an HTTP GET request to filter resources. For example, instead of returning all users from an API, I may only want the user with id 1. To do so, I can add a URL parameter like `userid = 1` to my GET request. Let's see a GET example with URL parameters: Suppose we have a simple GET API with the base URL [http://httpbin.org/](http://httpbin.org/)
###Code
url_get <- 'http://httpbin.org/get'
###Output
_____no_output_____
###Markdown
and we want to add some URL parameters to the above GET API. To do so, we simply create a named list with parameter names and values:
###Code
query_params <- list(name = "Yan", ID = "123")
###Output
_____no_output_____
###Markdown
Then pass the list query_params to the query argument of the GET() function.It basically tells the GET API that I only want resources whose name equals `Yan` and whose id equals `123`.OK, let's make the GET request to '[http://httpbin.org/get](http://httpbin.org/get)' with the two parameters
###Code
response <- GET(url_get, query=query_params)
###Output
_____no_output_____
###Markdown
We can print out the updated URL and see the attached URL parameters.
###Code
response$request$url
###Output
_____no_output_____
###Markdown
After the base URL [http://httpbin.org/get](http://httpbin.org/get), you can see the URL parameters `name=Yan&ID=123`, separated from it by `?`. The attribute args of the response has the names and values:
###Code
content(response)$args
###Output
_____no_output_____
###Markdown
Post Requests Like a GET request, a POST is used to send data to a server, but the POST request sends the data in a request body. In order to send the POST request, we change the route in the URL to /post:
###Code
url_post <- 'http://httpbin.org/post'
###Output
_____no_output_____
###Markdown
This endpoint will expect data as a file or as a form. A form is a convenient way to configure an HTTP request to send data to a server. To make a POST request we use the POST() function; the list body is passed to the parameter body:
###Code
body<- list(course_name='Introduction to R', instructor='Yan')
response<-POST('http://httpbin.org/post', body = body)
response
###Output
_____no_output_____
###Markdown
We can see the POST request has a body stored in the fields attribute
###Code
response$request$fields
###Output
_____no_output_____ |
pythonUPVX20.ipynb | ###Markdown
Typical runtime errors
###Code
10/0          # ZeroDivisionError
4 + spam*2    # NameError: name 'spam' is not defined
'2'+ 2        # TypeError: a str and an int cannot be added
###Output
_____no_output_____
###Markdown
as well as file, connection, communication, and data-entry errors... Handling errors Suppose we ask the user to enter a number to do a calculation, but...
###Code
valor = input('Escribe el número:') # but the user enters it with letters
valor_numerico = int(valor)
resultado = valor_numerico + 3
print(resultado)
###Output
_____no_output_____
###Markdown
The try/except block
###Code
try:
    valor = input('Escribe el número:') # but the user enters it with letters
    valorNumerico = int(valor)
    resultado = valorNumerico + 3
    print(resultado)
except Exception as e:
    print('Hemos sufrido un error debido a que:',str(e))
###Output
Escribe el número:tas
Hemos sufrido un error debido a que invalid literal for int() with base 10: 'tas'
###Markdown
This solves our problem if our program is this simple, but what happens when our application has to keep doing things with the contents of the variable **valorNumerico** later on?
###Code
try:
    valor = input('Escribe el número:') # but the user enters it with letters
    valorNumerico = int(valor)
    resultado = valorNumerico + 3
print(resultado)
except:
valorNumerico = 0
print(valorNumerico + 3)
###Output
_____no_output_____
###Markdown
Be careful with very long try blocks
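The cell below demonstrates the pitfall: the line that uses `resultado` never runs, yet the program still prints a value. A narrower arrangement, sketched here (not from the original notebook), keeps only the risky conversion inside the try block:

```python
valor = 'siete'
try:
    valorNumerico = int(valor)    # only the conversion that can fail is inside the try block
except ValueError:
    valorNumerico = 0
resultado = valorNumerico + 3     # always runs; valorNumerico is guaranteed to exist here
print(resultado)
```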
###Code
try:
valor = 'siete'
valorNumerico = int(valor)
    resultado = valorNumerico + 3 # this will not run because the previous line fails
print(resultado)
except ValueError:
valorNumerico = 0
print(valorNumerico + 3)
###Output
3
###Markdown
The available error types are listed in the [python documentation](https://docs.python.org/3/library/exceptions.html#Exception) and we can act depending on the type of exception.
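The same idea extends to the file, connection and input errors mentioned earlier; a minimal sketch (the file name is made up purely for illustration):

```python
try:
    with open('datos_inexistentes.txt') as f:   # hypothetical file that does not exist
        contenido = f.read()
except FileNotFoundError as e:
    print('No se pudo abrir el archivo:', e)
    contenido = ''
```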
###Code
try:
valor = input('Introduce un valor :')
    valorNumerico = int(valor) # try appending /0 or /'dos' here to trigger the other exception branches
resultado = valorNumerico + 3
print(resultado)
except ArithmeticError:
print('Error aritmético')
except ValueError:
valorNumerico = 0
except Exception as e:
    print('Hemos sufrido un error debido a que:',str(e))
print(valorNumerico + 3)
###Output
_____no_output_____
###Markdown
The finally block
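Besides pairing with except, finally also runs when an exception is not caught at all, which makes it the usual place for cleanup; a small sketch with made-up names, not from the original notebook:

```python
def leer_primera_linea(ruta):
    f = open(ruta)
    try:
        return f.readline()
    finally:
        f.close()    # runs whether readline() returns normally or raises
```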
###Code
try:
valor = "siete"
valorNumerico = int(valor)
    resultado = valorNumerico + 3 # this will not run because the previous line fails
print(resultado)
except ValueError:
valorNumerico = 0
finally:
print(valorNumerico + 3)
###Output
_____no_output_____ |
superseded/One Hot Encoder.ipynb | ###Markdown
OLS Analysis Using full PSU dataset
###Code
#Import required packages
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
import numpy as np
import datetime
import seaborn as sns
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.seasonal import seasonal_decompose
def format_date(df_date):
"""
Splits Meeting Times and Dates into datetime objects where applicable using regex.
"""
df_date['Days'] = df_date['Meeting_Times'].str.extract('([^\s]+)', expand=True)
df_date['Start_Date'] = df_date['Meeting_Dates'].str.extract('([^\s]+)', expand=True)
df_date['Year'] = df_date['Term'].astype(str).str.slice(0,4)
df_date['Quarter'] = df_date['Term'].astype(str).str.slice(4,6)
df_date['Term_Date'] = pd.to_datetime(df_date['Year'] + df_date['Quarter'], format='%Y%m')
#df_date['Start_Month'] = pd.to_datetime(df_date['Year'] + df_date['Start_Date'], format='%Y%b')
df_date['End_Date'] = df_date['Meeting_Dates'].str.extract('(?<=-)(.*)(?= )', expand=True)
#df_date['End_Month'] = pd.to_datetime(df_date['End_Date'], format='%b')
df_date['Start_Time'] = df_date['Meeting_Times'].str.extract('(?<= )(.*)(?=-)', expand=True)
df_date['Start_Time'] = pd.to_datetime(df_date['Start_Time'], format='%H%M')
df_date['End_Time'] = df_date['Meeting_Times'].str.extract('((?<=-).*$)', expand=True)
df_date['End_Time'] = pd.to_datetime(df_date['End_Time'], format='%H%M')
df_date['Duration_Hr'] = ((df_date['End_Time'] - df_date['Start_Time']).dt.seconds)/3600
#df_date = df_date.set_index(pd.DatetimeIndex(df_date['Term_Date']))
return df_date
def format_xlist(df_xl):
"""
revises % capacity calculations by using Max Enrollment instead of room capacity.
"""
df_xl['Cap_Diff'] = np.where(df_xl['Xlst'] != '',
df_xl['Max_Enrl'].astype(int) - df_xl['Actual_Enrl'].astype(int),
df_xl['Room_Capacity'].astype(int) - df_xl['Actual_Enrl'].astype(int))
df_xl = df_xl.loc[df_xl['Room_Capacity'].astype(int) < 999]
return df_xl
"""
Main program control flow.
"""
#pd.set_option('display.max_rows', None)
#pd.set_option('display.max_columns', None)
df = pd.read_csv('data/PSU_master_classroom.csv', dtype={'Schedule': object, 'Schedule Desc': object})
df = df.fillna('')
df = format_date(df)
# Avoid classes that only occur on a single day
df = df.loc[df['Start_Date'] != df['End_Date']]
df = df.loc[df['Online Instruct Method'] != 'Fully Online']
# Calculate number of days per week and treat Sunday condition
df['Days_Per_Week'] = df['Days'].str.len()
df['Room_Capacity'] = df['Room_Capacity'].apply(lambda x: x if (x != 'No Data Available') else 0)
df_cl = format_xlist(df)
# Map and Enumerate
from sklearn.preprocessing import LabelEncoder
cat_columns = ['Dept', 'Class', 'Meeting_Times', 'ROOM' ]
for column in cat_columns:
col_mapping = {label: idx for idx, label in enumerate(np.unique(df_cl['{0}'.format(column)]))}
df_cl['{0}'.format(column)] = df_cl['{0}'.format(column)].map(col_mapping)
X = df_cl[['Dept', 'Term', 'Class', 'Meeting_Times', 'ROOM']].values
df_cl_le = LabelEncoder()
X[:, 0] = df_cl_le.fit_transform(X[:, 0])
X
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(categorical_features=[0])  # categorical_features was removed in newer scikit-learn; see the ColumnTransformer note below
ohe.fit_transform(X).toarray()
pd.get_dummies(df[['Dept', 'Term', 'Class', 'Meeting_Times', 'ROOM']])
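# Note (not part of the original analysis): newer scikit-learn releases dropped the
# categorical_features argument used above. A rough equivalent with ColumnTransformer:
from sklearn.compose import ColumnTransformer
ct = ColumnTransformer([('onehot', OneHotEncoder(), [0])], remainder='passthrough')
ct.fit_transform(X)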
###Output
_____no_output_____ |
Training on Different Image Types/Ensable Binary.ipynb | ###Markdown
**Download dependencies**
###Code
!pip3 install sklearn matplotlib GPUtil
!pip3 install torch==1.3.1+cu92 torchvision==0.4.2+cu92 -f https://download.pytorch.org/whl/torch_stable.html
###Output
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.3.1+cu92
  Downloading https://download.pytorch.org/whl/cu92/torch-1.3.1%2Bcu92-cp36-cp36m-linux_x86_64.whl (621.4MB)
Collecting torchvision==0.4.2+cu92
  Downloading https://download.pytorch.org/whl/cu92/torchvision-0.4.2%2Bcu92-cp36-cp36m-linux_x86_64.whl (10.1MB)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.3.1+cu92) (1.17.3)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision==0.4.2+cu92) (1.13.0)
Collecting pillow>=4.1.1
  Downloading https://files.pythonhosted.org/packages/10/5c/0e94e689de2476c4c5e644a3bd223a1c1b9e2bdb7c510191750be74fa786/Pillow-6.2.1-cp36-cp36m-manylinux1_x86_64.whl (2.1MB)
Installing collected packages: torch, pillow, torchvision
Successfully installed pillow-6.2.1 torch-1.3.1+cu92 torchvision-0.4.2+cu92
###Markdown
**Download Data** Mount my google drive, where I stored the dataset.
###Code
try:
from google.colab import drive
drive.mount('/content/drive')
except Exception as e:
print(e)
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly
Enter your authorization code:
··········
Mounted at /content/drive
###Markdown
In order to acquire the dataset please navigate to: https://ieee-dataport.org/documents/cervigram-image-dataset Unzip the dataset into the folder "dataset". For your environment, please adjust the paths accordingly.
###Code
!rm -vrf "dataset"
!mkdir "dataset"
!cp -r "/content/drive/My Drive/Studiu doctorat leziuni cervicale/cervigram-image-dataset-v2.zip" "dataset/cervigram-image-dataset-v2.zip"
# !cp -r "cervigram-image-dataset-v2.zip" "dataset/cervigram-image-dataset-v2.zip"
!unzip "dataset/cervigram-image-dataset-v2.zip" -d "dataset"
###Output
removed 'dataset/data/train/3/20160406014/20160406155835.jpg'
removed 'dataset/data/train/3/20160406014/20160406160345.jpg'
removed 'dataset/data/train/3/20160406014/20160406160153.jpg'
removed 'dataset/data/train/3/20160406014/20160406160152.jpg'
removed 'dataset/data/train/3/20160406014/20160406160125.jpg'
removed 'dataset/data/train/3/20160406014/20160406160044.jpg'
removed 'dataset/data/train/3/20160406014/20160406160059.jpg'
removed directory 'dataset/data/train/3/20160406014'
removed 'dataset/data/train/3/20150930010/20150930160649.jpg'
removed 'dataset/data/train/3/20150930010/20150930161006.jpg'
removed 'dataset/data/train/3/20150930010/20150930160926.jpg'
removed 'dataset/data/train/3/20150930010/20150930161002.jpg'
removed 'dataset/data/train/3/20150930010/20150930161104.jpg'
removed 'dataset/data/train/3/20150930010/20150930160831.jpg'
removed 'dataset/data/train/3/20150930010/20150930160900.jpg'
removed directory 'dataset/data/train/3/20150930010'
removed 'dataset/data/train/3/20160427007/20160427151938.jpg'
removed 'dataset/data/train/3/20160427007/20160427151800.jpg'
removed 'dataset/data/train/3/20160427007/20160427152036.jpg'
removed 'dataset/data/train/3/20160427007/20160427152214.jpg'
removed 'dataset/data/train/3/20160427007/20160427152113.jpg'
removed 'dataset/data/train/3/20160427007/20160427152003.jpg'
removed 'dataset/data/train/3/20160427007/20160427152112.jpg'
removed directory 'dataset/data/train/3/20160427007'
removed 'dataset/data/train/3/20160427004/20160427143000.jpg'
removed 'dataset/data/train/3/20160427004/20160427143301.jpg'
removed 'dataset/data/train/3/20160427004/20160427143137.jpg'
removed 'dataset/data/train/3/20160427004/20160427143305.jpg'
removed 'dataset/data/train/3/20160427004/20160427143358.jpg'
removed 'dataset/data/train/3/20160427004/20160427143202.jpg'
removed 'dataset/data/train/3/20160427004/20160427143231.jpg'
removed directory 'dataset/data/train/3/20160427004'
removed 'dataset/data/train/3/115924160/115924160Image7.jpg'
removed 'dataset/data/train/3/115924160/115924160Image0.jpg'
removed 'dataset/data/train/3/115924160/115924160Image8.jpg'
removed 'dataset/data/train/3/115924160/115924160Image1.jpg'
removed 'dataset/data/train/3/115924160/115924160Image5.jpg'
removed 'dataset/data/train/3/115924160/115924160Image2.jpg'
removed 'dataset/data/train/3/115924160/115924160Image9.jpg'
removed directory 'dataset/data/train/3/115924160'
removed 'dataset/data/train/3/20160612004/20160612164618.jpg'
removed 'dataset/data/train/3/20160612004/20160612164615.jpg'
removed 'dataset/data/train/3/20160612004/20160612164454.jpg'
removed 'dataset/data/train/3/20160612004/20160612164314.jpg'
removed 'dataset/data/train/3/20160612004/20160612164707.jpg'
removed 'dataset/data/train/3/20160612004/20160612164541.jpg'
removed 'dataset/data/train/3/20160612004/20160612164608.jpg'
removed directory 'dataset/data/train/3/20160612004'
removed 'dataset/data/train/3/20160323017/20160323152231.jpg'
removed 'dataset/data/train/3/20160323017/20160323151927.jpg'
removed 'dataset/data/train/3/20160323017/20160323152323.jpg'
removed 'dataset/data/train/3/20160323017/20160323152131.jpg'
removed 'dataset/data/train/3/20160323017/20160323152105.jpg'
removed 'dataset/data/train/3/20160323017/20160323152201.jpg'
removed 'dataset/data/train/3/20160323017/20160323152240.jpg'
removed directory 'dataset/data/train/3/20160323017'
removed 'dataset/data/train/3/101450780/101450780Image0.jpg'
removed 'dataset/data/train/3/101450780/101450780Image4.jpg'
removed 'dataset/data/train/3/101450780/101450780Image2.jpg'
removed 'dataset/data/train/3/101450780/101450780Image9.jpg'
removed 'dataset/data/train/3/101450780/101450780Image3.jpg'
removed 'dataset/data/train/3/101450780/101450780Image6.jpg'
removed 'dataset/data/train/3/101450780/101450780Image1.jpg'
removed directory 'dataset/data/train/3/101450780'
removed 'dataset/data/train/3/20160418009/20160418154803.jpg'
removed 'dataset/data/train/3/20160418009/20160418154437.jpg'
removed 'dataset/data/train/3/20160418009/20160418154732.jpg'
removed 'dataset/data/train/3/20160418009/20160418154833.jpg'
removed 'dataset/data/train/3/20160418009/20160418154633.jpg'
removed 'dataset/data/train/3/20160418009/20160418154810.jpg'
removed 'dataset/data/train/3/20160418009/20160418154659.jpg'
removed directory 'dataset/data/train/3/20160418009'
removed 'dataset/data/train/3/103336120/103336120Image5.jpg'
removed 'dataset/data/train/3/103336120/103336120Image1.jpg'
removed 'dataset/data/train/3/103336120/103336120Image11.jpg'
removed 'dataset/data/train/3/103336120/103336120Image6.jpg'
removed 'dataset/data/train/3/103336120/103336120Image10.jpg'
removed 'dataset/data/train/3/103336120/103336120Image12.jpg'
removed 'dataset/data/train/3/103336120/103336120Image7.jpg'
removed directory 'dataset/data/train/3/103336120'
removed 'dataset/data/train/3/20160617002/20160617152115.jpg'
removed 'dataset/data/train/3/20160617002/20160617152536.jpg'
removed 'dataset/data/train/3/20160617002/20160617152628.jpg'
removed 'dataset/data/train/3/20160617002/20160617152502.jpg'
removed 'dataset/data/train/3/20160617002/20160617152625.jpg'
removed 'dataset/data/train/3/20160617002/20160617152452.jpg'
removed 'dataset/data/train/3/20160617002/20160617152854.jpg'
removed directory 'dataset/data/train/3/20160617002'
removed 'dataset/data/train/3/20160303007/20160303173758.jpg'
removed 'dataset/data/train/3/20160303007/20160303173742.jpg'
removed 'dataset/data/train/3/20160303007/20160303173514.jpg'
removed 'dataset/data/train/3/20160303007/20160303173829.jpg'
removed 'dataset/data/train/3/20160303007/20160303173705.jpg'
removed 'dataset/data/train/3/20160303007/20160303173853.jpg'
removed 'dataset/data/train/3/20160303007/20160303173826.jpg'
removed directory 'dataset/data/train/3/20160303007'
removed 'dataset/data/train/3/090200510/090200510Image11.jpg'
removed 'dataset/data/train/3/090200510/090200510Image3.jpg'
removed 'dataset/data/train/3/090200510/090200510Image8.jpg'
removed 'dataset/data/train/3/090200510/090200510Image0.jpg'
removed 'dataset/data/train/3/090200510/090200510Image2.jpg'
removed 'dataset/data/train/3/090200510/090200510Image10.jpg'
removed 'dataset/data/train/3/090200510/090200510Image9.jpg'
removed directory 'dataset/data/train/3/090200510'
removed 'dataset/data/train/3/20151020004/20151020160653.jpg'
removed 'dataset/data/train/3/20151020004/20151020160928.jpg'
removed 'dataset/data/train/3/20151020004/20151020160843.jpg'
removed 'dataset/data/train/3/20151020004/20151020160903.jpg'
removed 'dataset/data/train/3/20151020004/20151020161326.jpg'
removed 'dataset/data/train/3/20151020004/20151020161110.jpg'
removed 'dataset/data/train/3/20151020004/20151020161109.jpg'
removed directory 'dataset/data/train/3/20151020004'
removed 'dataset/data/train/3/150023453/150023453Image0.jpg'
removed 'dataset/data/train/3/150023453/150023453Image10.jpg'
removed 'dataset/data/train/3/150023453/150023453Image6.jpg'
removed 'dataset/data/train/3/150023453/150023453Image4.jpg'
removed 'dataset/data/train/3/150023453/150023453Image7.jpg'
removed 'dataset/data/train/3/150023453/150023453Image2.jpg'
removed 'dataset/data/train/3/150023453/150023453Image5.jpg'
removed directory 'dataset/data/train/3/150023453'
removed 'dataset/data/train/3/20150914004/20150914160239.jpg'
removed 'dataset/data/train/3/20150914004/20150914155948.jpg'
removed 'dataset/data/train/3/20150914004/20150914160017.jpg'
removed 'dataset/data/train/3/20150914004/20150914155806.jpg'
removed 'dataset/data/train/3/20150914004/20150914160049.jpg'
removed 'dataset/data/train/3/20150914004/20150914160123.jpg'
removed 'dataset/data/train/3/20150914004/20150914160121.jpg'
removed directory 'dataset/data/train/3/20150914004'
removed 'dataset/data/train/3/20160606004/20160606160616.jpg'
removed 'dataset/data/train/3/20160606004/20160606160324.jpg'
removed 'dataset/data/train/3/20160606004/20160606160545.jpg'
removed 'dataset/data/train/3/20160606004/20160606160448.jpg'
removed 'dataset/data/train/3/20160606004/20160606160614.jpg'
removed 'dataset/data/train/3/20160606004/20160606160700.jpg'
removed 'dataset/data/train/3/20160606004/20160606160514.jpg'
removed directory 'dataset/data/train/3/20160606004'
removed 'dataset/data/train/3/114204650/114204650Image8.jpg'
removed 'dataset/data/train/3/114204650/114204650Image10.jpg'
removed 'dataset/data/train/3/114204650/114204650Image4.jpg'
removed 'dataset/data/train/3/114204650/114204650Image15.jpg'
removed 'dataset/data/train/3/114204650/114204650Image9.jpg'
removed 'dataset/data/train/3/114204650/114204650Image14.jpg'
removed 'dataset/data/train/3/114204650/114204650Image13.jpg'
removed directory 'dataset/data/train/3/114204650'
removed 'dataset/data/train/3/145141110/145141110Image2.jpg'
removed 'dataset/data/train/3/145141110/145141110Image9.jpg'
removed 'dataset/data/train/3/145141110/145141110Image0.jpg'
removed 'dataset/data/train/3/145141110/145141110Image3.jpg'
removed 'dataset/data/train/3/145141110/145141110Image6.jpg'
removed 'dataset/data/train/3/145141110/145141110Image5.jpg'
removed 'dataset/data/train/3/145141110/145141110Image10.jpg'
removed directory 'dataset/data/train/3/145141110'
removed 'dataset/data/train/3/093518297/093518297Image0.jpg'
removed 'dataset/data/train/3/093518297/093518297Image7.jpg'
removed 'dataset/data/train/3/093518297/093518297Image4.jpg'
removed 'dataset/data/train/3/093518297/093518297Image6.jpg'
removed 'dataset/data/train/3/093518297/093518297Image2.jpg'
removed 'dataset/data/train/3/093518297/093518297Image5.jpg'
removed 'dataset/data/train/3/093518297/093518297Image8.jpg'
removed directory 'dataset/data/train/3/093518297'
removed directory 'dataset/data/train/3'
removed 'dataset/data/train/1/20160427005/20160427144901.jpg'
removed 'dataset/data/train/1/20160427005/20160427144823.jpg'
removed 'dataset/data/train/1/20160427005/20160427144935.jpg'
removed 'dataset/data/train/1/20160427005/20160427144639.jpg'
removed 'dataset/data/train/1/20160427005/20160427144839.jpg'
removed 'dataset/data/train/1/20160427005/20160427144931.jpg'
removed 'dataset/data/train/1/20160427005/20160427145110.jpg'
removed directory 'dataset/data/train/1/20160427005'
removed 'dataset/data/train/1/20151020001/20151020111129.jpg'
removed 'dataset/data/train/1/20151020001/20151020111438.jpg'
removed 'dataset/data/train/1/20151020001/20151020111156.jpg'
removed 'dataset/data/train/1/20151020001/20151020111224.jpg'
removed 'dataset/data/train/1/20151020001/20151020110941.jpg'
removed 'dataset/data/train/1/20151020001/20151020111245.jpg'
removed 'dataset/data/train/1/20151020001/20151020111248.jpg'
removed directory 'dataset/data/train/1/20151020001'
removed 'dataset/data/train/1/20160201002/20160201144221.jpg'
removed 'dataset/data/train/1/20160201002/20160201144143.jpg'
removed 'dataset/data/train/1/20160201002/20160201144316.jpg'
removed 'dataset/data/train/1/20160201002/20160201144045.jpg'
removed 'dataset/data/train/1/20160201002/20160201143900.jpg'
removed 'dataset/data/train/1/20160201002/20160201144225.jpg'
removed 'dataset/data/train/1/20160201002/20160201144110.jpg'
removed directory 'dataset/data/train/1/20160201002'
removed 'dataset/data/train/1/20151209008/20151209155546.jpg'
removed 'dataset/data/train/1/20151209008/20151209155447.jpg'
removed 'dataset/data/train/1/20151209008/20151209155516.jpg'
removed 'dataset/data/train/1/20151209008/20151209155530.jpg'
removed 'dataset/data/train/1/20151209008/20151209155256.jpg'
removed 'dataset/data/train/1/20151209008/20151209155549.jpg'
removed 'dataset/data/train/1/20151209008/20151209155644.jpg'
removed directory 'dataset/data/train/1/20151209008'
removed 'dataset/data/train/1/20151201001/20151201111819.jpg'
removed 'dataset/data/train/1/20151201001/20151201112027.jpg'
removed 'dataset/data/train/1/20151201001/20151201111648.jpg'
removed 'dataset/data/train/1/20151201001/20151201111817.jpg'
removed 'dataset/data/train/1/20151201001/20151201111746.jpg'
removed 'dataset/data/train/1/20151201001/20151201111528.jpg'
removed 'dataset/data/train/1/20151201001/20151201111717.jpg'
removed directory 'dataset/data/train/1/20151201001'
removed 'dataset/data/train/1/100435333/100435333Image8.jpg'
removed 'dataset/data/train/1/100435333/100435333Image5.jpg'
removed 'dataset/data/train/1/100435333/100435333Image2.jpg'
removed 'dataset/data/train/1/100435333/100435333Image0.jpg'
removed 'dataset/data/train/1/100435333/100435333Image4.jpg'
removed 'dataset/data/train/1/100435333/100435333Image9.jpg'
removed 'dataset/data/train/1/100435333/100435333Image6.jpg'
removed directory 'dataset/data/train/1/100435333'
removed 'dataset/data/train/1/152815657/152815657Image8.jpg'
removed 'dataset/data/train/1/152815657/152815657Image3.jpg'
removed 'dataset/data/train/1/152815657/152815657Image5.jpg'
removed 'dataset/data/train/1/152815657/152815657Image2.jpg'
removed 'dataset/data/train/1/152815657/152815657Image0.jpg'
removed 'dataset/data/train/1/152815657/152815657Image6.jpg'
removed 'dataset/data/train/1/152815657/152815657Image7.jpg'
removed directory 'dataset/data/train/1/152815657'
removed 'dataset/data/train/1/161441187/161441187Image0.jpg'
removed 'dataset/data/train/1/161441187/161441187Image9.jpg'
removed 'dataset/data/train/1/161441187/161441187Image2.jpg'
removed 'dataset/data/train/1/161441187/161441187Image5.jpg'
removed 'dataset/data/train/1/161441187/161441187Image10.jpg'
removed 'dataset/data/train/1/161441187/161441187Image6.jpg'
removed 'dataset/data/train/1/161441187/161441187Image8.jpg'
removed directory 'dataset/data/train/1/161441187'
removed 'dataset/data/train/1/145030457/145030457Image7.jpg'
removed 'dataset/data/train/1/145030457/145030457Image3.jpg'
removed 'dataset/data/train/1/145030457/145030457Image6.jpg'
removed 'dataset/data/train/1/145030457/145030457Image5.jpg'
removed 'dataset/data/train/1/145030457/145030457Image0.jpg'
removed 'dataset/data/train/1/145030457/145030457Image2.jpg'
removed 'dataset/data/train/1/145030457/145030457Image4.jpg'
removed directory 'dataset/data/train/1/145030457'
removed 'dataset/data/train/1/20150909004/20150909145813.jpg'
removed 'dataset/data/train/1/20150909004/20150909145844.jpg'
removed 'dataset/data/train/1/20150909004/20150909150017.jpg'
removed 'dataset/data/train/1/20150909004/20150909145523.jpg'
removed 'dataset/data/train/1/20150909004/20150909145712.jpg'
removed 'dataset/data/train/1/20150909004/20150909145642.jpg'
removed 'dataset/data/train/1/20150909004/20150909145742.jpg'
removed directory 'dataset/data/train/1/20150909004'
removed 'dataset/data/train/1/20151210003/20151210152114.jpg'
removed 'dataset/data/train/1/20151210003/20151210151935.jpg'
removed 'dataset/data/train/1/20151210003/20151210152305.jpg'
removed 'dataset/data/train/1/20151210003/20151210152139.jpg'
removed 'dataset/data/train/1/20151210003/20151210152215.jpg'
removed 'dataset/data/train/1/20151210003/20151210152250.jpg'
removed 'dataset/data/train/1/20151210003/20151210152629.jpg'
removed directory 'dataset/data/train/1/20151210003'
removed 'dataset/data/train/1/161811413/161811413Image3.jpg'
removed 'dataset/data/train/1/161811413/161811413Image2.jpg'
removed 'dataset/data/train/1/161811413/161811413Image5.jpg'
removed 'dataset/data/train/1/161811413/161811413Image4.jpg'
removed 'dataset/data/train/1/161811413/161811413Image7.jpg'
removed 'dataset/data/train/1/161811413/161811413Image0.jpg'
removed 'dataset/data/train/1/161811413/161811413Image6.jpg'
removed directory 'dataset/data/train/1/161811413'
removed 'dataset/data/train/1/144111257/144111257Image2.jpg'
removed 'dataset/data/train/1/144111257/144111257Image4.jpg'
removed 'dataset/data/train/1/144111257/144111257Image54.jpg'
removed 'dataset/data/train/1/144111257/144111257Image7.jpg'
removed 'dataset/data/train/1/144111257/144111257Image46.jpg'
removed 'dataset/data/train/1/144111257/144111257Image0.jpg'
removed 'dataset/data/train/1/144111257/144111257Image3.jpg'
removed directory 'dataset/data/train/1/144111257'
removed 'dataset/data/train/1/20160622011/20160622160126.jpg'
removed 'dataset/data/train/1/20160622011/20160622160056.jpg'
removed 'dataset/data/train/1/20160622011/20160622160308.jpg'
removed 'dataset/data/train/1/20160622011/20160622160227.jpg'
removed 'dataset/data/train/1/20160622011/20160622160156.jpg'
removed 'dataset/data/train/1/20160622011/20160622155936.jpg'
removed 'dataset/data/train/1/20160622011/20160622160229.jpg'
removed directory 'dataset/data/train/1/20160622011'
removed 'dataset/data/train/1/150501227/150501227Image3.jpg'
removed 'dataset/data/train/1/150501227/150501227Image666.jpg'
removed 'dataset/data/train/1/150501227/150501227Image1.jpg'
removed 'dataset/data/train/1/150501227/150501227Image4.jpg'
removed 'dataset/data/train/1/150501227/150501227Image5.jpg'
removed 'dataset/data/train/1/150501227/150501227Image0.jpg'
removed 'dataset/data/train/1/150501227/150501227Image74.jpg'
removed directory 'dataset/data/train/1/150501227'
removed 'dataset/data/train/1/153524690/153524690Image7.jpg'
removed 'dataset/data/train/1/153524690/153524690Image3.jpg'
removed 'dataset/data/train/1/153524690/153524690Image0.jpg'
removed 'dataset/data/train/1/153524690/153524690Image84.jpg'
removed 'dataset/data/train/1/153524690/153524690Image2.jpg'
removed 'dataset/data/train/1/153524690/153524690Image44.jpg'
removed 'dataset/data/train/1/153524690/153524690Image6.jpg'
removed directory 'dataset/data/train/1/153524690'
removed 'dataset/data/train/1/150521730/150521730Image9.jpg'
removed 'dataset/data/train/1/150521730/150521730Image7.jpg'
removed 'dataset/data/train/1/150521730/150521730Image2.jpg'
removed 'dataset/data/train/1/150521730/150521730Image6.jpg'
removed 'dataset/data/train/1/150521730/150521730Image5.jpg'
removed 'dataset/data/train/1/150521730/150521730Image4.jpg'
removed 'dataset/data/train/1/150521730/150521730Image0.jpg'
removed directory 'dataset/data/train/1/150521730'
removed 'dataset/data/train/1/143855533/143855533Image6.jpg'
removed 'dataset/data/train/1/143855533/143855533Image7.jpg'
removed 'dataset/data/train/1/143855533/143855533Image4.jpg'
removed 'dataset/data/train/1/143855533/143855533Image5.jpg'
removed 'dataset/data/train/1/143855533/143855533Image2.jpg'
removed 'dataset/data/train/1/143855533/143855533Image0.jpg'
removed 'dataset/data/train/1/143855533/143855533Image3.jpg'
removed directory 'dataset/data/train/1/143855533'
removed 'dataset/data/train/1/20150930005/20150930144716.jpg'
removed 'dataset/data/train/1/20150930005/20150930144528.jpg'
removed 'dataset/data/train/1/20150930005/20150930144825.jpg'
removed 'dataset/data/train/1/20150930005/20150930144759.jpg'
removed 'dataset/data/train/1/20150930005/20150930144648.jpg'
removed 'dataset/data/train/1/20150930005/20150930144830.jpg'
removed 'dataset/data/train/1/20150930005/20150930144952.jpg'
removed directory 'dataset/data/train/1/20150930005'
removed 'dataset/data/train/1/145618193/145618193Image4.jpg'
removed 'dataset/data/train/1/145618193/145618193Image2.jpg'
removed 'dataset/data/train/1/145618193/145618193Image0.jpg'
removed 'dataset/data/train/1/145618193/145618193Image3.jpg'
removed 'dataset/data/train/1/145618193/145618193Image5.jpg'
removed 'dataset/data/train/1/145618193/145618193Image6.jpg'
removed 'dataset/data/train/1/145618193/145618193Image7.jpg'
removed directory 'dataset/data/train/1/145618193'
removed 'dataset/data/train/1/115033773/115033773Image6.jpg'
removed 'dataset/data/train/1/115033773/115033773Image5.jpg'
removed 'dataset/data/train/1/115033773/115033773Image2.jpg'
removed 'dataset/data/train/1/115033773/115033773Image0.jpg'
removed 'dataset/data/train/1/115033773/115033773Image7.jpg'
removed 'dataset/data/train/1/115033773/115033773Image8.jpg'
removed 'dataset/data/train/1/115033773/115033773Image4.jpg'
removed directory 'dataset/data/train/1/115033773'
removed 'dataset/data/train/1/151423737/151423737Image2.jpg'
removed 'dataset/data/train/1/151423737/151423737Image5.jpg'
removed 'dataset/data/train/1/151423737/151423737Image0.jpg'
removed 'dataset/data/train/1/151423737/151423737Image6.jpg'
removed 'dataset/data/train/1/151423737/151423737Image9.jpg'
removed 'dataset/data/train/1/151423737/151423737Image4.jpg'
removed 'dataset/data/train/1/151423737/151423737Image3.jpg'
removed directory 'dataset/data/train/1/151423737'
removed 'dataset/data/train/1/20160810006/20160810145705.jpg'
removed 'dataset/data/train/1/20160810006/20160810145547.jpg'
removed 'dataset/data/train/1/20160810006/20160810145814.jpg'
removed 'dataset/data/train/1/20160810006/20160810145846.jpg'
removed 'dataset/data/train/1/20160810006/20160810145736.jpg'
removed 'dataset/data/train/1/20160810006/20160810145840.jpg'
removed 'dataset/data/train/1/20160810006/20160810150000.jpg'
removed directory 'dataset/data/train/1/20160810006'
removed 'dataset/data/train/1/20151116002/20151116145338.jpg'
removed 'dataset/data/train/1/20151116002/20151116145508.jpg'
removed 'dataset/data/train/1/20151116002/20151116145307.jpg'
removed 'dataset/data/train/1/20151116002/20151116145408.jpg'
removed 'dataset/data/train/1/20151116002/20151116145407.jpg'
removed 'dataset/data/train/1/20151116002/20151116145238.jpg'
removed 'dataset/data/train/1/20151116002/20151116145120.jpg'
removed directory 'dataset/data/train/1/20151116002'
removed 'dataset/data/train/1/20160321005/20160321153208.jpg'
removed 'dataset/data/train/1/20160321005/20160321152549.jpg'
removed 'dataset/data/train/1/20160321005/20160321152731.jpg'
removed 'dataset/data/train/1/20160321005/20160321152803.jpg'
removed 'dataset/data/train/1/20160321005/20160321152840.jpg'
removed 'dataset/data/train/1/20160321005/20160321152911.jpg'
removed 'dataset/data/train/1/20160321005/20160321152915.jpg'
removed directory 'dataset/data/train/1/20160321005'
removed 'dataset/data/train/1/090450340/090450340Image6.jpg'
removed 'dataset/data/train/1/090450340/090450340Image4.jpg'
removed 'dataset/data/train/1/090450340/090450340Image2.jpg'
removed 'dataset/data/train/1/090450340/090450340Image0.jpg'
removed 'dataset/data/train/1/090450340/090450340Image5.jpg'
removed 'dataset/data/train/1/090450340/090450340Image7.jpg'
removed 'dataset/data/train/1/090450340/090450340Image1.jpg'
removed directory 'dataset/data/train/1/090450340'
removed 'dataset/data/train/1/20160225003/20160225105518.jpg'
removed 'dataset/data/train/1/20160225003/20160225105538.jpg'
removed 'dataset/data/train/1/20160225003/20160225105604.jpg'
removed 'dataset/data/train/1/20160225003/20160225105706.jpg'
removed 'dataset/data/train/1/20160225003/20160225105320.jpg'
removed 'dataset/data/train/1/20160225003/20160225105624.jpg'
removed 'dataset/data/train/1/20160225003/20160225105626.jpg'
removed directory 'dataset/data/train/1/20160225003'
removed 'dataset/data/train/1/20160517002/20160517152102.jpg'
removed 'dataset/data/train/1/20160517002/20160517152029.jpg'
removed 'dataset/data/train/1/20160517002/20160517152129.jpg'
removed 'dataset/data/train/1/20160517002/20160517152141.jpg'
removed 'dataset/data/train/1/20160517002/20160517152250.jpg'
removed 'dataset/data/train/1/20160517002/20160517152148.jpg'
removed 'dataset/data/train/1/20160517002/20160517151843.jpg'
removed directory 'dataset/data/train/1/20160517002'
removed 'dataset/data/train/1/20160824004/20160824110211.jpg'
removed 'dataset/data/train/1/20160824004/20160824110140.jpg'
removed 'dataset/data/train/1/20160824004/20160824105919.jpg'
removed 'dataset/data/train/1/20160824004/20160824110043.jpg'
removed 'dataset/data/train/1/20160824004/20160824110232.jpg'
removed 'dataset/data/train/1/20160824004/20160824110114.jpg'
removed 'dataset/data/train/1/20160824004/20160824110215.jpg'
removed directory 'dataset/data/train/1/20160824004'
removed 'dataset/data/train/1/090614767/090614767Image0.jpg'
removed 'dataset/data/train/1/090614767/090614767Image121.jpg'
removed 'dataset/data/train/1/090614767/090614767Image4.jpg'
removed 'dataset/data/train/1/090614767/090614767Image75.jpg'
removed 'dataset/data/train/1/090614767/090614767Image5.jpg'
removed 'dataset/data/train/1/090614767/090614767Image10.jpg'
removed 'dataset/data/train/1/090614767/090614767Image8.jpg'
removed directory 'dataset/data/train/1/090614767'
removed 'dataset/data/train/1/103022110/103022110Image9.jpg'
removed 'dataset/data/train/1/103022110/103022110Image4.jpg'
removed 'dataset/data/train/1/103022110/103022110Image0.jpg'
removed 'dataset/data/train/1/103022110/103022110Image3.jpg'
removed 'dataset/data/train/1/103022110/103022110Image1.jpg'
removed 'dataset/data/train/1/103022110/103022110Image6.jpg'
removed 'dataset/data/train/1/103022110/103022110Image7.jpg'
removed directory 'dataset/data/train/1/103022110'
removed 'dataset/data/train/1/20151112001/20151112092016.jpg'
removed 'dataset/data/train/1/20151112001/20151112092139.jpg'
removed 'dataset/data/train/1/20151112001/20151112091946.jpg'
removed 'dataset/data/train/1/20151112001/20151112091716.jpg'
removed 'dataset/data/train/1/20151112001/20151112091846.jpg'
removed 'dataset/data/train/1/20151112001/20151112092018.jpg'
removed 'dataset/data/train/1/20151112001/20151112091907.jpg'
removed directory 'dataset/data/train/1/20151112001'
removed 'dataset/data/train/1/151849783/151849783Image0.jpg'
removed 'dataset/data/train/1/151849783/151849783Image78.jpg'
removed 'dataset/data/train/1/151849783/151849783Image54.jpg'
removed 'dataset/data/train/1/151849783/151849783Image6.jpg'
removed 'dataset/data/train/1/151849783/151849783Image4.jpg'
removed 'dataset/data/train/1/151849783/151849783Image2.jpg'
removed 'dataset/data/train/1/151849783/151849783Image3.jpg'
removed directory 'dataset/data/train/1/151849783'
removed 'dataset/data/train/1/20151118004/20151118154458.jpg'
removed 'dataset/data/train/1/20151118004/20151118154416.jpg'
removed 'dataset/data/train/1/20151118004/20151118154459.jpg'
removed 'dataset/data/train/1/20151118004/20151118154557.jpg'
removed 'dataset/data/train/1/20151118004/20151118154201.jpg'
removed 'dataset/data/train/1/20151118004/20151118154437.jpg'
removed 'dataset/data/train/1/20151118004/20151118154344.jpg'
removed directory 'dataset/data/train/1/20151118004'
removed 'dataset/data/train/1/20160425007/20160425155828.jpg'
removed 'dataset/data/train/1/20160425007/20160425155759.jpg'
removed 'dataset/data/train/1/20160425007/20160425155901.jpg'
removed 'dataset/data/train/1/20160425007/20160425155928.jpg'
removed 'dataset/data/train/1/20160425007/20160425155926.jpg'
removed 'dataset/data/train/1/20160425007/20160425160013.jpg'
removed 'dataset/data/train/1/20160425007/20160425155644.jpg'
removed directory 'dataset/data/train/1/20160425007'
removed 'dataset/data/train/1/153732347/153732347Image5.jpg'
removed 'dataset/data/train/1/153732347/153732347Image0.jpg'
removed 'dataset/data/train/1/153732347/153732347Image7.jpg'
removed 'dataset/data/train/1/153732347/153732347Image4.jpg'
removed 'dataset/data/train/1/153732347/153732347Image2.jpg'
removed 'dataset/data/train/1/153732347/153732347Image1.jpg'
removed 'dataset/data/train/1/153732347/153732347Image6.jpg'
removed directory 'dataset/data/train/1/153732347'
removed 'dataset/data/train/1/100004000/100004000Image0.jpg'
removed 'dataset/data/train/1/100004000/100004000Image10.jpg'
removed 'dataset/data/train/1/100004000/100004000Image3.jpg'
removed 'dataset/data/train/1/100004000/100004000Image8.jpg'
[Verbose removal log truncated: recursive deletion of the image dataset, printing one line per file. Every case folder under 'dataset/data/train/1' (seven .jpg frames each) is removed along with 'dataset/data/train/1' itself, and the log continues through the case folders under 'dataset/data/train/0'.]
removed 'dataset/data/train/0/20160427001/20160427092915.jpg'
removed 'dataset/data/train/0/20160427001/20160427092712.jpg'
removed 'dataset/data/train/0/20160427001/20160427092833.jpg'
removed 'dataset/data/train/0/20160427001/20160427092733.jpg'
removed 'dataset/data/train/0/20160427001/20160427092800.jpg'
removed directory 'dataset/data/train/0/20160427001'
removed 'dataset/data/train/0/20160824006/20160824145118.jpg'
removed 'dataset/data/train/0/20160824006/20160824145340.jpg'
removed 'dataset/data/train/0/20160824006/20160824145243.jpg'
removed 'dataset/data/train/0/20160824006/20160824145423.jpg'
removed 'dataset/data/train/0/20160824006/20160824145317.jpg'
removed 'dataset/data/train/0/20160824006/20160824145506.jpg'
removed 'dataset/data/train/0/20160824006/20160824145411.jpg'
removed directory 'dataset/data/train/0/20160824006'
removed 'dataset/data/train/0/20161019006/20161019145501.jpg'
removed 'dataset/data/train/0/20161019006/20161019145239.jpg'
removed 'dataset/data/train/0/20161019006/20161019145511.jpg'
removed 'dataset/data/train/0/20161019006/20161019145525.jpg'
removed 'dataset/data/train/0/20161019006/20161019145411.jpg'
removed 'dataset/data/train/0/20161019006/20161019145425.jpg'
removed 'dataset/data/train/0/20161019006/20161019145728.jpg'
removed directory 'dataset/data/train/0/20161019006'
removed 'dataset/data/train/0/20160120004/20160120101137.jpg'
removed 'dataset/data/train/0/20160120004/20160120101023.jpg'
removed 'dataset/data/train/0/20160120004/20160120101207.jpg'
removed 'dataset/data/train/0/20160120004/20160120101310.jpg'
removed 'dataset/data/train/0/20160120004/20160120101238.jpg'
removed 'dataset/data/train/0/20160120004/20160120101401.jpg'
removed 'dataset/data/train/0/20160120004/20160120101307.jpg'
removed directory 'dataset/data/train/0/20160120004'
removed 'dataset/data/train/0/20151210001/20151210094852.jpg'
removed 'dataset/data/train/0/20151210001/20151210094705.jpg'
removed 'dataset/data/train/0/20151210001/20151210095100.jpg'
removed 'dataset/data/train/0/20151210001/20151210094901.jpg'
removed 'dataset/data/train/0/20151210001/20151210094958.jpg'
removed 'dataset/data/train/0/20151210001/20151210094938.jpg'
removed 'dataset/data/train/0/20151210001/20151210095000.jpg'
removed directory 'dataset/data/train/0/20151210001'
removed 'dataset/data/train/0/20151209006/20151209152858.jpg'
removed 'dataset/data/train/0/20151209006/20151209152916.jpg'
removed 'dataset/data/train/0/20151209006/20151209153237.jpg'
removed 'dataset/data/train/0/20151209006/20151209153022.jpg'
removed 'dataset/data/train/0/20151209006/20151209152955.jpg'
removed 'dataset/data/train/0/20151209006/20151209153013.jpg'
removed 'dataset/data/train/0/20151209006/20151209152652.jpg'
removed directory 'dataset/data/train/0/20151209006'
removed 'dataset/data/train/0/20160831002/20160831102118.jpg'
removed 'dataset/data/train/0/20160831002/20160831102421.jpg'
removed 'dataset/data/train/0/20160831002/20160831102332.jpg'
removed 'dataset/data/train/0/20160831002/20160831102438.jpg'
removed 'dataset/data/train/0/20160831002/20160831102252.jpg'
removed 'dataset/data/train/0/20160831002/20160831102526.jpg'
removed 'dataset/data/train/0/20160831002/20160831102348.jpg'
removed directory 'dataset/data/train/0/20160831002'
removed 'dataset/data/train/0/20160222002/20160222153048.jpg'
removed 'dataset/data/train/0/20160222002/20160222153020.jpg'
removed 'dataset/data/train/0/20160222002/20160222153234.jpg'
removed 'dataset/data/train/0/20160222002/20160222153169.jpg'
removed 'dataset/data/train/0/20160222002/20160222152903.jpg'
removed 'dataset/data/train/0/20160222002/20160222153150.jpg'
removed 'dataset/data/train/0/20160222002/20160222153123.jpg'
removed directory 'dataset/data/train/0/20160222002'
removed 'dataset/data/train/0/20160302011/20160302152123.jpg'
removed 'dataset/data/train/0/20160302011/20160302151956.jpg'
removed 'dataset/data/train/0/20160302011/20160302152310.jpg'
removed 'dataset/data/train/0/20160302011/20160302152218.jpg'
removed 'dataset/data/train/0/20160302011/20160302152259.jpg'
removed 'dataset/data/train/0/20160302011/20160302152240.jpg'
removed 'dataset/data/train/0/20160302011/20160302152127.jpg'
removed directory 'dataset/data/train/0/20160302011'
removed 'dataset/data/train/0/20161024007/20161024160302.jpg'
removed 'dataset/data/train/0/20161024007/20161024160117.jpg'
removed 'dataset/data/train/0/20161024007/20161024160049.jpg'
removed 'dataset/data/train/0/20161024007/20161024155924.jpg'
removed 'dataset/data/train/0/20161024007/20161024160215.jpg'
removed 'dataset/data/train/0/20161024007/20161024160146.jpg'
removed 'dataset/data/train/0/20161024007/20161024160229.jpg'
removed directory 'dataset/data/train/0/20161024007'
removed 'dataset/data/train/0/20160526001/20160526093636.jpg'
removed 'dataset/data/train/0/20160526001/20160526093751.jpg'
removed 'dataset/data/train/0/20160526001/20160526093447.jpg'
removed 'dataset/data/train/0/20160526001/20160526093769.jpg'
removed 'dataset/data/train/0/20160526001/20160526093739.jpg'
removed 'dataset/data/train/0/20160526001/20160526093621.jpg'
removed 'dataset/data/train/0/20160526001/20160526093835.jpg'
removed directory 'dataset/data/train/0/20160526001'
removed 'dataset/data/train/0/20151028001/20151028100907.jpg'
removed 'dataset/data/train/0/20151028001/20151028101139.jpg'
removed 'dataset/data/train/0/20151028001/20151028101200.jpg'
removed 'dataset/data/train/0/20151028001/20151028101037.jpg'
removed 'dataset/data/train/0/20151028001/20151028101322.jpg'
removed 'dataset/data/train/0/20151028001/20151028100852.jpg'
removed 'dataset/data/train/0/20151028001/20151028101051.jpg'
removed directory 'dataset/data/train/0/20151028001'
removed 'dataset/data/train/0/20161021004/20161021163429.jpg'
removed 'dataset/data/train/0/20161021004/20161021163558.jpg'
removed 'dataset/data/train/0/20161021004/20161021163240.jpg'
removed 'dataset/data/train/0/20161021004/20161021163529.jpg'
removed 'dataset/data/train/0/20161021004/20161021163703.jpg'
removed 'dataset/data/train/0/20161021004/20161021163500.jpg'
removed 'dataset/data/train/0/20161021004/20161021163604.jpg'
removed directory 'dataset/data/train/0/20161021004'
removed 'dataset/data/train/0/20160419002/20160419152918.jpg'
removed 'dataset/data/train/0/20160419002/20160419153045.jpg'
removed 'dataset/data/train/0/20160419002/20160419153139.jpg'
removed 'dataset/data/train/0/20160419002/20160419152949.jpg'
removed 'dataset/data/train/0/20160419002/20160419153051.jpg'
removed 'dataset/data/train/0/20160419002/20160419153015.jpg'
removed 'dataset/data/train/0/20160419002/20160419152802.jpg'
removed directory 'dataset/data/train/0/20160419002'
removed 'dataset/data/train/0/20160330016/20160330151833.jpg'
removed 'dataset/data/train/0/20160330016/20160330151939.jpg'
removed 'dataset/data/train/0/20160330016/20160330152048.jpg'
removed 'dataset/data/train/0/20160330016/20160330151936.jpg'
removed 'dataset/data/train/0/20160330016/20160330151814.jpg'
removed 'dataset/data/train/0/20160330016/20160330151906.jpg'
removed 'dataset/data/train/0/20160330016/20160330151627.jpg'
removed directory 'dataset/data/train/0/20160330016'
removed 'dataset/data/train/0/20151202006/20151202153718.jpg'
removed 'dataset/data/train/0/20151202006/20151202153820.jpg'
removed 'dataset/data/train/0/20151202006/20151202153624.jpg'
removed 'dataset/data/train/0/20151202006/20151202153645.jpg'
removed 'dataset/data/train/0/20151202006/20151202153553.jpg'
removed 'dataset/data/train/0/20151202006/20151202153412.jpg'
removed 'dataset/data/train/0/20151202006/20151202153811.jpg'
removed directory 'dataset/data/train/0/20151202006'
removed 'dataset/data/train/0/20150916012/20150916163607.jpg'
removed 'dataset/data/train/0/20150916012/20150916163727.jpg'
removed 'dataset/data/train/0/20150916012/20150916163852.jpg'
removed 'dataset/data/train/0/20150916012/20150916163869.jpg'
removed 'dataset/data/train/0/20150916012/20150916163752.jpg'
removed 'dataset/data/train/0/20150916012/20150916163822.jpg'
removed 'dataset/data/train/0/20150916012/20150916163945.jpg'
removed directory 'dataset/data/train/0/20150916012'
removed 'dataset/data/train/0/20160706014/20160706171828.jpg'
removed 'dataset/data/train/0/20160706014/20160706171957.jpg'
removed 'dataset/data/train/0/20160706014/20160706171647.jpg'
removed 'dataset/data/train/0/20160706014/20160706171835.jpg'
removed 'dataset/data/train/0/20160706014/20160706171920.jpg'
removed 'dataset/data/train/0/20160706014/20160706171930.jpg'
removed 'dataset/data/train/0/20160706014/20160706171852.jpg'
removed directory 'dataset/data/train/0/20160706014'
removed 'dataset/data/train/0/20161025003/20161025162716.jpg'
removed 'dataset/data/train/0/20161025003/20161025162601.jpg'
removed 'dataset/data/train/0/20161025003/20161025162747.jpg'
removed 'dataset/data/train/0/20161025003/20161025162816.jpg'
removed 'dataset/data/train/0/20161025003/20161025162858.jpg'
removed 'dataset/data/train/0/20161025003/20161025163027.jpg'
removed 'dataset/data/train/0/20161025003/20161025162846.jpg'
removed directory 'dataset/data/train/0/20161025003'
removed 'dataset/data/train/0/20160526002/20160526094412.jpg'
removed 'dataset/data/train/0/20160526002/20160526094431.jpg'
removed 'dataset/data/train/0/20160526002/20160526094236.jpg'
removed 'dataset/data/train/0/20160526002/20160526094531.jpg'
removed 'dataset/data/train/0/20160526002/20160526094548.jpg'
removed 'dataset/data/train/0/20160526002/20160526094507.jpg'
removed 'dataset/data/train/0/20160526002/20160526094610.jpg'
removed directory 'dataset/data/train/0/20160526002'
removed 'dataset/data/train/0/20160323001/20160323094943.jpg'
removed 'dataset/data/train/0/20160323001/20160323094625.jpg'
removed 'dataset/data/train/0/20160323001/20160323095021.jpg'
removed 'dataset/data/train/0/20160323001/20160323094804.jpg'
removed 'dataset/data/train/0/20160323001/20160323094931.jpg'
removed 'dataset/data/train/0/20160323001/20160323094859.jpg'
removed 'dataset/data/train/0/20160323001/20160323094825.jpg'
removed directory 'dataset/data/train/0/20160323001'
removed 'dataset/data/train/0/20150722015/20150722173601.jpg'
removed 'dataset/data/train/0/20150722015/20150722173822.jpg'
removed 'dataset/data/train/0/20150722015/20150722173858.jpg'
removed 'dataset/data/train/0/20150722015/20150722174021.jpg'
removed 'dataset/data/train/0/20150722015/20150722173753.jpg'
removed 'dataset/data/train/0/20150722015/20150722173721.jpg'
removed 'dataset/data/train/0/20150722015/20150722173900.jpg'
removed directory 'dataset/data/train/0/20150722015'
removed 'dataset/data/train/0/20151214002/20151214144741.jpg'
removed 'dataset/data/train/0/20151214002/20151214145141.jpg'
removed 'dataset/data/train/0/20151214002/20151214144929.jpg'
removed 'dataset/data/train/0/20151214002/20151214144859.jpg'
removed 'dataset/data/train/0/20151214002/20151214145100.jpg'
removed 'dataset/data/train/0/20151214002/20151214144958.jpg'
removed 'dataset/data/train/0/20151214002/20151214145029.jpg'
removed directory 'dataset/data/train/0/20151214002'
removed 'dataset/data/train/0/20160303001/20160303095251.jpg'
removed 'dataset/data/train/0/20160303001/20160303095213.jpg'
removed 'dataset/data/train/0/20160303001/20160303095324.jpg'
removed 'dataset/data/train/0/20160303001/20160303095433.jpg'
removed 'dataset/data/train/0/20160303001/20160303094958.jpg'
removed 'dataset/data/train/0/20160303001/20160303095319.jpg'
removed 'dataset/data/train/0/20160303001/20160303095223.jpg'
removed directory 'dataset/data/train/0/20160303001'
removed 'dataset/data/train/0/20160503002/20160503100711.jpg'
removed 'dataset/data/train/0/20160503002/20160503100634.jpg'
removed 'dataset/data/train/0/20160503002/20160503100652.jpg'
removed 'dataset/data/train/0/20160503002/20160503100411.jpg'
removed 'dataset/data/train/0/20160503002/20160503100801.jpg'
removed 'dataset/data/train/0/20160503002/20160503100603.jpg'
removed 'dataset/data/train/0/20160503002/20160503100725.jpg'
removed directory 'dataset/data/train/0/20160503002'
removed 'dataset/data/train/0/20160926004/20160926151230.jpg'
removed 'dataset/data/train/0/20160926004/20160926151454.jpg'
removed 'dataset/data/train/0/20160926004/20160926151654.jpg'
removed 'dataset/data/train/0/20160926004/20160926151538.jpg'
removed 'dataset/data/train/0/20160926004/20160926151531.jpg'
removed 'dataset/data/train/0/20160926004/20160926151259.jpg'
removed 'dataset/data/train/0/20160926004/20160926151429.jpg'
removed directory 'dataset/data/train/0/20160926004'
removed 'dataset/data/train/0/20160314006/20160314155641.jpg'
removed 'dataset/data/train/0/20160314006/20160314155566.jpg'
removed 'dataset/data/train/0/20160314006/20160314155430.jpg'
removed 'dataset/data/train/0/20160314006/20160314155459.jpg'
removed 'dataset/data/train/0/20160314006/20160314155559.jpg'
removed 'dataset/data/train/0/20160314006/20160314155426.jpg'
removed 'dataset/data/train/0/20160314006/20160314155248.jpg'
removed directory 'dataset/data/train/0/20160314006'
removed 'dataset/data/train/0/20150916005/20150916145155.jpg'
removed 'dataset/data/train/0/20150916005/20150916145030.jpg'
removed 'dataset/data/train/0/20150916005/20150916145332.jpg'
removed 'dataset/data/train/0/20150916005/20150916145222.jpg'
removed 'dataset/data/train/0/20150916005/20150916145328.jpg'
removed 'dataset/data/train/0/20150916005/20150916145252.jpg'
removed 'dataset/data/train/0/20150916005/20150916145450.jpg'
removed directory 'dataset/data/train/0/20150916005'
removed 'dataset/data/train/0/20160323020/20160323160327.jpg'
removed 'dataset/data/train/0/20160323020/20160323160024.jpg'
removed 'dataset/data/train/0/20160323020/20160323160314.jpg'
removed 'dataset/data/train/0/20160323020/20160323160147.jpg'
removed 'dataset/data/train/0/20160323020/20160323160212.jpg'
removed 'dataset/data/train/0/20160323020/20160323160406.jpg'
removed 'dataset/data/train/0/20160323020/20160323160244.jpg'
removed directory 'dataset/data/train/0/20160323020'
removed 'dataset/data/train/0/20160106016/20160106164415.jpg'
removed 'dataset/data/train/0/20160106016/20160106164510.jpg'
removed 'dataset/data/train/0/20160106016/20160106164407.jpg'
removed 'dataset/data/train/0/20160106016/20160106164237.jpg'
removed 'dataset/data/train/0/20160106016/20160106164308.jpg'
removed 'dataset/data/train/0/20160106016/20160106164333.jpg'
removed 'dataset/data/train/0/20160106016/20160106164113.jpg'
removed directory 'dataset/data/train/0/20160106016'
removed 'dataset/data/train/0/20160330013/20160330151027.jpg'
removed 'dataset/data/train/0/20160330013/20160330150835.jpg'
removed 'dataset/data/train/0/20160330013/20160330150824.jpg'
removed 'dataset/data/train/0/20160330013/20160330150521.jpg'
removed 'dataset/data/train/0/20160330013/20160330150731.jpg'
removed 'dataset/data/train/0/20160330013/20160330150803.jpg'
removed 'dataset/data/train/0/20160330013/20160330150752.jpg'
removed directory 'dataset/data/train/0/20160330013'
removed 'dataset/data/train/0/20160525006/20160525153117.jpg'
removed 'dataset/data/train/0/20160525006/20160525153214.jpg'
removed 'dataset/data/train/0/20160525006/20160525153243.jpg'
removed 'dataset/data/train/0/20160525006/20160525152944.jpg'
removed 'dataset/data/train/0/20160525006/20160525153258.jpg'
removed 'dataset/data/train/0/20160525006/20160525153328.jpg'
removed 'dataset/data/train/0/20160525006/20160525153154.jpg'
removed directory 'dataset/data/train/0/20160525006'
removed 'dataset/data/train/0/20150831002/20150831152945.jpg'
removed 'dataset/data/train/0/20150831002/20150831152645.jpg'
removed 'dataset/data/train/0/20150831002/20150831152914.jpg'
removed 'dataset/data/train/0/20150831002/20150831153000.jpg'
removed 'dataset/data/train/0/20150831002/20150831152842.jpg'
removed 'dataset/data/train/0/20150831002/20150831152814.jpg'
removed 'dataset/data/train/0/20150831002/20150831153104.jpg'
removed directory 'dataset/data/train/0/20150831002'
removed 'dataset/data/train/0/20160407005/20160407161254.jpg'
removed 'dataset/data/train/0/20160407005/20160407161038.jpg'
removed 'dataset/data/train/0/20160407005/20160407161219.jpg'
removed 'dataset/data/train/0/20160407005/20160407161333.jpg'
removed 'dataset/data/train/0/20160407005/20160407161320.jpg'
removed 'dataset/data/train/0/20160407005/20160407161149.jpg'
removed 'dataset/data/train/0/20160407005/20160407161346.jpg'
removed directory 'dataset/data/train/0/20160407005'
removed 'dataset/data/train/0/20160118004/20160118151519.jpg'
removed 'dataset/data/train/0/20160118004/20160118151647.jpg'
removed 'dataset/data/train/0/20160118004/20160118151548.jpg'
removed 'dataset/data/train/0/20160118004/20160118151621.jpg'
removed 'dataset/data/train/0/20160118004/20160118151400.jpg'
removed 'dataset/data/train/0/20160118004/20160118151739.jpg'
removed 'dataset/data/train/0/20160118004/20160118151680.jpg'
removed directory 'dataset/data/train/0/20160118004'
removed 'dataset/data/train/0/20160120002/20160120104251.jpg'
removed 'dataset/data/train/0/20160120002/20160120104147.jpg'
removed 'dataset/data/train/0/20160120002/20160120104001.jpg'
removed 'dataset/data/train/0/20160120002/20160120104218.jpg'
removed 'dataset/data/train/0/20160120002/20160120104241.jpg'
removed 'dataset/data/train/0/20160120002/20160120104328.jpg'
removed 'dataset/data/train/0/20160120002/20160120104117.jpg'
removed directory 'dataset/data/train/0/20160120002'
removed 'dataset/data/train/0/20160323010/20160323112242.jpg'
removed 'dataset/data/train/0/20160323010/20160323112346.jpg'
removed 'dataset/data/train/0/20160323010/20160323112434.jpg'
removed 'dataset/data/train/0/20160323010/20160323112321.jpg'
removed 'dataset/data/train/0/20160323010/20160323112339.jpg'
removed 'dataset/data/train/0/20160323010/20160323112054.jpg'
removed 'dataset/data/train/0/20160323010/20160323112223.jpg'
removed directory 'dataset/data/train/0/20160323010'
removed 'dataset/data/train/0/20160113026/20160113163319.jpg'
removed 'dataset/data/train/0/20160113026/20160113163251.jpg'
removed 'dataset/data/train/0/20160113026/20160113163320.jpg'
removed 'dataset/data/train/0/20160113026/20160113163218.jpg'
removed 'dataset/data/train/0/20160113026/20160113163159.jpg'
removed 'dataset/data/train/0/20160113026/20160113163400.jpg'
removed 'dataset/data/train/0/20160113026/20160113163003.jpg'
removed directory 'dataset/data/train/0/20160113026'
removed 'dataset/data/train/0/20151223012/20151223171540.jpg'
removed 'dataset/data/train/0/20151223012/20151223171519.jpg'
removed 'dataset/data/train/0/20151223012/20151223171428.jpg'
removed 'dataset/data/train/0/20151223012/20151223171618.jpg'
removed 'dataset/data/train/0/20151223012/20151223171300.jpg'
removed 'dataset/data/train/0/20151223012/20151223171735.jpg'
removed 'dataset/data/train/0/20151223012/20151223171609.jpg'
removed directory 'dataset/data/train/0/20151223012'
removed 'dataset/data/train/0/20160428003/20160428152267.jpg'
removed 'dataset/data/train/0/20160428003/20160428152334.jpg'
removed 'dataset/data/train/0/20160428003/20160428152147.jpg'
removed 'dataset/data/train/0/20160428003/20160428152218.jpg'
removed 'dataset/data/train/0/20160428003/20160428152001.jpg'
removed 'dataset/data/train/0/20160428003/20160428152129.jpg'
removed 'dataset/data/train/0/20160428003/20160428152250.jpg'
removed directory 'dataset/data/train/0/20160428003'
removed 'dataset/data/train/0/20161014001/20161014144403.jpg'
removed 'dataset/data/train/0/20161014001/20161014144154.jpg'
removed 'dataset/data/train/0/20161014001/20161014144227.jpg'
removed 'dataset/data/train/0/20161014001/20161014144103.jpg'
removed 'dataset/data/train/0/20161014001/20161014144230.jpg'
removed 'dataset/data/train/0/20161014001/20161014143932.jpg'
removed 'dataset/data/train/0/20161014001/20161014144126.jpg'
removed directory 'dataset/data/train/0/20161014001'
removed 'dataset/data/train/0/20150902006/20150902152429.jpg'
removed 'dataset/data/train/0/20150902006/20150902152536.jpg'
removed 'dataset/data/train/0/20150902006/20150902152460.jpg'
removed 'dataset/data/train/0/20150902006/20150902152359.jpg'
removed 'dataset/data/train/0/20150902006/20150902152331.jpg'
removed 'dataset/data/train/0/20150902006/20150902152203.jpg'
removed 'dataset/data/train/0/20150902006/20150902152459.jpg'
removed directory 'dataset/data/train/0/20150902006'
removed 'dataset/data/train/0/20160321008/20160321160768.jpg'
removed 'dataset/data/train/0/20160321008/20160321160723.jpg'
removed 'dataset/data/train/0/20160321008/20160321160647.jpg'
removed 'dataset/data/train/0/20160321008/20160321160454.jpg'
removed 'dataset/data/train/0/20160321008/20160321160622.jpg'
removed 'dataset/data/train/0/20160321008/20160321160827.jpg'
removed 'dataset/data/train/0/20160321008/20160321160750.jpg'
removed directory 'dataset/data/train/0/20160321008'
removed 'dataset/data/train/0/20160817008/20160817152510.jpg'
removed 'dataset/data/train/0/20160817008/20160817152258.jpg'
removed 'dataset/data/train/0/20160817008/20160817152547.jpg'
removed 'dataset/data/train/0/20160817008/20160817152439.jpg'
removed 'dataset/data/train/0/20160817008/20160817152616.jpg'
removed 'dataset/data/train/0/20160817008/20160817152413.jpg'
removed 'dataset/data/train/0/20160817008/20160817152539.jpg'
removed directory 'dataset/data/train/0/20160817008'
removed 'dataset/data/train/0/20151204007/20151204163016.jpg'
removed 'dataset/data/train/0/20151204007/20151204162848.jpg'
removed 'dataset/data/train/0/20151204007/20151204162634.jpg'
removed 'dataset/data/train/0/20151204007/20151204162933.jpg'
removed 'dataset/data/train/0/20151204007/20151204162840.jpg'
removed 'dataset/data/train/0/20151204007/20151204162829.jpg'
removed 'dataset/data/train/0/20151204007/20151204162925.jpg'
removed directory 'dataset/data/train/0/20151204007'
removed 'dataset/data/train/0/20151230010/20151230160820.jpg'
removed 'dataset/data/train/0/20151230010/20151230160616.jpg'
removed 'dataset/data/train/0/20151230010/20151230160702.jpg'
removed 'dataset/data/train/0/20151230010/20151230160634.jpg'
removed 'dataset/data/train/0/20151230010/20151230160733.jpg'
removed 'dataset/data/train/0/20151230010/20151230160735.jpg'
removed 'dataset/data/train/0/20151230010/20151230160427.jpg'
removed directory 'dataset/data/train/0/20151230010'
removed 'dataset/data/train/0/20161028003/20161028094634.jpg'
removed 'dataset/data/train/0/20161028003/20161028094733.jpg'
removed 'dataset/data/train/0/20161028003/20161028094621.jpg'
removed 'dataset/data/train/0/20161028003/20161028094551.jpg'
removed 'dataset/data/train/0/20161028003/20161028094457.jpg'
removed 'dataset/data/train/0/20161028003/20161028094522.jpg'
removed 'dataset/data/train/0/20161028003/20161028094333.jpg'
removed directory 'dataset/data/train/0/20161028003'
removed 'dataset/data/train/0/20160602002/20160602144948.jpg'
removed 'dataset/data/train/0/20160602002/20160602144705.jpg'
removed 'dataset/data/train/0/20160602002/20160602144813.jpg'
removed 'dataset/data/train/0/20160602002/20160602144738.jpg'
removed 'dataset/data/train/0/20160602002/20160602144804.jpg'
removed 'dataset/data/train/0/20160602002/20160602144524.jpg'
removed 'dataset/data/train/0/20160602002/20160602144634.jpg'
removed directory 'dataset/data/train/0/20160602002'
removed 'dataset/data/train/0/20160113006/20160113105703.jpg'
removed 'dataset/data/train/0/20160113006/20160113105627.jpg'
removed 'dataset/data/train/0/20160113006/20160113110109.jpg'
removed 'dataset/data/train/0/20160113006/20160113105910.jpg'
removed 'dataset/data/train/0/20160113006/20160113105743.jpg'
removed 'dataset/data/train/0/20160113006/20160113105442.jpg'
removed 'dataset/data/train/0/20160113006/20160113105721.jpg'
removed directory 'dataset/data/train/0/20160113006'
removed 'dataset/data/train/0/20160817009/20160817160542.jpg'
removed 'dataset/data/train/0/20160817009/20160817160331.jpg'
removed 'dataset/data/train/0/20160817009/20160817160113.jpg'
removed 'dataset/data/train/0/20160817009/20160817160301.jpg'
removed 'dataset/data/train/0/20160817009/20160817160404.jpg'
removed 'dataset/data/train/0/20160817009/20160817160244.jpg'
removed 'dataset/data/train/0/20160817009/20160817160417.jpg'
removed directory 'dataset/data/train/0/20160817009'
removed 'dataset/data/train/0/20160909001/20160909144157.jpg'
removed 'dataset/data/train/0/20160909001/20160909144456.jpg'
removed 'dataset/data/train/0/20160909001/20160909144350.jpg'
removed 'dataset/data/train/0/20160909001/20160909144450.jpg'
removed 'dataset/data/train/0/20160909001/20160909144421.jpg'
removed 'dataset/data/train/0/20160909001/20160909144326.jpg'
removed 'dataset/data/train/0/20160909001/20160909144540.jpg'
removed directory 'dataset/data/train/0/20160909001'
removed 'dataset/data/train/0/20150917002/20150917101829.jpg'
removed 'dataset/data/train/0/20150917002/20150917101729.jpg'
removed 'dataset/data/train/0/20150917002/20150917101724.jpg'
removed 'dataset/data/train/0/20150917002/20150917101615.jpg'
removed 'dataset/data/train/0/20150917002/20150917101429.jpg'
removed 'dataset/data/train/0/20150917002/20150917101453.jpg'
removed 'dataset/data/train/0/20150917002/20150917101657.jpg'
removed directory 'dataset/data/train/0/20150917002'
removed 'dataset/data/train/0/20160622008/20160622152703.jpg'
removed 'dataset/data/train/0/20160622008/20160622152735.jpg'
removed 'dataset/data/train/0/20160622008/20160622152615.jpg'
removed 'dataset/data/train/0/20160622008/20160622152633.jpg'
removed 'dataset/data/train/0/20160622008/20160622152724.jpg'
removed 'dataset/data/train/0/20160622008/20160622152421.jpg'
removed 'dataset/data/train/0/20160622008/20160622152910.jpg'
removed directory 'dataset/data/train/0/20160622008'
removed 'dataset/data/train/0/20151030003/20151030145526.jpg'
removed 'dataset/data/train/0/20151030003/20151030145555.jpg'
removed 'dataset/data/train/0/20151030003/20151030145641.jpg'
removed 'dataset/data/train/0/20151030003/20151030145600.jpg'
removed 'dataset/data/train/0/20151030003/20151030145309.jpg'
removed 'dataset/data/train/0/20151030003/20151030145425.jpg'
removed 'dataset/data/train/0/20151030003/20151030145455.jpg'
removed directory 'dataset/data/train/0/20151030003'
removed 'dataset/data/train/0/20160128005/20160128101749.jpg'
removed 'dataset/data/train/0/20160128005/20160128101922.jpg'
removed 'dataset/data/train/0/20160128005/20160128101819.jpg'
removed 'dataset/data/train/0/20160128005/20160128101558.jpg'
removed 'dataset/data/train/0/20160128005/20160128101854.jpg'
removed 'dataset/data/train/0/20160128005/20160128101848.jpg'
removed 'dataset/data/train/0/20160128005/20160128101718.jpg'
removed directory 'dataset/data/train/0/20160128005'
removed 'dataset/data/train/0/20160119003/20160119144957.jpg'
removed 'dataset/data/train/0/20160119003/20160119145137.jpg'
removed 'dataset/data/train/0/20160119003/20160119145127.jpg'
removed 'dataset/data/train/0/20160119003/20160119145027.jpg'
removed 'dataset/data/train/0/20160119003/20160119145227.jpg'
removed 'dataset/data/train/0/20160119003/20160119144827.jpg'
removed 'dataset/data/train/0/20160119003/20160119145057.jpg'
removed directory 'dataset/data/train/0/20160119003'
removed 'dataset/data/train/0/20160516004/20160516152544.jpg'
removed 'dataset/data/train/0/20160516004/20160516152745.jpg'
removed 'dataset/data/train/0/20160516004/20160516152642.jpg'
removed 'dataset/data/train/0/20160516004/20160516152611.jpg'
removed 'dataset/data/train/0/20160516004/20160516152512.jpg'
removed 'dataset/data/train/0/20160516004/20160516152638.jpg'
removed 'dataset/data/train/0/20160516004/20160516152344.jpg'
removed directory 'dataset/data/train/0/20160516004'
removed 'dataset/data/train/0/20151028014/20151028162700.jpg'
removed 'dataset/data/train/0/20151028014/20151028162434.jpg'
removed 'dataset/data/train/0/20151028014/20151028162216.jpg'
removed 'dataset/data/train/0/20151028014/20151028162404.jpg'
removed 'dataset/data/train/0/20151028014/20151028162407.jpg'
removed 'dataset/data/train/0/20151028014/20151028162836.jpg'
removed 'dataset/data/train/0/20151028014/20151028162334.jpg'
removed directory 'dataset/data/train/0/20151028014'
removed 'dataset/data/train/0/20160201007/20160201161816.jpg'
removed 'dataset/data/train/0/20160201007/20160201162100.jpg'
removed 'dataset/data/train/0/20160201007/20160201162002.jpg'
removed 'dataset/data/train/0/20160201007/20160201162105.jpg'
removed 'dataset/data/train/0/20160201007/20160201161933.jpg'
removed 'dataset/data/train/0/20160201007/20160201162138.jpg'
removed 'dataset/data/train/0/20160201007/20160201162029.jpg'
removed directory 'dataset/data/train/0/20160201007'
removed 'dataset/data/train/0/20151225003/20151225150325.jpg'
removed 'dataset/data/train/0/20151225003/20151225150154.jpg'
removed 'dataset/data/train/0/20151225003/20151225150224.jpg'
removed 'dataset/data/train/0/20151225003/20151225150306.jpg'
removed 'dataset/data/train/0/20151225003/20151225150425.jpg'
removed 'dataset/data/train/0/20151225003/20151225150032.jpg'
removed 'dataset/data/train/0/20151225003/20151225150350.jpg'
removed directory 'dataset/data/train/0/20151225003'
removed 'dataset/data/train/0/20160705003/20160705115745.jpg'
removed 'dataset/data/train/0/20160705003/20160705115644.jpg'
removed 'dataset/data/train/0/20160705003/20160705115815.jpg'
removed 'dataset/data/train/0/20160705003/20160705115825.jpg'
removed 'dataset/data/train/0/20160705003/20160705115910.jpg'
removed 'dataset/data/train/0/20160705003/20160705115717.jpg'
removed 'dataset/data/train/0/20160705003/20160705115519.jpg'
removed directory 'dataset/data/train/0/20160705003'
removed 'dataset/data/train/0/20151123006/20151123153916.jpg'
removed 'dataset/data/train/0/20151123006/20151123153733.jpg'
removed 'dataset/data/train/0/20151123006/20151123153815.jpg'
removed 'dataset/data/train/0/20151123006/20151123153854.jpg'
removed 'dataset/data/train/0/20151123006/20151123154018.jpg'
removed 'dataset/data/train/0/20151123006/20151123153609.jpg'
removed 'dataset/data/train/0/20151123006/20151123153912.jpg'
removed directory 'dataset/data/train/0/20151123006'
removed 'dataset/data/train/0/20161026006/20161026114940.jpg'
removed 'dataset/data/train/0/20161026006/20161026114609.jpg'
removed 'dataset/data/train/0/20161026006/20161026114903.jpg'
removed 'dataset/data/train/0/20161026006/20161026114828.jpg'
removed 'dataset/data/train/0/20161026006/20161026114729.jpg'
removed 'dataset/data/train/0/20161026006/20161026114810.jpg'
removed 'dataset/data/train/0/20161026006/20161026114911.jpg'
removed directory 'dataset/data/train/0/20161026006'
removed 'dataset/data/train/0/20160120010/20160120145614.jpg'
removed 'dataset/data/train/0/20160120010/20160120145425.jpg'
removed 'dataset/data/train/0/20160120010/20160120145644.jpg'
removed 'dataset/data/train/0/20160120010/20160120145859.jpg'
removed 'dataset/data/train/0/20160120010/20160120145744.jpg'
removed 'dataset/data/train/0/20160120010/20160120145754.jpg'
removed 'dataset/data/train/0/20160120010/20160120145713.jpg'
removed directory 'dataset/data/train/0/20160120010'
removed 'dataset/data/train/0/20151202007/20151202155011.jpg'
removed 'dataset/data/train/0/20151202007/20151202154804.jpg'
removed 'dataset/data/train/0/20151202007/20151202154531.jpg'
removed 'dataset/data/train/0/20151202007/20151202154706.jpg'
removed 'dataset/data/train/0/20151202007/20151202154833.jpg'
removed 'dataset/data/train/0/20151202007/20151202154838.jpg'
removed 'dataset/data/train/0/20151202007/20151202154734.jpg'
removed directory 'dataset/data/train/0/20151202007'
removed 'dataset/data/train/0/20160506007/20160506155430.jpg'
removed 'dataset/data/train/0/20160506007/20160506155150.jpg'
removed 'dataset/data/train/0/20160506007/20160506155456.jpg'
removed 'dataset/data/train/0/20160506007/20160506155351.jpg'
removed 'dataset/data/train/0/20160506007/20160506155335.jpg'
removed 'dataset/data/train/0/20160506007/20160506155607.jpg'
removed 'dataset/data/train/0/20160506007/20160506155467.jpg'
removed directory 'dataset/data/train/0/20160506007'
removed 'dataset/data/train/0/20150916011/20150916160249.jpg'
removed 'dataset/data/train/0/20150916011/20150916160346.jpg'
removed 'dataset/data/train/0/20150916011/20150916160518.jpg'
removed 'dataset/data/train/0/20150916011/20150916160316.jpg'
removed 'dataset/data/train/0/20150916011/20150916160128.jpg'
removed 'dataset/data/train/0/20150916011/20150916160455.jpg'
removed 'dataset/data/train/0/20150916011/20150916160415.jpg'
removed directory 'dataset/data/train/0/20150916011'
removed 'dataset/data/train/0/20151012006/20151012160216.jpg'
removed 'dataset/data/train/0/20151012006/20151012160021.jpg'
removed 'dataset/data/train/0/20151012006/20151012155951.jpg'
removed 'dataset/data/train/0/20151012006/20151012160121.jpg'
removed 'dataset/data/train/0/20151012006/20151012160051.jpg'
removed 'dataset/data/train/0/20151012006/20151012155835.jpg'
removed 'dataset/data/train/0/20151012006/20151012160200.jpg'
removed directory 'dataset/data/train/0/20151012006'
removed 'dataset/data/train/0/20160720001/20160720101731.jpg'
removed 'dataset/data/train/0/20160720001/20160720101651.jpg'
removed 'dataset/data/train/0/20160720001/20160720101558.jpg'
removed 'dataset/data/train/0/20160720001/20160720101635.jpg'
removed 'dataset/data/train/0/20160720001/20160720101543.jpg'
removed 'dataset/data/train/0/20160720001/20160720101662.jpg'
removed 'dataset/data/train/0/20160720001/20160720101409.jpg'
removed directory 'dataset/data/train/0/20160720001'
removed 'dataset/data/train/0/20151202011/20151202165220.jpg'
removed 'dataset/data/train/0/20151202011/20151202165207.jpg'
removed 'dataset/data/train/0/20151202011/20151202165056.jpg'
removed 'dataset/data/train/0/20151202011/20151202165034.jpg'
removed 'dataset/data/train/0/20151202011/20151202165239.jpg'
removed 'dataset/data/train/0/20151202011/20151202165130.jpg'
removed 'dataset/data/train/0/20151202011/20151202164902.jpg'
removed directory 'dataset/data/train/0/20151202011'
removed 'dataset/data/train/0/20160608007/20160608104226.jpg'
removed 'dataset/data/train/0/20160608007/20160608104255.jpg'
removed 'dataset/data/train/0/20160608007/20160608103932.jpg'
removed 'dataset/data/train/0/20160608007/20160608104107.jpg'
removed 'dataset/data/train/0/20160608007/20160608104235.jpg'
removed 'dataset/data/train/0/20160608007/20160608104132.jpg'
removed 'dataset/data/train/0/20160608007/20160608104154.jpg'
removed directory 'dataset/data/train/0/20160608007'
removed 'dataset/data/train/0/20161010005/20161010145830.jpg'
removed 'dataset/data/train/0/20161010005/20161010145900.jpg'
removed 'dataset/data/train/0/20161010005/20161010145957.jpg'
removed 'dataset/data/train/0/20161010005/20161010145917.jpg'
removed 'dataset/data/train/0/20161010005/20161010145802.jpg'
removed 'dataset/data/train/0/20161010005/20161010145738.jpg'
removed 'dataset/data/train/0/20161010005/20161010145611.jpg'
removed directory 'dataset/data/train/0/20161010005'
removed 'dataset/data/train/0/20161017001/20161017112206.jpg'
removed 'dataset/data/train/0/20161017001/20161017112341.jpg'
removed 'dataset/data/train/0/20161017001/20161017112239.jpg'
removed 'dataset/data/train/0/20161017001/20161017112306.jpg'
removed 'dataset/data/train/0/20161017001/20161017112311.jpg'
removed 'dataset/data/train/0/20161017001/20161017112015.jpg'
removed 'dataset/data/train/0/20161017001/20161017112139.jpg'
removed directory 'dataset/data/train/0/20161017001'
removed 'dataset/data/train/0/20160224008/20160224150651.jpg'
removed 'dataset/data/train/0/20160224008/20160224150349.jpg'
removed 'dataset/data/train/0/20160224008/20160224150536.jpg'
removed 'dataset/data/train/0/20160224008/20160224150621.jpg'
removed 'dataset/data/train/0/20160224008/20160224150826.jpg'
removed 'dataset/data/train/0/20160224008/20160224150636.jpg'
removed 'dataset/data/train/0/20160224008/20160224150508.jpg'
removed directory 'dataset/data/train/0/20160224008'
removed 'dataset/data/train/0/20160401002/20160401145651.jpg'
removed 'dataset/data/train/0/20160401002/20160401145725.jpg'
removed 'dataset/data/train/0/20160401002/20160401145755.jpg'
removed 'dataset/data/train/0/20160401002/20160401145452.jpg'
removed 'dataset/data/train/0/20160401002/20160401145813.jpg'
removed 'dataset/data/train/0/20160401002/20160401145904.jpg'
removed 'dataset/data/train/0/20160401002/20160401145823.jpg'
removed directory 'dataset/data/train/0/20160401002'
removed 'dataset/data/train/0/20160203004/20160203155004.jpg'
removed 'dataset/data/train/0/20160203004/20160203154902.jpg'
removed 'dataset/data/train/0/20160203004/20160203155008.jpg'
removed 'dataset/data/train/0/20160203004/20160203154711.jpg'
removed 'dataset/data/train/0/20160203004/20160203154931.jpg'
removed 'dataset/data/train/0/20160203004/20160203155111.jpg'
removed 'dataset/data/train/0/20160203004/20160203154834.jpg'
removed directory 'dataset/data/train/0/20160203004'
removed 'dataset/data/train/0/20160421001/20160421152956.jpg'
removed 'dataset/data/train/0/20160421001/20160421153032.jpg'
removed 'dataset/data/train/0/20160421001/20160421152750.jpg'
removed 'dataset/data/train/0/20160421001/20160421153056.jpg'
removed 'dataset/data/train/0/20160421001/20160421153151.jpg'
removed 'dataset/data/train/0/20160421001/20160421152925.jpg'
removed 'dataset/data/train/0/20160421001/20160421153066.jpg'
removed directory 'dataset/data/train/0/20160421001'
removed 'dataset/data/train/0/20160504004/20160504144252.jpg'
removed 'dataset/data/train/0/20160504004/20160504144317.jpg'
removed 'dataset/data/train/0/20160504004/20160504144417.jpg'
removed 'dataset/data/train/0/20160504004/20160504144326.jpg'
removed 'dataset/data/train/0/20160504004/20160504144215.jpg'
removed 'dataset/data/train/0/20160504004/20160504144011.jpg'
removed 'dataset/data/train/0/20160504004/20160504144144.jpg'
removed directory 'dataset/data/train/0/20160504004'
removed 'dataset/data/train/0/20151123008/20151123155407.jpg'
removed 'dataset/data/train/0/20151123008/20151123155657.jpg'
removed 'dataset/data/train/0/20151123008/20151123155738.jpg'
removed 'dataset/data/train/0/20151123008/20151123155750.jpg'
removed 'dataset/data/train/0/20151123008/20151123155630.jpg'
removed 'dataset/data/train/0/20151123008/20151123155827.jpg'
removed 'dataset/data/train/0/20151123008/20151123155614.jpg'
removed directory 'dataset/data/train/0/20151123008'
removed 'dataset/data/train/0/20160406003/20160406102449.jpg'
removed 'dataset/data/train/0/20160406003/20160406102838.jpg'
removed 'dataset/data/train/0/20160406003/20160406102823.jpg'
removed 'dataset/data/train/0/20160406003/20160406102647.jpg'
removed 'dataset/data/train/0/20160406003/20160406102613.jpg'
removed 'dataset/data/train/0/20160406003/20160406102954.jpg'
removed 'dataset/data/train/0/20160406003/20160406102718.jpg'
removed directory 'dataset/data/train/0/20160406003'
removed 'dataset/data/train/0/20160803008/20160803160205.jpg'
removed 'dataset/data/train/0/20160803008/20160803160247.jpg'
removed 'dataset/data/train/0/20160803008/20160803160154.jpg'
removed 'dataset/data/train/0/20160803008/20160803160124.jpg'
removed 'dataset/data/train/0/20160803008/20160803155916.jpg'
removed 'dataset/data/train/0/20160803008/20160803160052.jpg'
removed 'dataset/data/train/0/20160803008/20160803160211.jpg'
removed directory 'dataset/data/train/0/20160803008'
removed 'dataset/data/train/0/20160420006/20160420150142.jpg'
removed 'dataset/data/train/0/20160420006/20160420150121.jpg'
removed 'dataset/data/train/0/20160420006/20160420150242.jpg'
removed 'dataset/data/train/0/20160420006/20160420150053.jpg'
removed 'dataset/data/train/0/20160420006/20160420150150.jpg'
removed 'dataset/data/train/0/20160420006/20160420145831.jpg'
removed 'dataset/data/train/0/20160420006/20160420150019.jpg'
removed directory 'dataset/data/train/0/20160420006'
removed 'dataset/data/train/0/20160113007/20160113110507.jpg'
removed 'dataset/data/train/0/20160113007/20160113110744.jpg'
removed 'dataset/data/train/0/20160113007/20160113110653.jpg'
removed 'dataset/data/train/0/20160113007/20160113110853.jpg'
removed 'dataset/data/train/0/20160113007/20160113110644.jpg'
removed 'dataset/data/train/0/20160113007/20160113110814.jpg'
removed 'dataset/data/train/0/20160113007/20160113110754.jpg'
removed directory 'dataset/data/train/0/20160113007'
removed 'dataset/data/train/0/20160815006/20160815162347.jpg'
removed 'dataset/data/train/0/20160815006/20160815162305.jpg'
removed 'dataset/data/train/0/20160815006/20160815162119.jpg'
removed 'dataset/data/train/0/20160815006/20160815162342.jpg'
removed 'dataset/data/train/0/20160815006/20160815162439.jpg'
removed 'dataset/data/train/0/20160815006/20160815162234.jpg'
removed 'dataset/data/train/0/20160815006/20160815162310.jpg'
removed directory 'dataset/data/train/0/20160815006'
removed 'dataset/data/train/0/20160316010/20160316155837.jpg'
removed 'dataset/data/train/0/20160316010/20160316155648.jpg'
removed 'dataset/data/train/0/20160316010/20160316155750.jpg'
removed 'dataset/data/train/0/20160316010/20160316155743.jpg'
removed 'dataset/data/train/0/20160316010/20160316155717.jpg'
removed 'dataset/data/train/0/20160316010/20160316155634.jpg'
removed 'dataset/data/train/0/20160316010/20160316155436.jpg'
removed directory 'dataset/data/train/0/20160316010'
removed 'dataset/data/train/0/20160425009/20160425162725.jpg'
removed 'dataset/data/train/0/20160425009/20160425162422.jpg'
removed 'dataset/data/train/0/20160425009/20160425162803.jpg'
removed 'dataset/data/train/0/20160425009/20160425162631.jpg'
removed 'dataset/data/train/0/20160425009/20160425162657.jpg'
removed 'dataset/data/train/0/20160425009/20160425162735.jpg'
removed 'dataset/data/train/0/20160425009/20160425162556.jpg'
removed directory 'dataset/data/train/0/20160425009'
removed 'dataset/data/train/0/20160602003/20160602145256.jpg'
removed 'dataset/data/train/0/20160602003/20160602145544.jpg'
removed 'dataset/data/train/0/20160602003/20160602145516.jpg'
removed 'dataset/data/train/0/20160602003/20160602145415.jpg'
removed 'dataset/data/train/0/20160602003/20160602145450.jpg'
removed 'dataset/data/train/0/20160602003/20160602145550.jpg'
removed 'dataset/data/train/0/20160602003/20160602145627.jpg'
removed directory 'dataset/data/train/0/20160602003'
removed 'dataset/data/train/0/20160912008/20160912155809.jpg'
removed 'dataset/data/train/0/20160912008/20160912160067.jpg'
removed 'dataset/data/train/0/20160912008/20160912160040.jpg'
removed 'dataset/data/train/0/20160912008/20160912160155.jpg'
removed 'dataset/data/train/0/20160912008/20160912155933.jpg'
removed 'dataset/data/train/0/20160912008/20160912155958.jpg'
removed 'dataset/data/train/0/20160912008/20160912160058.jpg'
removed directory 'dataset/data/train/0/20160912008'
removed 'dataset/data/train/0/20160411004/20160411152058.jpg'
removed 'dataset/data/train/0/20160411004/20160411152113.jpg'
removed 'dataset/data/train/0/20160411004/20160411152238.jpg'
removed 'dataset/data/train/0/20160411004/20160411151912.jpg'
removed 'dataset/data/train/0/20160411004/20160411152222.jpg'
removed 'dataset/data/train/0/20160411004/20160411152158.jpg'
removed 'dataset/data/train/0/20160411004/20160411152335.jpg'
removed directory 'dataset/data/train/0/20160411004'
removed 'dataset/data/train/0/20160203010/20160203171959.jpg'
removed 'dataset/data/train/0/20160203010/20160203171949.jpg'
removed 'dataset/data/train/0/20160203010/20160203171817.jpg'
removed 'dataset/data/train/0/20160203010/20160203172027.jpg'
removed 'dataset/data/train/0/20160203010/20160203171644.jpg'
removed 'dataset/data/train/0/20160203010/20160203171844.jpg'
removed 'dataset/data/train/0/20160203010/20160203171915.jpg'
removed directory 'dataset/data/train/0/20160203010'
removed 'dataset/data/train/0/20160323019/20160323155157.jpg'
removed 'dataset/data/train/0/20160323019/20160323154925.jpg'
removed 'dataset/data/train/0/20160323019/20160323155108.jpg'
removed 'dataset/data/train/0/20160323019/20160323155127.jpg'
removed 'dataset/data/train/0/20160323019/20160323155214.jpg'
removed 'dataset/data/train/0/20160323019/20160323155229.jpg'
removed 'dataset/data/train/0/20160323019/20160323155336.jpg'
removed directory 'dataset/data/train/0/20160323019'
removed 'dataset/data/train/0/20151023005/20151023152438.jpg'
removed 'dataset/data/train/0/20151023005/20151023152740.jpg'
removed 'dataset/data/train/0/20151023005/20151023152818.jpg'
removed 'dataset/data/train/0/20151023005/20151023152624.jpg'
removed 'dataset/data/train/0/20151023005/20151023152554.jpg'
removed 'dataset/data/train/0/20151023005/20151023152653.jpg'
removed 'dataset/data/train/0/20151023005/20151023152723.jpg'
removed directory 'dataset/data/train/0/20151023005'
removed 'dataset/data/train/0/20151207009/20151207155133.jpg'
removed 'dataset/data/train/0/20151207009/20151207155343.jpg'
removed 'dataset/data/train/0/20151207009/20151207154843.jpg'
removed 'dataset/data/train/0/20151207009/20151207155202.jpg'
removed 'dataset/data/train/0/20151207009/20151207155235.jpg'
removed 'dataset/data/train/0/20151207009/20151207155280.jpg'
removed 'dataset/data/train/0/20151207009/20151207155148.jpg'
removed directory 'dataset/data/train/0/20151207009'
removed 'dataset/data/train/0/20160105005/20160105160510.jpg'
removed 'dataset/data/train/0/20160105005/20160105160639.jpg'
removed 'dataset/data/train/0/20160105005/20160105160643.jpg'
removed 'dataset/data/train/0/20160105005/20160105160544.jpg'
removed 'dataset/data/train/0/20160105005/20160105160607.jpg'
removed 'dataset/data/train/0/20160105005/20160105160342.jpg'
removed 'dataset/data/train/0/20160105005/20160105160820.jpg'
removed directory 'dataset/data/train/0/20160105005'
removed 'dataset/data/train/0/20160225007/20160225153233.jpg'
removed 'dataset/data/train/0/20160225007/20160225152934.jpg'
removed 'dataset/data/train/0/20160225007/20160225153127.jpg'
removed 'dataset/data/train/0/20160225007/20160225153055.jpg'
removed 'dataset/data/train/0/20160225007/20160225153348.jpg'
removed 'dataset/data/train/0/20160225007/20160225153159.jpg'
removed 'dataset/data/train/0/20160225007/20160225153225.jpg'
removed directory 'dataset/data/train/0/20160225007'
removed 'dataset/data/train/0/20150901002/20150901110616.jpg'
removed 'dataset/data/train/0/20150901002/20150901110510.jpg'
removed 'dataset/data/train/0/20150901002/20150901110343.jpg'
removed 'dataset/data/train/0/20150901002/20150901110518.jpg'
removed 'dataset/data/train/0/20150901002/20150901110417.jpg'
removed 'dataset/data/train/0/20150901002/20150901110439.jpg'
removed 'dataset/data/train/0/20150901002/20150901110219.jpg'
removed directory 'dataset/data/train/0/20150901002'
removed 'dataset/data/train/0/20151204004/20151204151940.jpg'
removed 'dataset/data/train/0/20151204004/20151204151923.jpg'
removed 'dataset/data/train/0/20151204004/20151204151657.jpg'
removed 'dataset/data/train/0/20151204004/20151204151936.jpg'
removed 'dataset/data/train/0/20151204004/20151204151900.jpg'
removed 'dataset/data/train/0/20151204004/20151204152137.jpg'
removed 'dataset/data/train/0/20151204004/20151204151823.jpg'
removed directory 'dataset/data/train/0/20151204004'
removed 'dataset/data/train/0/20150923007/20150923155939.jpg'
removed 'dataset/data/train/0/20150923007/20150923160007.jpg'
removed 'dataset/data/train/0/20150923007/20150923160107.jpg'
removed 'dataset/data/train/0/20150923007/20150923155803.jpg'
removed 'dataset/data/train/0/20150923007/20150923160122.jpg'
removed 'dataset/data/train/0/20150923007/20150923160143.jpg'
removed 'dataset/data/train/0/20150923007/20150923160036.jpg'
removed directory 'dataset/data/train/0/20150923007'
removed 'dataset/data/train/0/20160421003/20160421145562.jpg'
removed 'dataset/data/train/0/20160421003/20160421145303.jpg'
removed 'dataset/data/train/0/20160421003/20160421145525.jpg'
removed 'dataset/data/train/0/20160421003/20160421145509.jpg'
removed 'dataset/data/train/0/20160421003/20160421145442.jpg'
removed 'dataset/data/train/0/20160421003/20160421145703.jpg'
removed 'dataset/data/train/0/20160421003/20160421145552.jpg'
removed directory 'dataset/data/train/0/20160421003'
removed 'dataset/data/train/0/20151026002/20151026092135.jpg'
removed 'dataset/data/train/0/20151026002/20151026092354.jpg'
removed 'dataset/data/train/0/20151026002/20151026092602.jpg'
removed 'dataset/data/train/0/20151026002/20151026092258.jpg'
removed 'dataset/data/train/0/20151026002/20151026092448.jpg'
removed 'dataset/data/train/0/20151026002/20151026092425.jpg'
removed 'dataset/data/train/0/20151026002/20151026092324.jpg'
removed directory 'dataset/data/train/0/20151026002'
removed 'dataset/data/train/0/20160104006/20160104151217.jpg'
removed 'dataset/data/train/0/20160104006/20160104151310.jpg'
removed 'dataset/data/train/0/20160104006/20160104151333.jpg'
removed 'dataset/data/train/0/20160104006/20160104151416.jpg'
removed 'dataset/data/train/0/20160104006/20160104151140.jpg'
removed 'dataset/data/train/0/20160104006/20160104151240.jpg'
removed 'dataset/data/train/0/20160104006/20160104150923.jpg'
removed directory 'dataset/data/train/0/20160104006'
removed 'dataset/data/train/0/20160120007/20160120114920.jpg'
removed 'dataset/data/train/0/20160120007/20160120114857.jpg'
removed 'dataset/data/train/0/20160120007/20160120114630.jpg'
removed 'dataset/data/train/0/20160120007/20160120114831.jpg'
removed 'dataset/data/train/0/20160120007/20160120114745.jpg'
removed 'dataset/data/train/0/20160120007/20160120114915.jpg'
removed 'dataset/data/train/0/20160120007/20160120115010.jpg'
removed directory 'dataset/data/train/0/20160120007'
removed 'dataset/data/train/0/20150803002/20150803100006.jpg'
removed 'dataset/data/train/0/20150803002/20150803095706.jpg'
removed 'dataset/data/train/0/20150803002/20150803095856.jpg'
[verbose removal log truncated: all image files and their subdirectories under 'dataset/data/train/0' and 'dataset/data/train/2' were removed]
removed 'dataset/data/train/2/20160808001/20160808144845.jpg'
removed 'dataset/data/train/2/20160808001/20160808144812.jpg'
removed 'dataset/data/train/2/20160808001/20160808144553.jpg'
removed directory 'dataset/data/train/2/20160808001'
removed 'dataset/data/train/2/20160720009/20160720151520.jpg'
removed 'dataset/data/train/2/20160720009/20160720151746.jpg'
removed 'dataset/data/train/2/20160720009/20160720151819.jpg'
removed 'dataset/data/train/2/20160720009/20160720151929.jpg'
removed 'dataset/data/train/2/20160720009/20160720151718.jpg'
removed 'dataset/data/train/2/20160720009/20160720151659.jpg'
removed 'dataset/data/train/2/20160720009/20160720151816.jpg'
removed directory 'dataset/data/train/2/20160720009'
removed 'dataset/data/train/2/20151214011/20151214161944.jpg'
removed 'dataset/data/train/2/20151214011/20151214162049.jpg'
removed 'dataset/data/train/2/20151214011/20151214161800.jpg'
removed 'dataset/data/train/2/20151214011/20151214162017.jpg'
removed 'dataset/data/train/2/20151214011/20151214162046.jpg'
removed 'dataset/data/train/2/20151214011/20151214162201.jpg'
removed 'dataset/data/train/2/20151214011/20151214161914.jpg'
removed directory 'dataset/data/train/2/20151214011'
removed 'dataset/data/train/2/20160324005/20160324153156.jpg'
removed 'dataset/data/train/2/20160324005/20160324153226.jpg'
removed 'dataset/data/train/2/20160324005/20160324153130.jpg'
removed 'dataset/data/train/2/20160324005/20160324153258.jpg'
removed 'dataset/data/train/2/20160324005/20160324153256.jpg'
removed 'dataset/data/train/2/20160324005/20160324153345.jpg'
removed 'dataset/data/train/2/20160324005/20160324153010.jpg'
removed directory 'dataset/data/train/2/20160324005'
removed 'dataset/data/train/2/105918500/105918500Image4.jpg'
removed 'dataset/data/train/2/105918500/105918500Image0.jpg'
removed 'dataset/data/train/2/105918500/105918500Image6.jpg'
removed 'dataset/data/train/2/105918500/105918500Image10.jpg'
removed 'dataset/data/train/2/105918500/105918500Image8.jpg'
removed 'dataset/data/train/2/105918500/105918500Image2.jpg'
removed 'dataset/data/train/2/105918500/105918500Image7.jpg'
removed directory 'dataset/data/train/2/105918500'
removed 'dataset/data/train/2/105332243/105332243Image12.jpg'
removed 'dataset/data/train/2/105332243/105332243Image5.jpg'
removed 'dataset/data/train/2/105332243/105332243Image0.jpg'
removed 'dataset/data/train/2/105332243/105332243Image10.jpg'
removed 'dataset/data/train/2/105332243/105332243Image4.jpg'
removed 'dataset/data/train/2/105332243/105332243Image13.jpg'
removed 'dataset/data/train/2/105332243/105332243Image2.jpg'
removed directory 'dataset/data/train/2/105332243'
removed 'dataset/data/train/2/153932600/153932600Image2.jpg'
removed 'dataset/data/train/2/153932600/153932600Image7.jpg'
removed 'dataset/data/train/2/153932600/153932600Image3.jpg'
removed 'dataset/data/train/2/153932600/153932600Image60.jpg'
removed 'dataset/data/train/2/153932600/153932600Image0.jpg'
removed 'dataset/data/train/2/153932600/153932600Image80.jpg'
removed 'dataset/data/train/2/153932600/153932600Image4.jpg'
removed directory 'dataset/data/train/2/153932600'
removed directory 'dataset/data/train/2'
removed directory 'dataset/data/train'
removed 'dataset/data/test/3/20150826002/20150826104148.jpg'
removed 'dataset/data/test/3/20150826002/20150826104150.jpg'
removed 'dataset/data/test/3/20150826002/20150826104127.jpg'
removed 'dataset/data/test/3/20150826002/20150826104147.jpg'
removed 'dataset/data/test/3/20150826002/20150826103859.jpg'
removed 'dataset/data/test/3/20150826002/20150826104106.jpg'
removed 'dataset/data/test/3/20150826002/20150826104107.jpg'
removed directory 'dataset/data/test/3/20150826002'
removed 'dataset/data/test/3/20151119003/20151119152839.jpg'
removed 'dataset/data/test/3/20151119003/20151119152531.jpg'
removed 'dataset/data/test/3/20151119003/20151119152837.jpg'
removed 'dataset/data/test/3/20151119003/20151119152710.jpg'
removed 'dataset/data/test/3/20151119003/20151119152921.jpg'
removed 'dataset/data/test/3/20151119003/20151119152815.jpg'
removed 'dataset/data/test/3/20151119003/20151119152744.jpg'
removed directory 'dataset/data/test/3/20151119003'
removed 'dataset/data/test/3/163546870/163546870Image7.jpg'
removed 'dataset/data/test/3/163546870/163546870Image4.jpg'
removed 'dataset/data/test/3/163546870/163546870Image8.jpg'
removed 'dataset/data/test/3/163546870/163546870Image5.jpg'
removed 'dataset/data/test/3/163546870/163546870Image6.jpg'
removed 'dataset/data/test/3/163546870/163546870Image2.jpg'
removed 'dataset/data/test/3/163546870/163546870Image0.jpg'
removed directory 'dataset/data/test/3/163546870'
removed 'dataset/data/test/3/20151118018/20151118175530.jpg'
removed 'dataset/data/test/3/20151118018/20151118175540.jpg'
removed 'dataset/data/test/3/20151118018/20151118175619.jpg'
removed 'dataset/data/test/3/20151118018/20151118175237.jpg'
removed 'dataset/data/test/3/20151118018/20151118175430.jpg'
removed 'dataset/data/test/3/20151118018/20151118175359.jpg'
removed 'dataset/data/test/3/20151118018/20151118175455.jpg'
removed directory 'dataset/data/test/3/20151118018'
removed 'dataset/data/test/3/151705083/151705083Image2.jpg'
removed 'dataset/data/test/3/151705083/151705083Image7.jpg'
removed 'dataset/data/test/3/151705083/151705083Image8.jpg'
removed 'dataset/data/test/3/151705083/151705083Image3.jpg'
removed 'dataset/data/test/3/151705083/151705083Image5.jpg'
removed 'dataset/data/test/3/151705083/151705083Image6.jpg'
removed 'dataset/data/test/3/151705083/151705083Image0.jpg'
removed directory 'dataset/data/test/3/151705083'
removed 'dataset/data/test/3/20150821002/20150821160244.jpg'
removed 'dataset/data/test/3/20150821002/20150821160551.jpg'
removed 'dataset/data/test/3/20150821002/20150821160621.jpg'
removed 'dataset/data/test/3/20150821002/20150821160755.jpg'
removed 'dataset/data/test/3/20150821002/20150821160510.jpg'
removed 'dataset/data/test/3/20150821002/20150821160648.jpg'
removed 'dataset/data/test/3/20150821002/20150821160650.jpg'
removed directory 'dataset/data/test/3/20150821002'
removed 'dataset/data/test/3/153226430/153226430Image2.jpg'
removed 'dataset/data/test/3/153226430/153226430Image6.jpg'
removed 'dataset/data/test/3/153226430/153226430Image0.jpg'
removed 'dataset/data/test/3/153226430/153226430Image3.jpg'
removed 'dataset/data/test/3/153226430/153226430Image4.jpg'
removed 'dataset/data/test/3/153226430/153226430Image7.jpg'
removed 'dataset/data/test/3/153226430/153226430Image8.jpg'
removed directory 'dataset/data/test/3/153226430'
removed 'dataset/data/test/3/165048077/165048077Image3.jpg'
removed 'dataset/data/test/3/165048077/165048077Image5.jpg'
removed 'dataset/data/test/3/165048077/165048077Image7.jpg'
removed 'dataset/data/test/3/165048077/165048077Image0.jpg'
removed 'dataset/data/test/3/165048077/165048077Image6.jpg'
removed 'dataset/data/test/3/165048077/165048077Image4.jpg'
removed 'dataset/data/test/3/165048077/165048077Image8.jpg'
removed directory 'dataset/data/test/3/165048077'
removed 'dataset/data/test/3/154649120/154649120Image3.jpg'
removed 'dataset/data/test/3/154649120/154649120Image4.jpg'
removed 'dataset/data/test/3/154649120/154649120Image0.jpg'
removed 'dataset/data/test/3/154649120/154649120Image10.jpg'
removed 'dataset/data/test/3/154649120/154649120Image11.jpg'
removed 'dataset/data/test/3/154649120/154649120Image2.jpg'
removed 'dataset/data/test/3/154649120/154649120Image9.jpg'
removed directory 'dataset/data/test/3/154649120'
removed 'dataset/data/test/3/174946503/174946503Image6.jpg'
removed 'dataset/data/test/3/174946503/174946503Image5.jpg'
removed 'dataset/data/test/3/174946503/174946503Image8.jpg'
removed 'dataset/data/test/3/174946503/174946503Image0.jpg'
removed 'dataset/data/test/3/174946503/174946503Image3.jpg'
removed 'dataset/data/test/3/174946503/174946503Image9.jpg'
removed 'dataset/data/test/3/174946503/174946503Image10.jpg'
removed directory 'dataset/data/test/3/174946503'
removed 'dataset/data/test/3/20150805013/20150805164752.jpg'
removed 'dataset/data/test/3/20150805013/20150805164727.jpg'
removed 'dataset/data/test/3/20150805013/20150805164531.jpg'
removed 'dataset/data/test/3/20150805013/20150805164853.jpg'
removed 'dataset/data/test/3/20150805013/20150805164822.jpg'
removed 'dataset/data/test/3/20150805013/20150805165042.jpg'
removed 'dataset/data/test/3/20150805013/20150805164854.jpg'
removed directory 'dataset/data/test/3/20150805013'
removed 'dataset/data/test/3/20150717003/20150717152961.jpg'
removed 'dataset/data/test/3/20150717003/20150717152616.jpg'
removed 'dataset/data/test/3/20150717003/20150717152951.jpg'
removed 'dataset/data/test/3/20150717003/20150717152921.jpg'
removed 'dataset/data/test/3/20150717003/20150717153107.jpg'
removed 'dataset/data/test/3/20150717003/20150717152852.jpg'
removed 'dataset/data/test/3/20150717003/20150717152829.jpg'
removed directory 'dataset/data/test/3/20150717003'
removed directory 'dataset/data/test/3'
removed 'dataset/data/test/1/20150819006/20150819152905.jpg'
removed 'dataset/data/test/1/20150819006/20150819152827.jpg'
removed 'dataset/data/test/1/20150819006/20150819152726.jpg'
removed 'dataset/data/test/1/20150819006/20150819152524.jpg'
removed 'dataset/data/test/1/20150819006/20150819152825.jpg'
removed 'dataset/data/test/1/20150819006/20150819152652.jpg'
removed 'dataset/data/test/1/20150819006/20150819152755.jpg'
removed directory 'dataset/data/test/1/20150819006'
removed 'dataset/data/test/1/164145173/164145173Image7.jpg'
removed 'dataset/data/test/1/164145173/164145173Image6.jpg'
removed 'dataset/data/test/1/164145173/164145173Image3.jpg'
removed 'dataset/data/test/1/164145173/164145173Image4.jpg'
removed 'dataset/data/test/1/164145173/164145173Image2.jpg'
removed 'dataset/data/test/1/164145173/164145173Image0.jpg'
removed 'dataset/data/test/1/164145173/164145173Image8.jpg'
removed directory 'dataset/data/test/1/164145173'
removed 'dataset/data/test/1/20150812006/20150812144206.jpg'
removed 'dataset/data/test/1/20150812006/20150812143943.jpg'
removed 'dataset/data/test/1/20150812006/20150812144318.jpg'
removed 'dataset/data/test/1/20150812006/20150812144047.jpg'
removed 'dataset/data/test/1/20150812006/20150812144013.jpg'
removed 'dataset/data/test/1/20150812006/20150812143825.jpg'
removed 'dataset/data/test/1/20150812006/20150812144114.jpg'
removed directory 'dataset/data/test/1/20150812006'
removed 'dataset/data/test/1/162231763/162231763Image2.jpg'
removed 'dataset/data/test/1/162231763/162231763Image7.jpg'
removed 'dataset/data/test/1/162231763/162231763Image3.jpg'
removed 'dataset/data/test/1/162231763/162231763Image6.jpg'
removed 'dataset/data/test/1/162231763/162231763Image8.jpg'
removed 'dataset/data/test/1/162231763/162231763Image5.jpg'
removed 'dataset/data/test/1/162231763/162231763Image0.jpg'
removed directory 'dataset/data/test/1/162231763'
removed 'dataset/data/test/1/20150729004/20150729142712.jpg'
removed 'dataset/data/test/1/20150729004/20150729143009.jpg'
removed 'dataset/data/test/1/20150729004/20150729142617.jpg'
removed 'dataset/data/test/1/20150729004/20150729142638.jpg'
removed 'dataset/data/test/1/20150729004/20150729142536.jpg'
removed 'dataset/data/test/1/20150729004/20150729142728.jpg'
removed 'dataset/data/test/1/20150729004/20150729142418.jpg'
removed directory 'dataset/data/test/1/20150729004'
removed 'dataset/data/test/1/165554510/165554510Image4.jpg'
removed 'dataset/data/test/1/165554510/165554510Image8.jpg'
removed 'dataset/data/test/1/165554510/165554510Image6.jpg'
removed 'dataset/data/test/1/165554510/165554510Image5.jpg'
removed 'dataset/data/test/1/165554510/165554510Image0.jpg'
removed 'dataset/data/test/1/165554510/165554510Image3.jpg'
removed 'dataset/data/test/1/165554510/165554510Image7.jpg'
removed directory 'dataset/data/test/1/165554510'
removed 'dataset/data/test/1/20150818001/20150818113549.jpg'
removed 'dataset/data/test/1/20150818001/20150818113719.jpg'
removed 'dataset/data/test/1/20150818001/20150818113827.jpg'
removed 'dataset/data/test/1/20150818001/20150818113721.jpg'
removed 'dataset/data/test/1/20150818001/20150818113649.jpg'
removed 'dataset/data/test/1/20150818001/20150818113423.jpg'
removed 'dataset/data/test/1/20150818001/20150818113620.jpg'
removed directory 'dataset/data/test/1/20150818001'
removed 'dataset/data/test/1/20150731002/20150731164418.jpg'
removed 'dataset/data/test/1/20150731002/20150731164344.jpg'
removed 'dataset/data/test/1/20150731002/20150731164116.jpg'
removed 'dataset/data/test/1/20150731002/20150731164411.jpg'
removed 'dataset/data/test/1/20150731002/20150731164301.jpg'
removed 'dataset/data/test/1/20150731002/20150731164556.jpg'
removed 'dataset/data/test/1/20150731002/20150731164316.jpg'
removed directory 'dataset/data/test/1/20150731002'
removed 'dataset/data/test/1/163856190/163856190Image3.jpg'
removed 'dataset/data/test/1/163856190/163856190Image5.jpg'
removed 'dataset/data/test/1/163856190/163856190Image2.jpg'
removed 'dataset/data/test/1/163856190/163856190Image7.jpg'
removed 'dataset/data/test/1/163856190/163856190Image9.jpg'
removed 'dataset/data/test/1/163856190/163856190Image6.jpg'
removed 'dataset/data/test/1/163856190/163856190Image0.jpg'
removed directory 'dataset/data/test/1/163856190'
removed 'dataset/data/test/1/20150826008/20150826153113.jpg'
removed 'dataset/data/test/1/20150826008/20150826153539.jpg'
removed 'dataset/data/test/1/20150826008/20150826153241.jpg'
removed 'dataset/data/test/1/20150826008/20150826153425.jpg'
removed 'dataset/data/test/1/20150826008/20150826153340.jpg'
removed 'dataset/data/test/1/20150826008/20150826153410.jpg'
removed 'dataset/data/test/1/20150826008/20150826153310.jpg'
removed directory 'dataset/data/test/1/20150826008'
removed 'dataset/data/test/1/171212253/171212253Image5.jpg'
removed 'dataset/data/test/1/171212253/171212253Image4.jpg'
removed 'dataset/data/test/1/171212253/171212253Image6.jpg'
removed 'dataset/data/test/1/171212253/171212253Image9.jpg'
removed 'dataset/data/test/1/171212253/171212253Image3.jpg'
removed 'dataset/data/test/1/171212253/171212253Image0.jpg'
removed 'dataset/data/test/1/171212253/171212253Image8.jpg'
removed directory 'dataset/data/test/1/171212253'
removed 'dataset/data/test/1/20150819011/20150819163415.jpg'
removed 'dataset/data/test/1/20150819011/20150819163708.jpg'
removed 'dataset/data/test/1/20150819011/20150819163538.jpg'
removed 'dataset/data/test/1/20150819011/20150819163638.jpg'
removed 'dataset/data/test/1/20150819011/20150819163805.jpg'
removed 'dataset/data/test/1/20150819011/20150819163778.jpg'
removed 'dataset/data/test/1/20150819011/20150819163608.jpg'
removed directory 'dataset/data/test/1/20150819011'
removed directory 'dataset/data/test/1'
removed 'dataset/data/test/0/20151111007/20151111154626.jpg'
removed 'dataset/data/test/0/20151111007/20151111154450.jpg'
removed 'dataset/data/test/0/20151111007/20151111154740.jpg'
removed 'dataset/data/test/0/20151111007/20151111154800.jpg'
removed 'dataset/data/test/0/20151111007/20151111154652.jpg'
removed 'dataset/data/test/0/20151111007/20151111154900.jpg'
removed 'dataset/data/test/0/20151111007/20151111154725.jpg'
removed directory 'dataset/data/test/0/20151111007'
removed 'dataset/data/test/0/20151106002/20151106101957.jpg'
removed 'dataset/data/test/0/20151106002/20151106101845.jpg'
removed 'dataset/data/test/0/20151106002/20151106101644.jpg'
removed 'dataset/data/test/0/20151106002/20151106101905.jpg'
removed 'dataset/data/test/0/20151106002/20151106102000.jpg'
removed 'dataset/data/test/0/20151106002/20151106102110.jpg'
removed 'dataset/data/test/0/20151106002/20151106101935.jpg'
removed directory 'dataset/data/test/0/20151106002'
removed 'dataset/data/test/0/20151111002/20151111144420.jpg'
removed 'dataset/data/test/0/20151111002/20151111144348.jpg'
removed 'dataset/data/test/0/20151111002/20151111144515.jpg'
removed 'dataset/data/test/0/20151111002/20151111144511.jpg'
removed 'dataset/data/test/0/20151111002/20151111144654.jpg'
removed 'dataset/data/test/0/20151111002/20151111144157.jpg'
removed 'dataset/data/test/0/20151111002/20151111144506.jpg'
removed directory 'dataset/data/test/0/20151111002'
removed 'dataset/data/test/0/20151103002/20151103113755.jpg'
removed 'dataset/data/test/0/20151103002/20151103113659.jpg'
removed 'dataset/data/test/0/20151103002/20151103113752.jpg'
removed 'dataset/data/test/0/20151103002/20151103113722.jpg'
removed 'dataset/data/test/0/20151103002/20151103113833.jpg'
removed 'dataset/data/test/0/20151103002/20151103113458.jpg'
removed 'dataset/data/test/0/20151103002/20151103113637.jpg'
removed directory 'dataset/data/test/0/20151103002'
removed 'dataset/data/test/0/20151118011/20151118163318.jpg'
removed 'dataset/data/test/0/20151118011/20151118162920.jpg'
removed 'dataset/data/test/0/20151118011/20151118163215.jpg'
removed 'dataset/data/test/0/20151118011/20151118163137.jpg'
removed 'dataset/data/test/0/20151118011/20151118163150.jpg'
removed 'dataset/data/test/0/20151118011/20151118163100.jpg'
removed 'dataset/data/test/0/20151118011/20151118163218.jpg'
removed directory 'dataset/data/test/0/20151118011'
removed 'dataset/data/test/0/20151117005/20151117112126.jpg'
removed 'dataset/data/test/0/20151117005/20151117112400.jpg'
removed 'dataset/data/test/0/20151117005/20151117112146.jpg'
removed 'dataset/data/test/0/20151117005/20151117111950.jpg'
removed 'dataset/data/test/0/20151117005/20151117112246.jpg'
removed 'dataset/data/test/0/20151117005/20151117112508.jpg'
removed 'dataset/data/test/0/20151117005/20151117112314.jpg'
removed directory 'dataset/data/test/0/20151117005'
removed 'dataset/data/test/0/20151118009/20151118160970.jpg'
removed 'dataset/data/test/0/20151118009/20151118160649.jpg'
removed 'dataset/data/test/0/20151118009/20151118160839.jpg'
removed 'dataset/data/test/0/20151118009/20151118160853.jpg'
removed 'dataset/data/test/0/20151118009/20151118160952.jpg'
removed 'dataset/data/test/0/20151118009/20151118160924.jpg'
removed 'dataset/data/test/0/20151118009/20151118161030.jpg'
removed directory 'dataset/data/test/0/20151118009'
removed 'dataset/data/test/0/20151111004/20151111151152.jpg'
removed 'dataset/data/test/0/20151111004/20151111151033.jpg'
removed 'dataset/data/test/0/20151111004/20151111150820.jpg'
removed 'dataset/data/test/0/20151111004/20151111151160.jpg'
removed 'dataset/data/test/0/20151111004/20151111151127.jpg'
removed 'dataset/data/test/0/20151111004/20151111151245.jpg'
removed 'dataset/data/test/0/20151111004/20151111151104.jpg'
removed directory 'dataset/data/test/0/20151111004'
removed 'dataset/data/test/0/20151103005/20151103161836.jpg'
removed 'dataset/data/test/0/20151103005/20151103162027.jpg'
removed 'dataset/data/test/0/20151103005/20151103162122.jpg'
removed 'dataset/data/test/0/20151103005/20151103161719.jpg'
removed 'dataset/data/test/0/20151103005/20151103161938.jpg'
removed 'dataset/data/test/0/20151103005/20151103161908.jpg'
removed 'dataset/data/test/0/20151103005/20151103162254.jpg'
removed directory 'dataset/data/test/0/20151103005'
removed 'dataset/data/test/0/20151111010/20151111163257.jpg'
removed 'dataset/data/test/0/20151111010/20151111163105.jpg'
removed 'dataset/data/test/0/20151111010/20151111163012.jpg'
removed 'dataset/data/test/0/20151111010/20151111162959.jpg'
removed 'dataset/data/test/0/20151111010/20151111163041.jpg'
removed 'dataset/data/test/0/20151111010/20151111163115.jpg'
removed 'dataset/data/test/0/20151111010/20151111162830.jpg'
removed directory 'dataset/data/test/0/20151111010'
removed 'dataset/data/test/0/20151113012/20151113164201.jpg'
removed 'dataset/data/test/0/20151113012/20151113163859.jpg'
removed 'dataset/data/test/0/20151113012/20151113163928.jpg'
removed 'dataset/data/test/0/20151113012/20151113163733.jpg'
removed 'dataset/data/test/0/20151113012/20151113164000.jpg'
removed 'dataset/data/test/0/20151113012/20151113164028.jpg'
removed 'dataset/data/test/0/20151113012/20151113164100.jpg'
removed directory 'dataset/data/test/0/20151113012'
removed 'dataset/data/test/0/20151118006/20151118144519.jpg'
removed 'dataset/data/test/0/20151118006/20151118144414.jpg'
removed 'dataset/data/test/0/20151118006/20151118144448.jpg'
removed 'dataset/data/test/0/20151118006/20151118144600.jpg'
removed 'dataset/data/test/0/20151118006/20151118144610.jpg'
removed 'dataset/data/test/0/20151118006/20151118144223.jpg'
removed 'dataset/data/test/0/20151118006/20151118144344.jpg'
removed directory 'dataset/data/test/0/20151118006'
removed directory 'dataset/data/test/0'
removed 'dataset/data/test/2/20150722013/20150722161913.jpg'
removed 'dataset/data/test/2/20150722013/20150722162013.jpg'
removed 'dataset/data/test/2/20150722013/20150722162101.jpg'
removed 'dataset/data/test/2/20150722013/20150722161844.jpg'
removed 'dataset/data/test/2/20150722013/20150722161717.jpg'
removed 'dataset/data/test/2/20150722013/20150722161943.jpg'
removed 'dataset/data/test/2/20150722013/20150722162015.jpg'
removed directory 'dataset/data/test/2/20150722013'
removed 'dataset/data/test/2/20150805009/20150805154760.jpg'
removed 'dataset/data/test/2/20150805009/20150805154923.jpg'
removed 'dataset/data/test/2/20150805009/20150805154759.jpg'
removed 'dataset/data/test/2/20150805009/20150805154629.jpg'
removed 'dataset/data/test/2/20150805009/20150805154701.jpg'
removed 'dataset/data/test/2/20150805009/20150805154502.jpg'
removed 'dataset/data/test/2/20150805009/20150805154729.jpg'
removed directory 'dataset/data/test/2/20150805009'
removed 'dataset/data/test/2/20150812011/20150812165027.jpg'
removed 'dataset/data/test/2/20150812011/20150812165159.jpg'
removed 'dataset/data/test/2/20150812011/20150812165126.jpg'
removed 'dataset/data/test/2/20150812011/20150812165247.jpg'
removed 'dataset/data/test/2/20150812011/20150812165156.jpg'
removed 'dataset/data/test/2/20150812011/20150812164905.jpg'
removed 'dataset/data/test/2/20150812011/20150812165058.jpg'
removed directory 'dataset/data/test/2/20150812011'
removed 'dataset/data/test/2/163747350/163747350Image7.jpg'
removed 'dataset/data/test/2/163747350/163747350Image6.jpg'
removed 'dataset/data/test/2/163747350/163747350Image0.jpg'
removed 'dataset/data/test/2/163747350/163747350Image8.jpg'
removed 'dataset/data/test/2/163747350/163747350Image4.jpg'
removed 'dataset/data/test/2/163747350/163747350Image2.jpg'
removed 'dataset/data/test/2/163747350/163747350Image3.jpg'
removed directory 'dataset/data/test/2/163747350'
removed 'dataset/data/test/2/165313413/165313413Image7.jpg'
removed 'dataset/data/test/2/165313413/165313413Image2.jpg'
removed 'dataset/data/test/2/165313413/165313413Image10.jpg'
removed 'dataset/data/test/2/165313413/165313413Image0.jpg'
removed 'dataset/data/test/2/165313413/165313413Image4.jpg'
removed 'dataset/data/test/2/165313413/165313413Image8.jpg'
removed 'dataset/data/test/2/165313413/165313413Image5.jpg'
removed directory 'dataset/data/test/2/165313413'
removed 'dataset/data/test/2/162334723/162334723Image8.jpg'
removed 'dataset/data/test/2/162334723/162334723Image4.jpg'
removed 'dataset/data/test/2/162334723/162334723Image0.jpg'
removed 'dataset/data/test/2/162334723/162334723Image9.jpg'
removed 'dataset/data/test/2/162334723/162334723Image12.jpg'
removed 'dataset/data/test/2/162334723/162334723Image10.jpg'
removed 'dataset/data/test/2/162334723/162334723Image2.jpg'
removed directory 'dataset/data/test/2/162334723'
removed 'dataset/data/test/2/162403397/162403397Image80.jpg'
removed 'dataset/data/test/2/162403397/162403397Image100.jpg'
removed 'dataset/data/test/2/162403397/162403397Image9.jpg'
removed 'dataset/data/test/2/162403397/162403397Image1.jpg'
removed 'dataset/data/test/2/162403397/162403397Image3.jpg'
removed 'dataset/data/test/2/162403397/162403397Image4.jpg'
removed 'dataset/data/test/2/162403397/162403397Image0.jpg'
removed directory 'dataset/data/test/2/162403397'
removed 'dataset/data/test/2/162021000/162021000Image70.jpg'
removed 'dataset/data/test/2/162021000/162021000Image0.jpg'
removed 'dataset/data/test/2/162021000/162021000Image6.jpg'
removed 'dataset/data/test/2/162021000/162021000Image3.jpg'
removed 'dataset/data/test/2/162021000/162021000Image9.jpg'
removed 'dataset/data/test/2/162021000/162021000Image8.jpg'
removed 'dataset/data/test/2/162021000/162021000Image100.jpg'
removed directory 'dataset/data/test/2/162021000'
removed 'dataset/data/test/2/20150818003/20150818165742.jpg'
removed 'dataset/data/test/2/20150818003/20150818165643.jpg'
removed 'dataset/data/test/2/20150818003/20150818165454.jpg'
removed 'dataset/data/test/2/20150818003/20150818165815.jpg'
removed 'dataset/data/test/2/20150818003/20150818165812.jpg'
removed 'dataset/data/test/2/20150818003/20150818165713.jpg'
removed 'dataset/data/test/2/20150818003/20150818165916.jpg'
removed directory 'dataset/data/test/2/20150818003'
removed 'dataset/data/test/2/20150729007/20150729170203.jpg'
removed 'dataset/data/test/2/20150729007/20150729165856.jpg'
removed 'dataset/data/test/2/20150729007/20150729170022.jpg'
removed 'dataset/data/test/2/20150729007/20150729165922.jpg'
removed 'dataset/data/test/2/20150729007/20150729165730.jpg'
removed 'dataset/data/test/2/20150729007/20150729170025.jpg'
removed 'dataset/data/test/2/20150729007/20150729165954.jpg'
removed directory 'dataset/data/test/2/20150729007'
removed 'dataset/data/test/2/20150731003/20150731171123.jpg'
removed 'dataset/data/test/2/20150731003/20150731170730.jpg'
removed 'dataset/data/test/2/20150731003/20150731170906.jpg'
removed 'dataset/data/test/2/20150731003/20150731170908.jpg'
removed 'dataset/data/test/2/20150731003/20150731170522.jpg'
removed 'dataset/data/test/2/20150731003/20150731170755.jpg'
removed 'dataset/data/test/2/20150731003/20150731170825.jpg'
removed directory 'dataset/data/test/2/20150731003'
removed 'dataset/data/test/2/163138313/163138313Image0.jpg'
removed 'dataset/data/test/2/163138313/163138313Image2.jpg'
removed 'dataset/data/test/2/163138313/163138313Image5.jpg'
removed 'dataset/data/test/2/163138313/163138313Image8.jpg'
removed 'dataset/data/test/2/163138313/163138313Image7.jpg'
removed 'dataset/data/test/2/163138313/163138313Image4.jpg'
removed 'dataset/data/test/2/163138313/163138313Image3.jpg'
removed directory 'dataset/data/test/2/163138313'
removed directory 'dataset/data/test/2'
removed directory 'dataset/data/test'
removed directory 'dataset/data'
removed 'dataset/cervigram-image-dataset-v2.zip'
removed directory 'dataset'
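The "removed ..." lines above and the "creating:"/"inflating:" lines that follow correspond to wiping the previously extracted dataset and unpacking cervigram-image-dataset-v2.zip again. A minimal sketch of an equivalent reset-and-extract step is shown below; the archive location and the use of shutil/zipfile are assumptions for illustration only (the notebook itself most likely shells out to rm -v and unzip, which is what produces this output format).

import shutil
import zipfile
from pathlib import Path

# Assumed locations; adjust to wherever the archive actually lives.
ARCHIVE = Path("dataset/cervigram-image-dataset-v2.zip")
EXTRACT_ROOT = Path("dataset")

# Drop any previously extracted copy so the train/test split is rebuilt cleanly.
if (EXTRACT_ROOT / "data").exists():
    shutil.rmtree(EXTRACT_ROOT / "data")

# Re-extract; this recreates dataset/data/{train,test}/{0,1,2,3}/<case>/<frame>.jpg,
# with seven frames per case directory.
with zipfile.ZipFile(ARCHIVE) as zf:
    zf.extractall(EXTRACT_ROOT)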
Archive: dataset/cervigram-image-dataset-v2.zip
[extraction log truncated: unzip recreates dataset/data/test/{0,1,2,3}/ and then begins dataset/data/train/, creating one directory per case and inflating seven .jpg frames into each]
creating: dataset/data/train/0/20150911006/
inflating: dataset/data/train/0/20150911006/20150911160646.jpg
inflating: dataset/data/train/0/20150911006/20150911160825.jpg
inflating: dataset/data/train/0/20150911006/20150911160836.jpg
inflating: dataset/data/train/0/20150911006/20150911160905.jpg
inflating: dataset/data/train/0/20150911006/20150911160934.jpg
inflating: dataset/data/train/0/20150911006/20150911160936.jpg
inflating: dataset/data/train/0/20150911006/20150911161031.jpg
creating: dataset/data/train/0/20150916005/
inflating: dataset/data/train/0/20150916005/20150916145030.jpg
inflating: dataset/data/train/0/20150916005/20150916145155.jpg
inflating: dataset/data/train/0/20150916005/20150916145222.jpg
inflating: dataset/data/train/0/20150916005/20150916145252.jpg
inflating: dataset/data/train/0/20150916005/20150916145328.jpg
inflating: dataset/data/train/0/20150916005/20150916145332.jpg
inflating: dataset/data/train/0/20150916005/20150916145450.jpg
creating: dataset/data/train/0/20150916011/
inflating: dataset/data/train/0/20150916011/20150916160128.jpg
inflating: dataset/data/train/0/20150916011/20150916160249.jpg
inflating: dataset/data/train/0/20150916011/20150916160316.jpg
inflating: dataset/data/train/0/20150916011/20150916160346.jpg
inflating: dataset/data/train/0/20150916011/20150916160415.jpg
inflating: dataset/data/train/0/20150916011/20150916160455.jpg
inflating: dataset/data/train/0/20150916011/20150916160518.jpg
creating: dataset/data/train/0/20150916012/
inflating: dataset/data/train/0/20150916012/20150916163607.jpg
inflating: dataset/data/train/0/20150916012/20150916163727.jpg
inflating: dataset/data/train/0/20150916012/20150916163752.jpg
inflating: dataset/data/train/0/20150916012/20150916163822.jpg
inflating: dataset/data/train/0/20150916012/20150916163852.jpg
inflating: dataset/data/train/0/20150916012/20150916163869.jpg
inflating: dataset/data/train/0/20150916012/20150916163945.jpg
creating: dataset/data/train/0/20150917002/
inflating: dataset/data/train/0/20150917002/20150917101429.jpg
inflating: dataset/data/train/0/20150917002/20150917101453.jpg
inflating: dataset/data/train/0/20150917002/20150917101615.jpg
inflating: dataset/data/train/0/20150917002/20150917101657.jpg
inflating: dataset/data/train/0/20150917002/20150917101724.jpg
inflating: dataset/data/train/0/20150917002/20150917101729.jpg
inflating: dataset/data/train/0/20150917002/20150917101829.jpg
creating: dataset/data/train/0/20150918003/
inflating: dataset/data/train/0/20150918003/20150918151028.jpg
inflating: dataset/data/train/0/20150918003/20150918151201.jpg
inflating: dataset/data/train/0/20150918003/20150918151223.jpg
inflating: dataset/data/train/0/20150918003/20150918151248.jpg
inflating: dataset/data/train/0/20150918003/20150918151318.jpg
inflating: dataset/data/train/0/20150918003/20150918151351.jpg
inflating: dataset/data/train/0/20150918003/20150918151519.jpg
creating: dataset/data/train/0/20150923007/
inflating: dataset/data/train/0/20150923007/20150923155803.jpg
inflating: dataset/data/train/0/20150923007/20150923155939.jpg
inflating: dataset/data/train/0/20150923007/20150923160007.jpg
inflating: dataset/data/train/0/20150923007/20150923160036.jpg
inflating: dataset/data/train/0/20150923007/20150923160107.jpg
inflating: dataset/data/train/0/20150923007/20150923160122.jpg
inflating: dataset/data/train/0/20150923007/20150923160143.jpg
creating: dataset/data/train/0/20150923011/
inflating: dataset/data/train/0/20150923011/20150923164045.jpg
inflating: dataset/data/train/0/20150923011/20150923164224.jpg
inflating: dataset/data/train/0/20150923011/20150923164233.jpg
inflating: dataset/data/train/0/20150923011/20150923164303.jpg
inflating: dataset/data/train/0/20150923011/20150923164333.jpg
inflating: dataset/data/train/0/20150923011/20150923164380.jpg
inflating: dataset/data/train/0/20150923011/20150923164411.jpg
creating: dataset/data/train/0/20150930008/
inflating: dataset/data/train/0/20150930008/20150930154113.jpg
inflating: dataset/data/train/0/20150930008/20150930154301.jpg
inflating: dataset/data/train/0/20150930008/20150930154321.jpg
inflating: dataset/data/train/0/20150930008/20150930154342.jpg
inflating: dataset/data/train/0/20150930008/20150930154414.jpg
inflating: dataset/data/train/0/20150930008/20150930154450.jpg
inflating: dataset/data/train/0/20150930008/20150930154509.jpg
creating: dataset/data/train/0/20151012006/
inflating: dataset/data/train/0/20151012006/20151012155835.jpg
inflating: dataset/data/train/0/20151012006/20151012155951.jpg
inflating: dataset/data/train/0/20151012006/20151012160021.jpg
inflating: dataset/data/train/0/20151012006/20151012160051.jpg
inflating: dataset/data/train/0/20151012006/20151012160121.jpg
inflating: dataset/data/train/0/20151012006/20151012160200.jpg
inflating: dataset/data/train/0/20151012006/20151012160216.jpg
creating: dataset/data/train/0/20151014011/
inflating: dataset/data/train/0/20151014011/20151014161255.jpg
inflating: dataset/data/train/0/20151014011/20151014161421.jpg
inflating: dataset/data/train/0/20151014011/20151014161450.jpg
inflating: dataset/data/train/0/20151014011/20151014161517.jpg
inflating: dataset/data/train/0/20151014011/20151014161547.jpg
inflating: dataset/data/train/0/20151014011/20151014161551.jpg
inflating: dataset/data/train/0/20151014011/20151014161629.jpg
creating: dataset/data/train/0/20151016002/
inflating: dataset/data/train/0/20151016002/20151016145537.jpg
inflating: dataset/data/train/0/20151016002/20151016145703.jpg
inflating: dataset/data/train/0/20151016002/20151016145726.jpg
inflating: dataset/data/train/0/20151016002/20151016145803.jpg
inflating: dataset/data/train/0/20151016002/20151016145826.jpg
inflating: dataset/data/train/0/20151016002/20151016145828.jpg
inflating: dataset/data/train/0/20151016002/20151016145932.jpg
creating: dataset/data/train/0/20151021002/
inflating: dataset/data/train/0/20151021002/20151021101700.jpg
inflating: dataset/data/train/0/20151021002/20151021101831.jpg
inflating: dataset/data/train/0/20151021002/20151021101853.jpg
inflating: dataset/data/train/0/20151021002/20151021101927.jpg
inflating: dataset/data/train/0/20151021002/20151021101953.jpg
inflating: dataset/data/train/0/20151021002/20151021102000.jpg
inflating: dataset/data/train/0/20151021002/20151021102104.jpg
creating: dataset/data/train/0/20151023005/
inflating: dataset/data/train/0/20151023005/20151023152438.jpg
inflating: dataset/data/train/0/20151023005/20151023152554.jpg
inflating: dataset/data/train/0/20151023005/20151023152624.jpg
inflating: dataset/data/train/0/20151023005/20151023152653.jpg
inflating: dataset/data/train/0/20151023005/20151023152723.jpg
inflating: dataset/data/train/0/20151023005/20151023152740.jpg
inflating: dataset/data/train/0/20151023005/20151023152818.jpg
creating: dataset/data/train/0/20151026002/
inflating: dataset/data/train/0/20151026002/20151026092135.jpg
inflating: dataset/data/train/0/20151026002/20151026092258.jpg
inflating: dataset/data/train/0/20151026002/20151026092324.jpg
inflating: dataset/data/train/0/20151026002/20151026092354.jpg
inflating: dataset/data/train/0/20151026002/20151026092425.jpg
inflating: dataset/data/train/0/20151026002/20151026092448.jpg
inflating: dataset/data/train/0/20151026002/20151026092602.jpg
creating: dataset/data/train/0/20151027002/
inflating: dataset/data/train/0/20151027002/20151027153635.jpg
inflating: dataset/data/train/0/20151027002/20151027153807.jpg
inflating: dataset/data/train/0/20151027002/20151027153836.jpg
inflating: dataset/data/train/0/20151027002/20151027153858.jpg
inflating: dataset/data/train/0/20151027002/20151027153921.jpg
inflating: dataset/data/train/0/20151027002/20151027154016.jpg
inflating: dataset/data/train/0/20151027002/20151027154252.jpg
creating: dataset/data/train/0/20151028001/
inflating: dataset/data/train/0/20151028001/20151028100852.jpg
inflating: dataset/data/train/0/20151028001/20151028100907.jpg
inflating: dataset/data/train/0/20151028001/20151028101037.jpg
inflating: dataset/data/train/0/20151028001/20151028101051.jpg
inflating: dataset/data/train/0/20151028001/20151028101139.jpg
inflating: dataset/data/train/0/20151028001/20151028101200.jpg
inflating: dataset/data/train/0/20151028001/20151028101322.jpg
creating: dataset/data/train/0/20151028009/
inflating: dataset/data/train/0/20151028009/20151028152020.jpg
inflating: dataset/data/train/0/20151028009/20151028152157.jpg
inflating: dataset/data/train/0/20151028009/20151028152211.jpg
inflating: dataset/data/train/0/20151028009/20151028152241.jpg
inflating: dataset/data/train/0/20151028009/20151028152311.jpg
inflating: dataset/data/train/0/20151028009/20151028152340.jpg
inflating: dataset/data/train/0/20151028009/20151028152429.jpg
creating: dataset/data/train/0/20151028014/
inflating: dataset/data/train/0/20151028014/20151028162216.jpg
inflating: dataset/data/train/0/20151028014/20151028162334.jpg
inflating: dataset/data/train/0/20151028014/20151028162404.jpg
inflating: dataset/data/train/0/20151028014/20151028162407.jpg
inflating: dataset/data/train/0/20151028014/20151028162434.jpg
inflating: dataset/data/train/0/20151028014/20151028162700.jpg
inflating: dataset/data/train/0/20151028014/20151028162836.jpg
creating: dataset/data/train/0/20151028016/
inflating: dataset/data/train/0/20151028016/20151028170023.jpg
inflating: dataset/data/train/0/20151028016/20151028170202.jpg
inflating: dataset/data/train/0/20151028016/20151028170221.jpg
inflating: dataset/data/train/0/20151028016/20151028170251.jpg
inflating: dataset/data/train/0/20151028016/20151028170326.jpg
inflating: dataset/data/train/0/20151028016/20151028170352.jpg
inflating: dataset/data/train/0/20151028016/20151028170509.jpg
creating: dataset/data/train/0/20151029001/
inflating: dataset/data/train/0/20151029001/20151029101103.jpg
inflating: dataset/data/train/0/20151029001/20151029101332.jpg
inflating: dataset/data/train/0/20151029001/20151029101338.jpg
inflating: dataset/data/train/0/20151029001/20151029101417.jpg
inflating: dataset/data/train/0/20151029001/20151029101449.jpg
inflating: dataset/data/train/0/20151029001/20151029101500.jpg
inflating: dataset/data/train/0/20151029001/20151029101631.jpg
creating: dataset/data/train/0/20151030003/
inflating: dataset/data/train/0/20151030003/20151030145309.jpg
inflating: dataset/data/train/0/20151030003/20151030145425.jpg
inflating: dataset/data/train/0/20151030003/20151030145455.jpg
inflating: dataset/data/train/0/20151030003/20151030145526.jpg
inflating: dataset/data/train/0/20151030003/20151030145555.jpg
inflating: dataset/data/train/0/20151030003/20151030145600.jpg
inflating: dataset/data/train/0/20151030003/20151030145641.jpg
creating: dataset/data/train/0/20151031002/
inflating: dataset/data/train/0/20151031002/20151031105551.jpg
inflating: dataset/data/train/0/20151031002/20151031105817.jpg
inflating: dataset/data/train/0/20151031002/20151031105828.jpg
inflating: dataset/data/train/0/20151031002/20151031105859.jpg
inflating: dataset/data/train/0/20151031002/20151031105927.jpg
inflating: dataset/data/train/0/20151031002/20151031105980.jpg
inflating: dataset/data/train/0/20151031002/20151031110102.jpg
creating: dataset/data/train/0/20151118013/
inflating: dataset/data/train/0/20151118013/20151118165442.jpg
inflating: dataset/data/train/0/20151118013/20151118165608.jpg
inflating: dataset/data/train/0/20151118013/20151118165637.jpg
inflating: dataset/data/train/0/20151118013/20151118165708.jpg
inflating: dataset/data/train/0/20151118013/20151118165737.jpg
inflating: dataset/data/train/0/20151118013/20151118165800.jpg
inflating: dataset/data/train/0/20151118013/20151118165833.jpg
creating: dataset/data/train/0/20151119001/
inflating: dataset/data/train/0/20151119001/20151119095816.jpg
inflating: dataset/data/train/0/20151119001/20151119100027.jpg
inflating: dataset/data/train/0/20151119001/20151119100043.jpg
inflating: dataset/data/train/0/20151119001/20151119100108.jpg
inflating: dataset/data/train/0/20151119001/20151119100144.jpg
inflating: dataset/data/train/0/20151119001/20151119100145.jpg
inflating: dataset/data/train/0/20151119001/20151119100343.jpg
creating: dataset/data/train/0/20151120006/
inflating: dataset/data/train/0/20151120006/20151120152407.jpg
inflating: dataset/data/train/0/20151120006/20151120152529.jpg
inflating: dataset/data/train/0/20151120006/20151120152604.jpg
inflating: dataset/data/train/0/20151120006/20151120152639.jpg
inflating: dataset/data/train/0/20151120006/20151120152659.jpg
inflating: dataset/data/train/0/20151120006/20151120152700.jpg
inflating: dataset/data/train/0/20151120006/20151120152814.jpg
creating: dataset/data/train/0/20151123006/
inflating: dataset/data/train/0/20151123006/20151123153609.jpg
inflating: dataset/data/train/0/20151123006/20151123153733.jpg
inflating: dataset/data/train/0/20151123006/20151123153815.jpg
inflating: dataset/data/train/0/20151123006/20151123153854.jpg
inflating: dataset/data/train/0/20151123006/20151123153912.jpg
inflating: dataset/data/train/0/20151123006/20151123153916.jpg
inflating: dataset/data/train/0/20151123006/20151123154018.jpg
creating: dataset/data/train/0/20151123008/
inflating: dataset/data/train/0/20151123008/20151123155407.jpg
inflating: dataset/data/train/0/20151123008/20151123155614.jpg
inflating: dataset/data/train/0/20151123008/20151123155630.jpg
inflating: dataset/data/train/0/20151123008/20151123155657.jpg
inflating: dataset/data/train/0/20151123008/20151123155738.jpg
inflating: dataset/data/train/0/20151123008/20151123155750.jpg
inflating: dataset/data/train/0/20151123008/20151123155827.jpg
creating: dataset/data/train/0/20151127011/
inflating: dataset/data/train/0/20151127011/20151127160124.jpg
inflating: dataset/data/train/0/20151127011/20151127160301.jpg
inflating: dataset/data/train/0/20151127011/20151127160317.jpg
inflating: dataset/data/train/0/20151127011/20151127160340.jpg
inflating: dataset/data/train/0/20151127011/20151127160426.jpg
inflating: dataset/data/train/0/20151127011/20151127160500.jpg
inflating: dataset/data/train/0/20151127011/20151127160551.jpg
creating: dataset/data/train/0/20151202002/
inflating: dataset/data/train/0/20151202002/20151202144611.jpg
inflating: dataset/data/train/0/20151202002/20151202144737.jpg
inflating: dataset/data/train/0/20151202002/20151202144806.jpg
inflating: dataset/data/train/0/20151202002/20151202144836.jpg
inflating: dataset/data/train/0/20151202002/20151202144916.jpg
inflating: dataset/data/train/0/20151202002/20151202144984.jpg
inflating: dataset/data/train/0/20151202002/20151202145014.jpg
creating: dataset/data/train/0/20151202006/
inflating: dataset/data/train/0/20151202006/20151202153412.jpg
inflating: dataset/data/train/0/20151202006/20151202153553.jpg
inflating: dataset/data/train/0/20151202006/20151202153624.jpg
inflating: dataset/data/train/0/20151202006/20151202153645.jpg
inflating: dataset/data/train/0/20151202006/20151202153718.jpg
inflating: dataset/data/train/0/20151202006/20151202153811.jpg
inflating: dataset/data/train/0/20151202006/20151202153820.jpg
creating: dataset/data/train/0/20151202007/
inflating: dataset/data/train/0/20151202007/20151202154531.jpg
inflating: dataset/data/train/0/20151202007/20151202154706.jpg
inflating: dataset/data/train/0/20151202007/20151202154734.jpg
inflating: dataset/data/train/0/20151202007/20151202154804.jpg
inflating: dataset/data/train/0/20151202007/20151202154833.jpg
inflating: dataset/data/train/0/20151202007/20151202154838.jpg
inflating: dataset/data/train/0/20151202007/20151202155011.jpg
creating: dataset/data/train/0/20151202009/
inflating: dataset/data/train/0/20151202009/20151202161403.jpg
inflating: dataset/data/train/0/20151202009/20151202161635.jpg
inflating: dataset/data/train/0/20151202009/20151202161720.jpg
inflating: dataset/data/train/0/20151202009/20151202161742.jpg
inflating: dataset/data/train/0/20151202009/20151202161745.jpg
inflating: dataset/data/train/0/20151202009/20151202161819.jpg
inflating: dataset/data/train/0/20151202009/20151202161926.jpg
creating: dataset/data/train/0/20151202011/
inflating: dataset/data/train/0/20151202011/20151202164902.jpg
inflating: dataset/data/train/0/20151202011/20151202165034.jpg
inflating: dataset/data/train/0/20151202011/20151202165056.jpg
inflating: dataset/data/train/0/20151202011/20151202165130.jpg
inflating: dataset/data/train/0/20151202011/20151202165207.jpg
inflating: dataset/data/train/0/20151202011/20151202165220.jpg
inflating: dataset/data/train/0/20151202011/20151202165239.jpg
creating: dataset/data/train/0/20151203001/
inflating: dataset/data/train/0/20151203001/20151203094014.jpg
inflating: dataset/data/train/0/20151203001/20151203094155.jpg
inflating: dataset/data/train/0/20151203001/20151203094212.jpg
inflating: dataset/data/train/0/20151203001/20151203094242.jpg
inflating: dataset/data/train/0/20151203001/20151203094300.jpg
inflating: dataset/data/train/0/20151203001/20151203094350.jpg
inflating: dataset/data/train/0/20151203001/20151203094417.jpg
creating: dataset/data/train/0/20151204001/
inflating: dataset/data/train/0/20151204001/20151204143542.jpg
inflating: dataset/data/train/0/20151204001/20151204143727.jpg
inflating: dataset/data/train/0/20151204001/20151204143741.jpg
inflating: dataset/data/train/0/20151204001/20151204143809.jpg
inflating: dataset/data/train/0/20151204001/20151204143827.jpg
inflating: dataset/data/train/0/20151204001/20151204143837.jpg
inflating: dataset/data/train/0/20151204001/20151204144007.jpg
creating: dataset/data/train/0/20151204004/
inflating: dataset/data/train/0/20151204004/20151204151657.jpg
inflating: dataset/data/train/0/20151204004/20151204151823.jpg
inflating: dataset/data/train/0/20151204004/20151204151900.jpg
inflating: dataset/data/train/0/20151204004/20151204151923.jpg
inflating: dataset/data/train/0/20151204004/20151204151936.jpg
inflating: dataset/data/train/0/20151204004/20151204151940.jpg
inflating: dataset/data/train/0/20151204004/20151204152137.jpg
creating: dataset/data/train/0/20151204007/
inflating: dataset/data/train/0/20151204007/20151204162634.jpg
inflating: dataset/data/train/0/20151204007/20151204162829.jpg
inflating: dataset/data/train/0/20151204007/20151204162840.jpg
inflating: dataset/data/train/0/20151204007/20151204162848.jpg
inflating: dataset/data/train/0/20151204007/20151204162925.jpg
inflating: dataset/data/train/0/20151204007/20151204162933.jpg
inflating: dataset/data/train/0/20151204007/20151204163016.jpg
creating: dataset/data/train/0/20151207009/
inflating: dataset/data/train/0/20151207009/20151207154843.jpg
inflating: dataset/data/train/0/20151207009/20151207155133.jpg
inflating: dataset/data/train/0/20151207009/20151207155148.jpg
inflating: dataset/data/train/0/20151207009/20151207155202.jpg
inflating: dataset/data/train/0/20151207009/20151207155235.jpg
inflating: dataset/data/train/0/20151207009/20151207155280.jpg
inflating: dataset/data/train/0/20151207009/20151207155343.jpg
creating: dataset/data/train/0/20151209006/
inflating: dataset/data/train/0/20151209006/20151209152652.jpg
inflating: dataset/data/train/0/20151209006/20151209152858.jpg
inflating: dataset/data/train/0/20151209006/20151209152916.jpg
inflating: dataset/data/train/0/20151209006/20151209152955.jpg
inflating: dataset/data/train/0/20151209006/20151209153013.jpg
inflating: dataset/data/train/0/20151209006/20151209153022.jpg
inflating: dataset/data/train/0/20151209006/20151209153237.jpg
creating: dataset/data/train/0/20151210001/
inflating: dataset/data/train/0/20151210001/20151210094705.jpg
inflating: dataset/data/train/0/20151210001/20151210094852.jpg
inflating: dataset/data/train/0/20151210001/20151210094901.jpg
inflating: dataset/data/train/0/20151210001/20151210094938.jpg
inflating: dataset/data/train/0/20151210001/20151210094958.jpg
inflating: dataset/data/train/0/20151210001/20151210095000.jpg
inflating: dataset/data/train/0/20151210001/20151210095100.jpg
creating: dataset/data/train/0/20151214001/
inflating: dataset/data/train/0/20151214001/20151214143834.jpg
inflating: dataset/data/train/0/20151214001/20151214143955.jpg
inflating: dataset/data/train/0/20151214001/20151214144016.jpg
inflating: dataset/data/train/0/20151214001/20151214144050.jpg
inflating: dataset/data/train/0/20151214001/20151214144120.jpg
inflating: dataset/data/train/0/20151214001/20151214144122.jpg
inflating: dataset/data/train/0/20151214001/20151214144237.jpg
creating: dataset/data/train/0/20151214002/
inflating: dataset/data/train/0/20151214002/20151214144741.jpg
inflating: dataset/data/train/0/20151214002/20151214144859.jpg
inflating: dataset/data/train/0/20151214002/20151214144929.jpg
inflating: dataset/data/train/0/20151214002/20151214144958.jpg
inflating: dataset/data/train/0/20151214002/20151214145029.jpg
inflating: dataset/data/train/0/20151214002/20151214145100.jpg
inflating: dataset/data/train/0/20151214002/20151214145141.jpg
creating: dataset/data/train/0/20151214007/
inflating: dataset/data/train/0/20151214007/20151214151044.jpg
inflating: dataset/data/train/0/20151214007/20151214151236.jpg
inflating: dataset/data/train/0/20151214007/20151214151305.jpg
inflating: dataset/data/train/0/20151214007/20151214151335.jpg
inflating: dataset/data/train/0/20151214007/20151214151405.jpg
inflating: dataset/data/train/0/20151214007/20151214151450.jpg
inflating: dataset/data/train/0/20151214007/20151214151513.jpg
creating: dataset/data/train/0/20151216015/
inflating: dataset/data/train/0/20151216015/20151216170621.jpg
inflating: dataset/data/train/0/20151216015/20151216170739.jpg
inflating: dataset/data/train/0/20151216015/20151216170815.jpg
inflating: dataset/data/train/0/20151216015/20151216170839.jpg
inflating: dataset/data/train/0/20151216015/20151216170909.jpg
inflating: dataset/data/train/0/20151216015/20151216170910.jpg
inflating: dataset/data/train/0/20151216015/20151216171026.jpg
creating: dataset/data/train/0/20151222004/
inflating: dataset/data/train/0/20151222004/20151222165821.jpg
inflating: dataset/data/train/0/20151222004/20151222170001.jpg
inflating: dataset/data/train/0/20151222004/20151222170023.jpg
inflating: dataset/data/train/0/20151222004/20151222170057.jpg
inflating: dataset/data/train/0/20151222004/20151222170123.jpg
inflating: dataset/data/train/0/20151222004/20151222170150.jpg
inflating: dataset/data/train/0/20151222004/20151222170216.jpg
creating: dataset/data/train/0/20151223009/
inflating: dataset/data/train/0/20151223009/20151223160504.jpg
inflating: dataset/data/train/0/20151223009/20151223160635.jpg
inflating: dataset/data/train/0/20151223009/20151223160921.jpg
inflating: dataset/data/train/0/20151223009/20151223160941.jpg
inflating: dataset/data/train/0/20151223009/20151223160944.jpg
inflating: dataset/data/train/0/20151223009/20151223160949.jpg
inflating: dataset/data/train/0/20151223009/20151223161136.jpg
creating: dataset/data/train/0/20151223012/
inflating: dataset/data/train/0/20151223012/20151223171300.jpg
inflating: dataset/data/train/0/20151223012/20151223171428.jpg
inflating: dataset/data/train/0/20151223012/20151223171519.jpg
inflating: dataset/data/train/0/20151223012/20151223171540.jpg
inflating: dataset/data/train/0/20151223012/20151223171609.jpg
inflating: dataset/data/train/0/20151223012/20151223171618.jpg
inflating: dataset/data/train/0/20151223012/20151223171735.jpg
creating: dataset/data/train/0/20151223015/
inflating: dataset/data/train/0/20151223015/20151223174832.jpg
inflating: dataset/data/train/0/20151223015/20151223175013.jpg
inflating: dataset/data/train/0/20151223015/20151223175039.jpg
inflating: dataset/data/train/0/20151223015/20151223175114.jpg
inflating: dataset/data/train/0/20151223015/20151223175141.jpg
inflating: dataset/data/train/0/20151223015/20151223175150.jpg
inflating: dataset/data/train/0/20151223015/20151223175227.jpg
creating: dataset/data/train/0/20151223016/
inflating: dataset/data/train/0/20151223016/20151223175858.jpg
inflating: dataset/data/train/0/20151223016/20151223180018.jpg
inflating: dataset/data/train/0/20151223016/20151223180047.jpg
inflating: dataset/data/train/0/20151223016/20151223180117.jpg
inflating: dataset/data/train/0/20151223016/20151223180146.jpg
inflating: dataset/data/train/0/20151223016/20151223180150.jpg
inflating: dataset/data/train/0/20151223016/20151223180242.jpg
creating: dataset/data/train/0/20151225002/
inflating: dataset/data/train/0/20151225002/20151225144818.jpg
inflating: dataset/data/train/0/20151225002/20151225144938.jpg
inflating: dataset/data/train/0/20151225002/20151225145003.jpg
inflating: dataset/data/train/0/20151225002/20151225145033.jpg
inflating: dataset/data/train/0/20151225002/20151225145111.jpg
inflating: dataset/data/train/0/20151225002/20151225145152.jpg
inflating: dataset/data/train/0/20151225002/20151225145158.jpg
creating: dataset/data/train/0/20151225003/
inflating: dataset/data/train/0/20151225003/20151225150032.jpg
inflating: dataset/data/train/0/20151225003/20151225150154.jpg
inflating: dataset/data/train/0/20151225003/20151225150224.jpg
inflating: dataset/data/train/0/20151225003/20151225150306.jpg
inflating: dataset/data/train/0/20151225003/20151225150325.jpg
inflating: dataset/data/train/0/20151225003/20151225150350.jpg
inflating: dataset/data/train/0/20151225003/20151225150425.jpg
creating: dataset/data/train/0/20151228001/
inflating: dataset/data/train/0/20151228001/20151228142030.jpg
inflating: dataset/data/train/0/20151228001/20151228142156.jpg
inflating: dataset/data/train/0/20151228001/20151228142232.jpg
inflating: dataset/data/train/0/20151228001/20151228142256.jpg
inflating: dataset/data/train/0/20151228001/20151228142326.jpg
inflating: dataset/data/train/0/20151228001/20151228142350.jpg
inflating: dataset/data/train/0/20151228001/20151228142407.jpg
creating: dataset/data/train/0/20151228003/
inflating: dataset/data/train/0/20151228003/20151228144023.jpg
inflating: dataset/data/train/0/20151228003/20151228144158.jpg
inflating: dataset/data/train/0/20151228003/20151228144230.jpg
inflating: dataset/data/train/0/20151228003/20151228144258.jpg
inflating: dataset/data/train/0/20151228003/20151228144328.jpg
inflating: dataset/data/train/0/20151228003/20151228144330.jpg
inflating: dataset/data/train/0/20151228003/20151228144430.jpg
creating: dataset/data/train/0/20151230010/
inflating: dataset/data/train/0/20151230010/20151230160427.jpg
inflating: dataset/data/train/0/20151230010/20151230160616.jpg
inflating: dataset/data/train/0/20151230010/20151230160634.jpg
inflating: dataset/data/train/0/20151230010/20151230160702.jpg
inflating: dataset/data/train/0/20151230010/20151230160733.jpg
inflating: dataset/data/train/0/20151230010/20151230160735.jpg
inflating: dataset/data/train/0/20151230010/20151230160820.jpg
creating: dataset/data/train/0/20160104002/
inflating: dataset/data/train/0/20160104002/20160104143821.jpg
inflating: dataset/data/train/0/20160104002/20160104143951.jpg
inflating: dataset/data/train/0/20160104002/20160104144020.jpg
inflating: dataset/data/train/0/20160104002/20160104144052.jpg
inflating: dataset/data/train/0/20160104002/20160104144114.jpg
inflating: dataset/data/train/0/20160104002/20160104144121.jpg
inflating: dataset/data/train/0/20160104002/20160104144256.jpg
creating: dataset/data/train/0/20160104005/
inflating: dataset/data/train/0/20160104005/20160104150228.jpg
inflating: dataset/data/train/0/20160104005/20160104150352.jpg
inflating: dataset/data/train/0/20160104005/20160104150420.jpg
inflating: dataset/data/train/0/20160104005/20160104150500.jpg
inflating: dataset/data/train/0/20160104005/20160104150524.jpg
inflating: dataset/data/train/0/20160104005/20160104150530.jpg
inflating: dataset/data/train/0/20160104005/20160104150618.jpg
creating: dataset/data/train/0/20160104006/
inflating: dataset/data/train/0/20160104006/20160104150923.jpg
inflating: dataset/data/train/0/20160104006/20160104151140.jpg
inflating: dataset/data/train/0/20160104006/20160104151217.jpg
inflating: dataset/data/train/0/20160104006/20160104151240.jpg
inflating: dataset/data/train/0/20160104006/20160104151310.jpg
inflating: dataset/data/train/0/20160104006/20160104151333.jpg
inflating: dataset/data/train/0/20160104006/20160104151416.jpg
creating: dataset/data/train/0/20160104010/
inflating: dataset/data/train/0/20160104010/20160104155608.jpg
inflating: dataset/data/train/0/20160104010/20160104155759.jpg
inflating: dataset/data/train/0/20160104010/20160104155837.jpg
inflating: dataset/data/train/0/20160104010/20160104155907.jpg
inflating: dataset/data/train/0/20160104010/20160104155931.jpg
inflating: dataset/data/train/0/20160104010/20160104155960.jpg
inflating: dataset/data/train/0/20160104010/20160104160017.jpg
creating: dataset/data/train/0/20160105005/
inflating: dataset/data/train/0/20160105005/20160105160342.jpg
inflating: dataset/data/train/0/20160105005/20160105160510.jpg
inflating: dataset/data/train/0/20160105005/20160105160544.jpg
inflating: dataset/data/train/0/20160105005/20160105160607.jpg
inflating: dataset/data/train/0/20160105005/20160105160639.jpg
inflating: dataset/data/train/0/20160105005/20160105160643.jpg
inflating: dataset/data/train/0/20160105005/20160105160820.jpg
creating: dataset/data/train/0/20160106016/
inflating: dataset/data/train/0/20160106016/20160106164113.jpg
inflating: dataset/data/train/0/20160106016/20160106164237.jpg
inflating: dataset/data/train/0/20160106016/20160106164308.jpg
inflating: dataset/data/train/0/20160106016/20160106164333.jpg
inflating: dataset/data/train/0/20160106016/20160106164407.jpg
inflating: dataset/data/train/0/20160106016/20160106164415.jpg
inflating: dataset/data/train/0/20160106016/20160106164510.jpg
creating: dataset/data/train/0/20160108002/
inflating: dataset/data/train/0/20160108002/20160108104641.jpg
inflating: dataset/data/train/0/20160108002/20160108104804.jpg
inflating: dataset/data/train/0/20160108002/20160108104814.jpg
inflating: dataset/data/train/0/20160108002/20160108104834.jpg
inflating: dataset/data/train/0/20160108002/20160108104912.jpg
inflating: dataset/data/train/0/20160108002/20160108104922.jpg
inflating: dataset/data/train/0/20160108002/20160108105045.jpg
creating: dataset/data/train/0/20160108005/
inflating: dataset/data/train/0/20160108005/20160108160221.jpg
inflating: dataset/data/train/0/20160108005/20160108160354.jpg
inflating: dataset/data/train/0/20160108005/20160108160409.jpg
inflating: dataset/data/train/0/20160108005/20160108160435.jpg
inflating: dataset/data/train/0/20160108005/20160108160521.jpg
inflating: dataset/data/train/0/20160108005/20160108160530.jpg
inflating: dataset/data/train/0/20160108005/20160108160610.jpg
creating: dataset/data/train/0/20160108006/
inflating: dataset/data/train/0/20160108006/20160108153900.jpg
inflating: dataset/data/train/0/20160108006/20160108154057.jpg
inflating: dataset/data/train/0/20160108006/20160108154132.jpg
inflating: dataset/data/train/0/20160108006/20160108154148.jpg
inflating: dataset/data/train/0/20160108006/20160108154215.jpg
inflating: dataset/data/train/0/20160108006/20160108154318.jpg
inflating: dataset/data/train/0/20160108006/20160108154333.jpg
creating: dataset/data/train/0/20160111007/
inflating: dataset/data/train/0/20160111007/20160111151648.jpg
inflating: dataset/data/train/0/20160111007/20160111151806.jpg
inflating: dataset/data/train/0/20160111007/20160111151838.jpg
inflating: dataset/data/train/0/20160111007/20160111151908.jpg
inflating: dataset/data/train/0/20160111007/20160111151937.jpg
inflating: dataset/data/train/0/20160111007/20160111151940.jpg
inflating: dataset/data/train/0/20160111007/20160111152025.jpg
creating: dataset/data/train/0/20160111011/
inflating: dataset/data/train/0/20160111011/20160111161955.jpg
inflating: dataset/data/train/0/20160111011/20160111162139.jpg
inflating: dataset/data/train/0/20160111011/20160111162209.jpg
inflating: dataset/data/train/0/20160111011/20160111162249.jpg
inflating: dataset/data/train/0/20160111011/20160111162309.jpg
inflating: dataset/data/train/0/20160111011/20160111162316.jpg
inflating: dataset/data/train/0/20160111011/20160111162430.jpg
creating: dataset/data/train/0/20160113006/
inflating: dataset/data/train/0/20160113006/20160113105442.jpg
inflating: dataset/data/train/0/20160113006/20160113105627.jpg
inflating: dataset/data/train/0/20160113006/20160113105703.jpg
inflating: dataset/data/train/0/20160113006/20160113105721.jpg
inflating: dataset/data/train/0/20160113006/20160113105743.jpg
inflating: dataset/data/train/0/20160113006/20160113105910.jpg
inflating: dataset/data/train/0/20160113006/20160113110109.jpg
creating: dataset/data/train/0/20160113007/
inflating: dataset/data/train/0/20160113007/20160113110507.jpg
inflating: dataset/data/train/0/20160113007/20160113110644.jpg
inflating: dataset/data/train/0/20160113007/20160113110653.jpg
inflating: dataset/data/train/0/20160113007/20160113110744.jpg
inflating: dataset/data/train/0/20160113007/20160113110754.jpg
inflating: dataset/data/train/0/20160113007/20160113110814.jpg
inflating: dataset/data/train/0/20160113007/20160113110853.jpg
creating: dataset/data/train/0/20160113026/
inflating: dataset/data/train/0/20160113026/20160113163003.jpg
inflating: dataset/data/train/0/20160113026/20160113163159.jpg
inflating: dataset/data/train/0/20160113026/20160113163218.jpg
inflating: dataset/data/train/0/20160113026/20160113163251.jpg
inflating: dataset/data/train/0/20160113026/20160113163319.jpg
inflating: dataset/data/train/0/20160113026/20160113163320.jpg
inflating: dataset/data/train/0/20160113026/20160113163400.jpg
creating: dataset/data/train/0/20160118001/
inflating: dataset/data/train/0/20160118001/20160118142910.jpg
inflating: dataset/data/train/0/20160118001/20160118143024.jpg
inflating: dataset/data/train/0/20160118001/20160118143055.jpg
inflating: dataset/data/train/0/20160118001/20160118143123.jpg
inflating: dataset/data/train/0/20160118001/20160118143153.jpg
inflating: dataset/data/train/0/20160118001/20160118143160.jpg
inflating: dataset/data/train/0/20160118001/20160118143242.jpg
creating: dataset/data/train/0/20160118004/
inflating: dataset/data/train/0/20160118004/20160118151400.jpg
inflating: dataset/data/train/0/20160118004/20160118151519.jpg
inflating: dataset/data/train/0/20160118004/20160118151548.jpg
inflating: dataset/data/train/0/20160118004/20160118151621.jpg
inflating: dataset/data/train/0/20160118004/20160118151647.jpg
inflating: dataset/data/train/0/20160118004/20160118151680.jpg
inflating: dataset/data/train/0/20160118004/20160118151739.jpg
creating: dataset/data/train/0/20160119003/
inflating: dataset/data/train/0/20160119003/20160119144827.jpg
inflating: dataset/data/train/0/20160119003/20160119144957.jpg
inflating: dataset/data/train/0/20160119003/20160119145027.jpg
inflating: dataset/data/train/0/20160119003/20160119145057.jpg
inflating: dataset/data/train/0/20160119003/20160119145127.jpg
inflating: dataset/data/train/0/20160119003/20160119145137.jpg
inflating: dataset/data/train/0/20160119003/20160119145227.jpg
creating: dataset/data/train/0/20160120002/
inflating: dataset/data/train/0/20160120002/20160120104001.jpg
inflating: dataset/data/train/0/20160120002/20160120104117.jpg
inflating: dataset/data/train/0/20160120002/20160120104147.jpg
inflating: dataset/data/train/0/20160120002/20160120104218.jpg
inflating: dataset/data/train/0/20160120002/20160120104241.jpg
inflating: dataset/data/train/0/20160120002/20160120104251.jpg
inflating: dataset/data/train/0/20160120002/20160120104328.jpg
creating: dataset/data/train/0/20160120004/
inflating: dataset/data/train/0/20160120004/20160120101023.jpg
inflating: dataset/data/train/0/20160120004/20160120101137.jpg
inflating: dataset/data/train/0/20160120004/20160120101207.jpg
inflating: dataset/data/train/0/20160120004/20160120101238.jpg
inflating: dataset/data/train/0/20160120004/20160120101307.jpg
inflating: dataset/data/train/0/20160120004/20160120101310.jpg
inflating: dataset/data/train/0/20160120004/20160120101401.jpg
creating: dataset/data/train/0/20160120007/
inflating: dataset/data/train/0/20160120007/20160120114630.jpg
inflating: dataset/data/train/0/20160120007/20160120114745.jpg
inflating: dataset/data/train/0/20160120007/20160120114831.jpg
inflating: dataset/data/train/0/20160120007/20160120114857.jpg
inflating: dataset/data/train/0/20160120007/20160120114915.jpg
inflating: dataset/data/train/0/20160120007/20160120114920.jpg
inflating: dataset/data/train/0/20160120007/20160120115010.jpg
creating: dataset/data/train/0/20160120008/
inflating: dataset/data/train/0/20160120008/20160120142434.jpg
inflating: dataset/data/train/0/20160120008/20160120142601.jpg
inflating: dataset/data/train/0/20160120008/20160120142627.jpg
inflating: dataset/data/train/0/20160120008/20160120142704.jpg
inflating: dataset/data/train/0/20160120008/20160120142721.jpg
inflating: dataset/data/train/0/20160120008/20160120142735.jpg
inflating: dataset/data/train/0/20160120008/20160120142839.jpg
creating: dataset/data/train/0/20160120010/
inflating: dataset/data/train/0/20160120010/20160120145425.jpg
inflating: dataset/data/train/0/20160120010/20160120145614.jpg
inflating: dataset/data/train/0/20160120010/20160120145644.jpg
inflating: dataset/data/train/0/20160120010/20160120145713.jpg
inflating: dataset/data/train/0/20160120010/20160120145744.jpg
inflating: dataset/data/train/0/20160120010/20160120145754.jpg
inflating: dataset/data/train/0/20160120010/20160120145859.jpg
creating: dataset/data/train/0/20160125001/
inflating: dataset/data/train/0/20160125001/20160125144958.jpg
inflating: dataset/data/train/0/20160125001/20160125145136.jpg
inflating: dataset/data/train/0/20160125001/20160125145206.jpg
inflating: dataset/data/train/0/20160125001/20160125145245.jpg
inflating: dataset/data/train/0/20160125001/20160125145306.jpg
inflating: dataset/data/train/0/20160125001/20160125145314.jpg
inflating: dataset/data/train/0/20160125001/20160125145423.jpg
creating: dataset/data/train/0/20160128005/
inflating: dataset/data/train/0/20160128005/20160128101558.jpg
inflating: dataset/data/train/0/20160128005/20160128101718.jpg
inflating: dataset/data/train/0/20160128005/20160128101749.jpg
inflating: dataset/data/train/0/20160128005/20160128101819.jpg
inflating: dataset/data/train/0/20160128005/20160128101848.jpg
inflating: dataset/data/train/0/20160128005/20160128101854.jpg
inflating: dataset/data/train/0/20160128005/20160128101922.jpg
creating: dataset/data/train/0/20160128006/
inflating: dataset/data/train/0/20160128006/20160128102323.jpg
inflating: dataset/data/train/0/20160128006/20160128102452.jpg
inflating: dataset/data/train/0/20160128006/20160128102522.jpg
inflating: dataset/data/train/0/20160128006/20160128102552.jpg
inflating: dataset/data/train/0/20160128006/20160128102617.jpg
inflating: dataset/data/train/0/20160128006/20160128102627.jpg
inflating: dataset/data/train/0/20160128006/20160128102649.jpg
creating: dataset/data/train/0/20160201007/
inflating: dataset/data/train/0/20160201007/20160201161816.jpg
inflating: dataset/data/train/0/20160201007/20160201161933.jpg
inflating: dataset/data/train/0/20160201007/20160201162002.jpg
inflating: dataset/data/train/0/20160201007/20160201162029.jpg
inflating: dataset/data/train/0/20160201007/20160201162100.jpg
inflating: dataset/data/train/0/20160201007/20160201162105.jpg
inflating: dataset/data/train/0/20160201007/20160201162138.jpg
creating: dataset/data/train/0/20160203004/
inflating: dataset/data/train/0/20160203004/20160203154711.jpg
inflating: dataset/data/train/0/20160203004/20160203154834.jpg
inflating: dataset/data/train/0/20160203004/20160203154902.jpg
inflating: dataset/data/train/0/20160203004/20160203154931.jpg
inflating: dataset/data/train/0/20160203004/20160203155004.jpg
inflating: dataset/data/train/0/20160203004/20160203155008.jpg
inflating: dataset/data/train/0/20160203004/20160203155111.jpg
creating: dataset/data/train/0/20160203010/
inflating: dataset/data/train/0/20160203010/20160203171644.jpg
inflating: dataset/data/train/0/20160203010/20160203171817.jpg
inflating: dataset/data/train/0/20160203010/20160203171844.jpg
inflating: dataset/data/train/0/20160203010/20160203171915.jpg
inflating: dataset/data/train/0/20160203010/20160203171949.jpg
inflating: dataset/data/train/0/20160203010/20160203171959.jpg
inflating: dataset/data/train/0/20160203010/20160203172027.jpg
creating: dataset/data/train/0/20160218002/
inflating: dataset/data/train/0/20160218002/20160218154001.jpg
inflating: dataset/data/train/0/20160218002/20160218154153.jpg
inflating: dataset/data/train/0/20160218002/20160218154221.jpg
inflating: dataset/data/train/0/20160218002/20160218154252.jpg
inflating: dataset/data/train/0/20160218002/20160218154320.jpg
inflating: dataset/data/train/0/20160218002/20160218154324.jpg
inflating: dataset/data/train/0/20160218002/20160218154349.jpg
creating: dataset/data/train/0/20160222002/
inflating: dataset/data/train/0/20160222002/20160222152903.jpg
inflating: dataset/data/train/0/20160222002/20160222153020.jpg
inflating: dataset/data/train/0/20160222002/20160222153048.jpg
inflating: dataset/data/train/0/20160222002/20160222153123.jpg
inflating: dataset/data/train/0/20160222002/20160222153150.jpg
inflating: dataset/data/train/0/20160222002/20160222153169.jpg
inflating: dataset/data/train/0/20160222002/20160222153234.jpg
creating: dataset/data/train/0/20160224008/
inflating: dataset/data/train/0/20160224008/20160224150349.jpg
inflating: dataset/data/train/0/20160224008/20160224150508.jpg
inflating: dataset/data/train/0/20160224008/20160224150536.jpg
inflating: dataset/data/train/0/20160224008/20160224150621.jpg
inflating: dataset/data/train/0/20160224008/20160224150636.jpg
inflating: dataset/data/train/0/20160224008/20160224150651.jpg
inflating: dataset/data/train/0/20160224008/20160224150826.jpg
creating: dataset/data/train/0/20160225007/
inflating: dataset/data/train/0/20160225007/20160225152934.jpg
inflating: dataset/data/train/0/20160225007/20160225153055.jpg
inflating: dataset/data/train/0/20160225007/20160225153127.jpg
inflating: dataset/data/train/0/20160225007/20160225153159.jpg
inflating: dataset/data/train/0/20160225007/20160225153225.jpg
inflating: dataset/data/train/0/20160225007/20160225153233.jpg
inflating: dataset/data/train/0/20160225007/20160225153348.jpg
creating: dataset/data/train/0/20160229001/
inflating: dataset/data/train/0/20160229001/20160229143640.jpg
inflating: dataset/data/train/0/20160229001/20160229143759.jpg
inflating: dataset/data/train/0/20160229001/20160229143831.jpg
inflating: dataset/data/train/0/20160229001/20160229143901.jpg
inflating: dataset/data/train/0/20160229001/20160229143932.jpg
inflating: dataset/data/train/0/20160229001/20160229143947.jpg
inflating: dataset/data/train/0/20160229001/20160229144038.jpg
creating: dataset/data/train/0/20160229002/
inflating: dataset/data/train/0/20160229002/20160229144507.jpg
inflating: dataset/data/train/0/20160229002/20160229144626.jpg
inflating: dataset/data/train/0/20160229002/20160229144645.jpg
inflating: dataset/data/train/0/20160229002/20160229144701.jpg
inflating: dataset/data/train/0/20160229002/20160229144800.jpg
inflating: dataset/data/train/0/20160229002/20160229144806.jpg
inflating: dataset/data/train/0/20160229002/20160229144902.jpg
creating: dataset/data/train/0/20160302011/
inflating: dataset/data/train/0/20160302011/20160302151956.jpg
inflating: dataset/data/train/0/20160302011/20160302152123.jpg
inflating: dataset/data/train/0/20160302011/20160302152127.jpg
inflating: dataset/data/train/0/20160302011/20160302152218.jpg
inflating: dataset/data/train/0/20160302011/20160302152240.jpg
inflating: dataset/data/train/0/20160302011/20160302152259.jpg
inflating: dataset/data/train/0/20160302011/20160302152310.jpg
creating: dataset/data/train/0/20160303001/
inflating: dataset/data/train/0/20160303001/20160303094958.jpg
inflating: dataset/data/train/0/20160303001/20160303095213.jpg
inflating: dataset/data/train/0/20160303001/20160303095223.jpg
inflating: dataset/data/train/0/20160303001/20160303095251.jpg
inflating: dataset/data/train/0/20160303001/20160303095319.jpg
inflating: dataset/data/train/0/20160303001/20160303095324.jpg
inflating: dataset/data/train/0/20160303001/20160303095433.jpg
creating: dataset/data/train/0/20160303006/
inflating: dataset/data/train/0/20160303006/20160303172348.jpg
inflating: dataset/data/train/0/20160303006/20160303172505.jpg
inflating: dataset/data/train/0/20160303006/20160303172533.jpg
inflating: dataset/data/train/0/20160303006/20160303172604.jpg
inflating: dataset/data/train/0/20160303006/20160303172648.jpg
inflating: dataset/data/train/0/20160303006/20160303172659.jpg
inflating: dataset/data/train/0/20160303006/20160303172729.jpg
creating: dataset/data/train/0/20160309009/
inflating: dataset/data/train/0/20160309009/20160309155453.jpg
inflating: dataset/data/train/0/20160309009/20160309155620.jpg
inflating: dataset/data/train/0/20160309009/20160309155642.jpg
inflating: dataset/data/train/0/20160309009/20160309155711.jpg
inflating: dataset/data/train/0/20160309009/20160309155741.jpg
inflating: dataset/data/train/0/20160309009/20160309155754.jpg
inflating: dataset/data/train/0/20160309009/20160309155831.jpg
creating: dataset/data/train/0/20160314006/
inflating: dataset/data/train/0/20160314006/20160314155248.jpg
inflating: dataset/data/train/0/20160314006/20160314155426.jpg
inflating: dataset/data/train/0/20160314006/20160314155430.jpg
inflating: dataset/data/train/0/20160314006/20160314155459.jpg
inflating: dataset/data/train/0/20160314006/20160314155559.jpg
inflating: dataset/data/train/0/20160314006/20160314155566.jpg
inflating: dataset/data/train/0/20160314006/20160314155641.jpg
creating: dataset/data/train/0/20160315003/
inflating: dataset/data/train/0/20160315003/20160315163257.jpg
inflating: dataset/data/train/0/20160315003/20160315163424.jpg
inflating: dataset/data/train/0/20160315003/20160315163454.jpg
inflating: dataset/data/train/0/20160315003/20160315163522.jpg
inflating: dataset/data/train/0/20160315003/20160315163550.jpg
inflating: dataset/data/train/0/20160315003/20160315163560.jpg
inflating: dataset/data/train/0/20160315003/20160315163710.jpg
creating: dataset/data/train/0/20160316010/
inflating: dataset/data/train/0/20160316010/20160316155436.jpg
inflating: dataset/data/train/0/20160316010/20160316155634.jpg
inflating: dataset/data/train/0/20160316010/20160316155648.jpg
inflating: dataset/data/train/0/20160316010/20160316155717.jpg
inflating: dataset/data/train/0/20160316010/20160316155743.jpg
inflating: dataset/data/train/0/20160316010/20160316155750.jpg
inflating: dataset/data/train/0/20160316010/20160316155837.jpg
creating: dataset/data/train/0/20160321008/
inflating: dataset/data/train/0/20160321008/20160321160454.jpg
inflating: dataset/data/train/0/20160321008/20160321160622.jpg
inflating: dataset/data/train/0/20160321008/20160321160647.jpg
inflating: dataset/data/train/0/20160321008/20160321160723.jpg
inflating: dataset/data/train/0/20160321008/20160321160750.jpg
inflating: dataset/data/train/0/20160321008/20160321160768.jpg
inflating: dataset/data/train/0/20160321008/20160321160827.jpg
creating: dataset/data/train/0/20160323001/
inflating: dataset/data/train/0/20160323001/20160323094625.jpg
inflating: dataset/data/train/0/20160323001/20160323094804.jpg
inflating: dataset/data/train/0/20160323001/20160323094825.jpg
inflating: dataset/data/train/0/20160323001/20160323094859.jpg
inflating: dataset/data/train/0/20160323001/20160323094931.jpg
inflating: dataset/data/train/0/20160323001/20160323094943.jpg
inflating: dataset/data/train/0/20160323001/20160323095021.jpg
creating: dataset/data/train/0/20160323010/
inflating: dataset/data/train/0/20160323010/20160323112054.jpg
inflating: dataset/data/train/0/20160323010/20160323112223.jpg
inflating: dataset/data/train/0/20160323010/20160323112242.jpg
inflating: dataset/data/train/0/20160323010/20160323112321.jpg
inflating: dataset/data/train/0/20160323010/20160323112339.jpg
inflating: dataset/data/train/0/20160323010/20160323112346.jpg
inflating: dataset/data/train/0/20160323010/20160323112434.jpg
creating: dataset/data/train/0/20160323019/
inflating: dataset/data/train/0/20160323019/20160323154925.jpg
inflating: dataset/data/train/0/20160323019/20160323155108.jpg
inflating: dataset/data/train/0/20160323019/20160323155127.jpg
inflating: dataset/data/train/0/20160323019/20160323155157.jpg
inflating: dataset/data/train/0/20160323019/20160323155214.jpg
inflating: dataset/data/train/0/20160323019/20160323155229.jpg
inflating: dataset/data/train/0/20160323019/20160323155336.jpg
creating: dataset/data/train/0/20160323020/
inflating: dataset/data/train/0/20160323020/20160323160024.jpg
inflating: dataset/data/train/0/20160323020/20160323160147.jpg
inflating: dataset/data/train/0/20160323020/20160323160212.jpg
inflating: dataset/data/train/0/20160323020/20160323160244.jpg
inflating: dataset/data/train/0/20160323020/20160323160314.jpg
inflating: dataset/data/train/0/20160323020/20160323160327.jpg
inflating: dataset/data/train/0/20160323020/20160323160406.jpg
creating: dataset/data/train/0/20160323026/
inflating: dataset/data/train/0/20160323026/20160323173931.jpg
inflating: dataset/data/train/0/20160323026/20160323174044.jpg
inflating: dataset/data/train/0/20160323026/20160323174116.jpg
inflating: dataset/data/train/0/20160323026/20160323174149.jpg
inflating: dataset/data/train/0/20160323026/20160323174216.jpg
inflating: dataset/data/train/0/20160323026/20160323174226.jpg
inflating: dataset/data/train/0/20160323026/20160323174250.jpg
creating: dataset/data/train/0/20160324001/
inflating: dataset/data/train/0/20160324001/20160324102727.jpg
inflating: dataset/data/train/0/20160324001/20160324102916.jpg
inflating: dataset/data/train/0/20160324001/20160324102945.jpg
inflating: dataset/data/train/0/20160324001/20160324103019.jpg
inflating: dataset/data/train/0/20160324001/20160324103032.jpg
inflating: dataset/data/train/0/20160324001/20160324103103.jpg
inflating: dataset/data/train/0/20160324001/20160324103104.jpg
creating: dataset/data/train/0/20160324007/
  ... [unzip log abridged: images are extracted into dataset/data/train/<label>/<case_id>/ (labels 0 and 1 shown here), one sub-directory per case holding roughly seven .jpg frames each]
inflating: dataset/data/train/1/161811413/161811413Image6.jpg
inflating: dataset/data/train/1/161811413/161811413Image7.jpg
creating: dataset/data/train/1/20150909004/
inflating: dataset/data/train/1/20150909004/20150909145523.jpg
inflating: dataset/data/train/1/20150909004/20150909145642.jpg
inflating: dataset/data/train/1/20150909004/20150909145712.jpg
inflating: dataset/data/train/1/20150909004/20150909145742.jpg
inflating: dataset/data/train/1/20150909004/20150909145813.jpg
inflating: dataset/data/train/1/20150909004/20150909145844.jpg
inflating: dataset/data/train/1/20150909004/20150909150017.jpg
creating: dataset/data/train/1/20150914002/
inflating: dataset/data/train/1/20150914002/20150914151336.jpg
inflating: dataset/data/train/1/20150914002/20150914151518.jpg
inflating: dataset/data/train/1/20150914002/20150914151540.jpg
inflating: dataset/data/train/1/20150914002/20150914151610.jpg
inflating: dataset/data/train/1/20150914002/20150914151643.jpg
inflating: dataset/data/train/1/20150914002/20150914151645.jpg
inflating: dataset/data/train/1/20150914002/20150914151807.jpg
creating: dataset/data/train/1/20150923006/
inflating: dataset/data/train/1/20150923006/20150923154621.jpg
inflating: dataset/data/train/1/20150923006/20150923154753.jpg
inflating: dataset/data/train/1/20150923006/20150923154828.jpg
inflating: dataset/data/train/1/20150923006/20150923154846.jpg
inflating: dataset/data/train/1/20150923006/20150923154936.jpg
inflating: dataset/data/train/1/20150923006/20150923154943.jpg
inflating: dataset/data/train/1/20150923006/20150923155146.jpg
creating: dataset/data/train/1/20150930005/
inflating: dataset/data/train/1/20150930005/20150930144528.jpg
inflating: dataset/data/train/1/20150930005/20150930144648.jpg
inflating: dataset/data/train/1/20150930005/20150930144716.jpg
inflating: dataset/data/train/1/20150930005/20150930144759.jpg
inflating: dataset/data/train/1/20150930005/20150930144825.jpg
inflating: dataset/data/train/1/20150930005/20150930144830.jpg
inflating: dataset/data/train/1/20150930005/20150930144952.jpg
creating: dataset/data/train/1/20151020001/
inflating: dataset/data/train/1/20151020001/20151020110941.jpg
inflating: dataset/data/train/1/20151020001/20151020111129.jpg
inflating: dataset/data/train/1/20151020001/20151020111156.jpg
inflating: dataset/data/train/1/20151020001/20151020111224.jpg
inflating: dataset/data/train/1/20151020001/20151020111245.jpg
inflating: dataset/data/train/1/20151020001/20151020111248.jpg
inflating: dataset/data/train/1/20151020001/20151020111438.jpg
creating: dataset/data/train/1/20151028006/
inflating: dataset/data/train/1/20151028006/20151028150313.jpg
inflating: dataset/data/train/1/20151028006/20151028150438.jpg
inflating: dataset/data/train/1/20151028006/20151028150503.jpg
inflating: dataset/data/train/1/20151028006/20151028150533.jpg
inflating: dataset/data/train/1/20151028006/20151028150603.jpg
inflating: dataset/data/train/1/20151028006/20151028150604.jpg
inflating: dataset/data/train/1/20151028006/20151028150705.jpg
creating: dataset/data/train/1/20151102003/
inflating: dataset/data/train/1/20151102003/20151102152252.jpg
inflating: dataset/data/train/1/20151102003/20151102152415.jpg
inflating: dataset/data/train/1/20151102003/20151102152442.jpg
inflating: dataset/data/train/1/20151102003/20151102152513.jpg
inflating: dataset/data/train/1/20151102003/20151102152543.jpg
inflating: dataset/data/train/1/20151102003/20151102152549.jpg
inflating: dataset/data/train/1/20151102003/20151102152644.jpg
creating: dataset/data/train/1/20151106008/
inflating: dataset/data/train/1/20151106008/20151106163323.jpg
inflating: dataset/data/train/1/20151106008/20151106163538.jpg
inflating: dataset/data/train/1/20151106008/20151106163602.jpg
inflating: dataset/data/train/1/20151106008/20151106163635.jpg
inflating: dataset/data/train/1/20151106008/20151106163704.jpg
inflating: dataset/data/train/1/20151106008/20151106163758.jpg
inflating: dataset/data/train/1/20151106008/20151106163801.jpg
creating: dataset/data/train/1/20151112001/
inflating: dataset/data/train/1/20151112001/20151112091716.jpg
inflating: dataset/data/train/1/20151112001/20151112091846.jpg
inflating: dataset/data/train/1/20151112001/20151112091907.jpg
inflating: dataset/data/train/1/20151112001/20151112091946.jpg
inflating: dataset/data/train/1/20151112001/20151112092016.jpg
inflating: dataset/data/train/1/20151112001/20151112092018.jpg
inflating: dataset/data/train/1/20151112001/20151112092139.jpg
creating: dataset/data/train/1/20151116002/
inflating: dataset/data/train/1/20151116002/20151116145120.jpg
inflating: dataset/data/train/1/20151116002/20151116145238.jpg
inflating: dataset/data/train/1/20151116002/20151116145307.jpg
inflating: dataset/data/train/1/20151116002/20151116145338.jpg
inflating: dataset/data/train/1/20151116002/20151116145407.jpg
inflating: dataset/data/train/1/20151116002/20151116145408.jpg
inflating: dataset/data/train/1/20151116002/20151116145508.jpg
creating: dataset/data/train/1/20151116003/
inflating: dataset/data/train/1/20151116003/20151116150341.jpg
inflating: dataset/data/train/1/20151116003/20151116150527.jpg
inflating: dataset/data/train/1/20151116003/20151116150600.jpg
inflating: dataset/data/train/1/20151116003/20151116150620.jpg
inflating: dataset/data/train/1/20151116003/20151116150655.jpg
inflating: dataset/data/train/1/20151116003/20151116150659.jpg
inflating: dataset/data/train/1/20151116003/20151116150800.jpg
creating: dataset/data/train/1/20151116004/
inflating: dataset/data/train/1/20151116004/20151116152130.jpg
inflating: dataset/data/train/1/20151116004/20151116152325.jpg
inflating: dataset/data/train/1/20151116004/20151116152401.jpg
inflating: dataset/data/train/1/20151116004/20151116152424.jpg
inflating: dataset/data/train/1/20151116004/20151116152500.jpg
inflating: dataset/data/train/1/20151116004/20151116152505.jpg
inflating: dataset/data/train/1/20151116004/20151116152601.jpg
creating: dataset/data/train/1/20151116006/
inflating: dataset/data/train/1/20151116006/20151116154505.jpg
inflating: dataset/data/train/1/20151116006/20151116154627.jpg
inflating: dataset/data/train/1/20151116006/20151116154659.jpg
inflating: dataset/data/train/1/20151116006/20151116154728.jpg
inflating: dataset/data/train/1/20151116006/20151116154757.jpg
inflating: dataset/data/train/1/20151116006/20151116154759.jpg
inflating: dataset/data/train/1/20151116006/20151116154837.jpg
creating: dataset/data/train/1/20151118004/
inflating: dataset/data/train/1/20151118004/20151118154201.jpg
inflating: dataset/data/train/1/20151118004/20151118154344.jpg
inflating: dataset/data/train/1/20151118004/20151118154416.jpg
inflating: dataset/data/train/1/20151118004/20151118154437.jpg
inflating: dataset/data/train/1/20151118004/20151118154458.jpg
inflating: dataset/data/train/1/20151118004/20151118154459.jpg
inflating: dataset/data/train/1/20151118004/20151118154557.jpg
creating: dataset/data/train/1/20151201001/
inflating: dataset/data/train/1/20151201001/20151201111528.jpg
inflating: dataset/data/train/1/20151201001/20151201111648.jpg
inflating: dataset/data/train/1/20151201001/20151201111717.jpg
inflating: dataset/data/train/1/20151201001/20151201111746.jpg
inflating: dataset/data/train/1/20151201001/20151201111817.jpg
inflating: dataset/data/train/1/20151201001/20151201111819.jpg
inflating: dataset/data/train/1/20151201001/20151201112027.jpg
creating: dataset/data/train/1/20151207010/
inflating: dataset/data/train/1/20151207010/20151207155920.jpg
inflating: dataset/data/train/1/20151207010/20151207160110.jpg
inflating: dataset/data/train/1/20151207010/20151207160135.jpg
inflating: dataset/data/train/1/20151207010/20151207160151.jpg
inflating: dataset/data/train/1/20151207010/20151207160238.jpg
inflating: dataset/data/train/1/20151207010/20151207160239.jpg
inflating: dataset/data/train/1/20151207010/20151207160402.jpg
creating: dataset/data/train/1/20151209008/
inflating: dataset/data/train/1/20151209008/20151209155256.jpg
inflating: dataset/data/train/1/20151209008/20151209155447.jpg
inflating: dataset/data/train/1/20151209008/20151209155516.jpg
inflating: dataset/data/train/1/20151209008/20151209155530.jpg
inflating: dataset/data/train/1/20151209008/20151209155546.jpg
inflating: dataset/data/train/1/20151209008/20151209155549.jpg
inflating: dataset/data/train/1/20151209008/20151209155644.jpg
creating: dataset/data/train/1/20151210003/
inflating: dataset/data/train/1/20151210003/20151210151935.jpg
inflating: dataset/data/train/1/20151210003/20151210152114.jpg
inflating: dataset/data/train/1/20151210003/20151210152139.jpg
inflating: dataset/data/train/1/20151210003/20151210152215.jpg
inflating: dataset/data/train/1/20151210003/20151210152250.jpg
inflating: dataset/data/train/1/20151210003/20151210152305.jpg
inflating: dataset/data/train/1/20151210003/20151210152629.jpg
creating: dataset/data/train/1/20151223005/
inflating: dataset/data/train/1/20151223005/20151223150535.jpg
inflating: dataset/data/train/1/20151223005/20151223150700.jpg
inflating: dataset/data/train/1/20151223005/20151223150731.jpg
inflating: dataset/data/train/1/20151223005/20151223150756.jpg
inflating: dataset/data/train/1/20151223005/20151223150826.jpg
inflating: dataset/data/train/1/20151223005/20151223150835.jpg
inflating: dataset/data/train/1/20151223005/20151223151014.jpg
creating: dataset/data/train/1/20151228010/
inflating: dataset/data/train/1/20151228010/20151228161115.jpg
inflating: dataset/data/train/1/20151228010/20151228161236.jpg
inflating: dataset/data/train/1/20151228010/20151228161310.jpg
inflating: dataset/data/train/1/20151228010/20151228161335.jpg
inflating: dataset/data/train/1/20151228010/20151228161403.jpg
inflating: dataset/data/train/1/20151228010/20151228161405.jpg
inflating: dataset/data/train/1/20151228010/20151228161445.jpg
creating: dataset/data/train/1/20151230004/
inflating: dataset/data/train/1/20151230004/20151230143256.jpg
inflating: dataset/data/train/1/20151230004/20151230143434.jpg
inflating: dataset/data/train/1/20151230004/20151230143452.jpg
inflating: dataset/data/train/1/20151230004/20151230143522.jpg
inflating: dataset/data/train/1/20151230004/20151230143552.jpg
inflating: dataset/data/train/1/20151230004/20151230143556.jpg
inflating: dataset/data/train/1/20151230004/20151230143658.jpg
creating: dataset/data/train/1/20151230016/
inflating: dataset/data/train/1/20151230016/20151230173012.jpg
inflating: dataset/data/train/1/20151230016/20151230173130.jpg
inflating: dataset/data/train/1/20151230016/20151230173208.jpg
inflating: dataset/data/train/1/20151230016/20151230173243.jpg
inflating: dataset/data/train/1/20151230016/20151230173313.jpg
inflating: dataset/data/train/1/20151230016/20151230173316.jpg
inflating: dataset/data/train/1/20151230016/20151230173430.jpg
creating: dataset/data/train/1/20160201002/
inflating: dataset/data/train/1/20160201002/20160201143900.jpg
inflating: dataset/data/train/1/20160201002/20160201144045.jpg
inflating: dataset/data/train/1/20160201002/20160201144110.jpg
inflating: dataset/data/train/1/20160201002/20160201144143.jpg
inflating: dataset/data/train/1/20160201002/20160201144221.jpg
inflating: dataset/data/train/1/20160201002/20160201144225.jpg
inflating: dataset/data/train/1/20160201002/20160201144316.jpg
creating: dataset/data/train/1/20160225003/
inflating: dataset/data/train/1/20160225003/20160225105320.jpg
inflating: dataset/data/train/1/20160225003/20160225105518.jpg
inflating: dataset/data/train/1/20160225003/20160225105538.jpg
inflating: dataset/data/train/1/20160225003/20160225105604.jpg
inflating: dataset/data/train/1/20160225003/20160225105624.jpg
inflating: dataset/data/train/1/20160225003/20160225105626.jpg
inflating: dataset/data/train/1/20160225003/20160225105706.jpg
creating: dataset/data/train/1/20160321005/
inflating: dataset/data/train/1/20160321005/20160321152549.jpg
inflating: dataset/data/train/1/20160321005/20160321152731.jpg
inflating: dataset/data/train/1/20160321005/20160321152803.jpg
inflating: dataset/data/train/1/20160321005/20160321152840.jpg
inflating: dataset/data/train/1/20160321005/20160321152911.jpg
inflating: dataset/data/train/1/20160321005/20160321152915.jpg
inflating: dataset/data/train/1/20160321005/20160321153208.jpg
creating: dataset/data/train/1/20160425007/
inflating: dataset/data/train/1/20160425007/20160425155644.jpg
inflating: dataset/data/train/1/20160425007/20160425155759.jpg
inflating: dataset/data/train/1/20160425007/20160425155828.jpg
inflating: dataset/data/train/1/20160425007/20160425155901.jpg
inflating: dataset/data/train/1/20160425007/20160425155926.jpg
inflating: dataset/data/train/1/20160425007/20160425155928.jpg
inflating: dataset/data/train/1/20160425007/20160425160013.jpg
creating: dataset/data/train/1/20160427005/
inflating: dataset/data/train/1/20160427005/20160427144639.jpg
inflating: dataset/data/train/1/20160427005/20160427144823.jpg
inflating: dataset/data/train/1/20160427005/20160427144839.jpg
inflating: dataset/data/train/1/20160427005/20160427144901.jpg
inflating: dataset/data/train/1/20160427005/20160427144931.jpg
inflating: dataset/data/train/1/20160427005/20160427144935.jpg
inflating: dataset/data/train/1/20160427005/20160427145110.jpg
creating: dataset/data/train/1/20160427006/
inflating: dataset/data/train/1/20160427006/20160427150204.jpg
inflating: dataset/data/train/1/20160427006/20160427150326.jpg
inflating: dataset/data/train/1/20160427006/20160427150414.jpg
inflating: dataset/data/train/1/20160427006/20160427150424.jpg
inflating: dataset/data/train/1/20160427006/20160427150458.jpg
inflating: dataset/data/train/1/20160427006/20160427150459.jpg
inflating: dataset/data/train/1/20160427006/20160427150550.jpg
creating: dataset/data/train/1/20160517002/
inflating: dataset/data/train/1/20160517002/20160517151843.jpg
inflating: dataset/data/train/1/20160517002/20160517152029.jpg
inflating: dataset/data/train/1/20160517002/20160517152102.jpg
inflating: dataset/data/train/1/20160517002/20160517152129.jpg
inflating: dataset/data/train/1/20160517002/20160517152141.jpg
inflating: dataset/data/train/1/20160517002/20160517152148.jpg
inflating: dataset/data/train/1/20160517002/20160517152250.jpg
creating: dataset/data/train/1/20160606002/
inflating: dataset/data/train/1/20160606002/20160606154125.jpg
inflating: dataset/data/train/1/20160606002/20160606154246.jpg
inflating: dataset/data/train/1/20160606002/20160606154314.jpg
inflating: dataset/data/train/1/20160606002/20160606154343.jpg
inflating: dataset/data/train/1/20160606002/20160606154417.jpg
inflating: dataset/data/train/1/20160606002/20160606154419.jpg
inflating: dataset/data/train/1/20160606002/20160606154517.jpg
creating: dataset/data/train/1/20160608002/
inflating: dataset/data/train/1/20160608002/20160608094812.jpg
inflating: dataset/data/train/1/20160608002/20160608094925.jpg
inflating: dataset/data/train/1/20160608002/20160608095003.jpg
inflating: dataset/data/train/1/20160608002/20160608095028.jpg
inflating: dataset/data/train/1/20160608002/20160608095057.jpg
inflating: dataset/data/train/1/20160608002/20160608095059.jpg
inflating: dataset/data/train/1/20160608002/20160608095148.jpg
creating: dataset/data/train/1/20160615007/
inflating: dataset/data/train/1/20160615007/20160615145306.jpg
inflating: dataset/data/train/1/20160615007/20160615145437.jpg
inflating: dataset/data/train/1/20160615007/20160615145454.jpg
inflating: dataset/data/train/1/20160615007/20160615145525.jpg
inflating: dataset/data/train/1/20160615007/20160615145554.jpg
inflating: dataset/data/train/1/20160615007/20160615145559.jpg
inflating: dataset/data/train/1/20160615007/20160615145658.jpg
creating: dataset/data/train/1/20160622011/
inflating: dataset/data/train/1/20160622011/20160622155936.jpg
inflating: dataset/data/train/1/20160622011/20160622160056.jpg
inflating: dataset/data/train/1/20160622011/20160622160126.jpg
inflating: dataset/data/train/1/20160622011/20160622160156.jpg
inflating: dataset/data/train/1/20160622011/20160622160227.jpg
inflating: dataset/data/train/1/20160622011/20160622160229.jpg
inflating: dataset/data/train/1/20160622011/20160622160308.jpg
creating: dataset/data/train/1/20160810006/
inflating: dataset/data/train/1/20160810006/20160810145547.jpg
inflating: dataset/data/train/1/20160810006/20160810145705.jpg
inflating: dataset/data/train/1/20160810006/20160810145736.jpg
inflating: dataset/data/train/1/20160810006/20160810145814.jpg
inflating: dataset/data/train/1/20160810006/20160810145840.jpg
inflating: dataset/data/train/1/20160810006/20160810145846.jpg
inflating: dataset/data/train/1/20160810006/20160810150000.jpg
creating: dataset/data/train/1/20160824004/
inflating: dataset/data/train/1/20160824004/20160824105919.jpg
inflating: dataset/data/train/1/20160824004/20160824110043.jpg
inflating: dataset/data/train/1/20160824004/20160824110114.jpg
inflating: dataset/data/train/1/20160824004/20160824110140.jpg
inflating: dataset/data/train/1/20160824004/20160824110211.jpg
inflating: dataset/data/train/1/20160824004/20160824110215.jpg
inflating: dataset/data/train/1/20160824004/20160824110232.jpg
creating: dataset/data/train/1/20160914006/
inflating: dataset/data/train/1/20160914006/20160914152351.jpg
inflating: dataset/data/train/1/20160914006/20160914152522.jpg
inflating: dataset/data/train/1/20160914006/20160914152540.jpg
inflating: dataset/data/train/1/20160914006/20160914152619.jpg
inflating: dataset/data/train/1/20160914006/20160914152654.jpg
inflating: dataset/data/train/1/20160914006/20160914152659.jpg
inflating: dataset/data/train/1/20160914006/20160914152815.jpg
creating: dataset/data/train/2/
creating: dataset/data/train/2/084219150/
inflating: dataset/data/train/2/084219150/084219150Image0.jpg
inflating: dataset/data/train/2/084219150/084219150Image1.jpg
inflating: dataset/data/train/2/084219150/084219150Image10.jpg
inflating: dataset/data/train/2/084219150/084219150Image2.jpg
inflating: dataset/data/train/2/084219150/084219150Image5.jpg
inflating: dataset/data/train/2/084219150/084219150Image8.jpg
inflating: dataset/data/train/2/084219150/084219150Image9.jpg
creating: dataset/data/train/2/091029280/
inflating: dataset/data/train/2/091029280/091029280Image0.jpg
inflating: dataset/data/train/2/091029280/091029280Image15.jpg
inflating: dataset/data/train/2/091029280/091029280Image2.jpg
inflating: dataset/data/train/2/091029280/091029280Image3.jpg
inflating: dataset/data/train/2/091029280/091029280Image4.jpg
inflating: dataset/data/train/2/091029280/091029280Image5.jpg
inflating: dataset/data/train/2/091029280/091029280Image64.jpg
creating: dataset/data/train/2/091316240/
inflating: dataset/data/train/2/091316240/091316240Image0.jpg
inflating: dataset/data/train/2/091316240/091316240Image11.jpg
inflating: dataset/data/train/2/091316240/091316240Image13.jpg
inflating: dataset/data/train/2/091316240/091316240Image15.jpg
inflating: dataset/data/train/2/091316240/091316240Image3.jpg
inflating: dataset/data/train/2/091316240/091316240Image4.jpg
inflating: dataset/data/train/2/091316240/091316240Image8.jpg
creating: dataset/data/train/2/092814720/
inflating: dataset/data/train/2/092814720/092814720Image0.jpg
inflating: dataset/data/train/2/092814720/092814720Image2.jpg
inflating: dataset/data/train/2/092814720/092814720Image3.jpg
inflating: dataset/data/train/2/092814720/092814720Image44.jpg
inflating: dataset/data/train/2/092814720/092814720Image5.jpg
inflating: dataset/data/train/2/092814720/092814720Image6.jpg
inflating: dataset/data/train/2/092814720/092814720Image74.jpg
creating: dataset/data/train/2/095446777/
inflating: dataset/data/train/2/095446777/095446777Image0.jpg
inflating: dataset/data/train/2/095446777/095446777Image104.jpg
inflating: dataset/data/train/2/095446777/095446777Image3.jpg
inflating: dataset/data/train/2/095446777/095446777Image4.jpg
inflating: dataset/data/train/2/095446777/095446777Image54.jpg
inflating: dataset/data/train/2/095446777/095446777Image8.jpg
inflating: dataset/data/train/2/095446777/095446777Image9.jpg
creating: dataset/data/train/2/100347213/
inflating: dataset/data/train/2/100347213/100347213Image0.jpg
inflating: dataset/data/train/2/100347213/100347213Image2.jpg
inflating: dataset/data/train/2/100347213/100347213Image3.jpg
inflating: dataset/data/train/2/100347213/100347213Image4.jpg
inflating: dataset/data/train/2/100347213/100347213Image55.jpg
inflating: dataset/data/train/2/100347213/100347213Image6.jpg
inflating: dataset/data/train/2/100347213/100347213Image74.jpg
creating: dataset/data/train/2/100847130/
inflating: dataset/data/train/2/100847130/100847130Image0.jpg
inflating: dataset/data/train/2/100847130/100847130Image10.jpg
inflating: dataset/data/train/2/100847130/100847130Image12.jpg
inflating: dataset/data/train/2/100847130/100847130Image2.jpg
inflating: dataset/data/train/2/100847130/100847130Image3.jpg
inflating: dataset/data/train/2/100847130/100847130Image4.jpg
inflating: dataset/data/train/2/100847130/100847130Image5.jpg
creating: dataset/data/train/2/100938850/
inflating: dataset/data/train/2/100938850/100938850Image0.jpg
inflating: dataset/data/train/2/100938850/100938850Image10.jpg
inflating: dataset/data/train/2/100938850/100938850Image14.jpg
inflating: dataset/data/train/2/100938850/100938850Image2.jpg
inflating: dataset/data/train/2/100938850/100938850Image5.jpg
inflating: dataset/data/train/2/100938850/100938850Image7.jpg
inflating: dataset/data/train/2/100938850/100938850Image8.jpg
creating: dataset/data/train/2/101155613/
inflating: dataset/data/train/2/101155613/101155613Image0.jpg
inflating: dataset/data/train/2/101155613/101155613Image10.jpg
inflating: dataset/data/train/2/101155613/101155613Image12.jpg
inflating: dataset/data/train/2/101155613/101155613Image13.jpg
inflating: dataset/data/train/2/101155613/101155613Image14.jpg
inflating: dataset/data/train/2/101155613/101155613Image8.jpg
inflating: dataset/data/train/2/101155613/101155613Image9.jpg
creating: dataset/data/train/2/101249150/
inflating: dataset/data/train/2/101249150/101249150Image0.jpg
inflating: dataset/data/train/2/101249150/101249150Image10.jpg
inflating: dataset/data/train/2/101249150/101249150Image114.jpg
inflating: dataset/data/train/2/101249150/101249150Image4.jpg
inflating: dataset/data/train/2/101249150/101249150Image5.jpg
inflating: dataset/data/train/2/101249150/101249150Image6.jpg
inflating: dataset/data/train/2/101249150/101249150Image94.jpg
creating: dataset/data/train/2/102415710/
inflating: dataset/data/train/2/102415710/102415710Image0.jpg
inflating: dataset/data/train/2/102415710/102415710Image2.jpg
inflating: dataset/data/train/2/102415710/102415710Image3.jpg
inflating: dataset/data/train/2/102415710/102415710Image4.jpg
inflating: dataset/data/train/2/102415710/102415710Image5.jpg
inflating: dataset/data/train/2/102415710/102415710Image6.jpg
inflating: dataset/data/train/2/102415710/102415710Image7.jpg
creating: dataset/data/train/2/103206470/
inflating: dataset/data/train/2/103206470/103206470Image0.jpg
inflating: dataset/data/train/2/103206470/103206470Image2.jpg
inflating: dataset/data/train/2/103206470/103206470Image3.jpg
inflating: dataset/data/train/2/103206470/103206470Image4.jpg
inflating: dataset/data/train/2/103206470/103206470Image5.jpg
inflating: dataset/data/train/2/103206470/103206470Image7.jpg
inflating: dataset/data/train/2/103206470/103206470Image8.jpg
creating: dataset/data/train/2/103428260/
inflating: dataset/data/train/2/103428260/103428260Image0.jpg
inflating: dataset/data/train/2/103428260/103428260Image12.jpg
inflating: dataset/data/train/2/103428260/103428260Image15.jpg
inflating: dataset/data/train/2/103428260/103428260Image17.jpg
inflating: dataset/data/train/2/103428260/103428260Image3.jpg
inflating: dataset/data/train/2/103428260/103428260Image4.jpg
inflating: dataset/data/train/2/103428260/103428260Image6.jpg
creating: dataset/data/train/2/105332243/
inflating: dataset/data/train/2/105332243/105332243Image0.jpg
inflating: dataset/data/train/2/105332243/105332243Image10.jpg
inflating: dataset/data/train/2/105332243/105332243Image12.jpg
inflating: dataset/data/train/2/105332243/105332243Image13.jpg
inflating: dataset/data/train/2/105332243/105332243Image2.jpg
inflating: dataset/data/train/2/105332243/105332243Image4.jpg
inflating: dataset/data/train/2/105332243/105332243Image5.jpg
creating: dataset/data/train/2/105918500/
inflating: dataset/data/train/2/105918500/105918500Image0.jpg
inflating: dataset/data/train/2/105918500/105918500Image10.jpg
inflating: dataset/data/train/2/105918500/105918500Image2.jpg
inflating: dataset/data/train/2/105918500/105918500Image4.jpg
inflating: dataset/data/train/2/105918500/105918500Image6.jpg
inflating: dataset/data/train/2/105918500/105918500Image7.jpg
inflating: dataset/data/train/2/105918500/105918500Image8.jpg
creating: dataset/data/train/2/110100163/
inflating: dataset/data/train/2/110100163/110100163Image0.jpg
inflating: dataset/data/train/2/110100163/110100163Image1.jpg
inflating: dataset/data/train/2/110100163/110100163Image12.jpg
inflating: dataset/data/train/2/110100163/110100163Image2.jpg
inflating: dataset/data/train/2/110100163/110100163Image4.jpg
inflating: dataset/data/train/2/110100163/110100163Image5.jpg
inflating: dataset/data/train/2/110100163/110100163Image6.jpg
creating: dataset/data/train/2/143028670/
inflating: dataset/data/train/2/143028670/143028670Image0.jpg
inflating: dataset/data/train/2/143028670/143028670Image100.jpg
inflating: dataset/data/train/2/143028670/143028670Image11.jpg
inflating: dataset/data/train/2/143028670/143028670Image120.jpg
inflating: dataset/data/train/2/143028670/143028670Image3.jpg
inflating: dataset/data/train/2/143028670/143028670Image4.jpg
inflating: dataset/data/train/2/143028670/143028670Image5.jpg
creating: dataset/data/train/2/144011743/
inflating: dataset/data/train/2/144011743/144011743Image0.jpg
inflating: dataset/data/train/2/144011743/144011743Image1.jpg
inflating: dataset/data/train/2/144011743/144011743Image3.jpg
inflating: dataset/data/train/2/144011743/144011743Image4.jpg
inflating: dataset/data/train/2/144011743/144011743Image6.jpg
inflating: dataset/data/train/2/144011743/144011743Image8.jpg
inflating: dataset/data/train/2/144011743/144011743Image9.jpg
creating: dataset/data/train/2/145132100/
inflating: dataset/data/train/2/145132100/145132100Image0.jpg
inflating: dataset/data/train/2/145132100/145132100Image1.jpg
inflating: dataset/data/train/2/145132100/145132100Image3.jpg
inflating: dataset/data/train/2/145132100/145132100Image4.jpg
inflating: dataset/data/train/2/145132100/145132100Image6.jpg
inflating: dataset/data/train/2/145132100/145132100Image7.jpg
inflating: dataset/data/train/2/145132100/145132100Image9.jpg
creating: dataset/data/train/2/145703093/
inflating: dataset/data/train/2/145703093/145703093Image0.jpg
inflating: dataset/data/train/2/145703093/145703093Image10.jpg
inflating: dataset/data/train/2/145703093/145703093Image15.jpg
inflating: dataset/data/train/2/145703093/145703093Image2.jpg
inflating: dataset/data/train/2/145703093/145703093Image3.jpg
inflating: dataset/data/train/2/145703093/145703093Image5.jpg
inflating: dataset/data/train/2/145703093/145703093Image6.jpg
creating: dataset/data/train/2/145747013/
inflating: dataset/data/train/2/145747013/145747013Image0.jpg
inflating: dataset/data/train/2/145747013/145747013Image2.jpg
inflating: dataset/data/train/2/145747013/145747013Image3.jpg
inflating: dataset/data/train/2/145747013/145747013Image5.jpg
inflating: dataset/data/train/2/145747013/145747013Image6.jpg
inflating: dataset/data/train/2/145747013/145747013Image7.jpg
inflating: dataset/data/train/2/145747013/145747013Image9.jpg
creating: dataset/data/train/2/145912857/
inflating: dataset/data/train/2/145912857/145912857Image0.jpg
inflating: dataset/data/train/2/145912857/145912857Image2.jpg
inflating: dataset/data/train/2/145912857/145912857Image3.jpg
inflating: dataset/data/train/2/145912857/145912857Image5.jpg
inflating: dataset/data/train/2/145912857/145912857Image7.jpg
inflating: dataset/data/train/2/145912857/145912857Image8.jpg
inflating: dataset/data/train/2/145912857/145912857Image9.jpg
creating: dataset/data/train/2/150649750/
inflating: dataset/data/train/2/150649750/150649750Image0.jpg
inflating: dataset/data/train/2/150649750/150649750Image12.jpg
inflating: dataset/data/train/2/150649750/150649750Image2.jpg
inflating: dataset/data/train/2/150649750/150649750Image3.jpg
inflating: dataset/data/train/2/150649750/150649750Image4.jpg
inflating: dataset/data/train/2/150649750/150649750Image44.jpg
inflating: dataset/data/train/2/150649750/150649750Image6.jpg
creating: dataset/data/train/2/151543077/
inflating: dataset/data/train/2/151543077/151543077Image0.jpg
inflating: dataset/data/train/2/151543077/151543077Image2.jpg
inflating: dataset/data/train/2/151543077/151543077Image3.jpg
inflating: dataset/data/train/2/151543077/151543077Image4.jpg
inflating: dataset/data/train/2/151543077/151543077Image5.jpg
inflating: dataset/data/train/2/151543077/151543077Image6.jpg
inflating: dataset/data/train/2/151543077/151543077Image7.jpg
creating: dataset/data/train/2/151937793/
inflating: dataset/data/train/2/151937793/151937793Image12.jpg
inflating: dataset/data/train/2/151937793/151937793Image13.jpg
inflating: dataset/data/train/2/151937793/151937793Image15.jpg
inflating: dataset/data/train/2/151937793/151937793Image2.jpg
inflating: dataset/data/train/2/151937793/151937793Image5.jpg
inflating: dataset/data/train/2/151937793/151937793Image6.jpg
inflating: dataset/data/train/2/151937793/151937793Image8.jpg
creating: dataset/data/train/2/152111870/
inflating: dataset/data/train/2/152111870/152111870Image0.jpg
inflating: dataset/data/train/2/152111870/152111870Image2.jpg
inflating: dataset/data/train/2/152111870/152111870Image3.jpg
inflating: dataset/data/train/2/152111870/152111870Image4.jpg
inflating: dataset/data/train/2/152111870/152111870Image5.jpg
inflating: dataset/data/train/2/152111870/152111870Image6.jpg
inflating: dataset/data/train/2/152111870/152111870Image7.jpg
creating: dataset/data/train/2/152325917/
inflating: dataset/data/train/2/152325917/152325917Image0.jpg
inflating: dataset/data/train/2/152325917/152325917Image11.jpg
inflating: dataset/data/train/2/152325917/152325917Image2.jpg
inflating: dataset/data/train/2/152325917/152325917Image3.jpg
inflating: dataset/data/train/2/152325917/152325917Image4.jpg
inflating: dataset/data/train/2/152325917/152325917Image8.jpg
inflating: dataset/data/train/2/152325917/152325917Image9.jpg
creating: dataset/data/train/2/152720600/
inflating: dataset/data/train/2/152720600/152720600Image0.jpg
inflating: dataset/data/train/2/152720600/152720600Image10.jpg
inflating: dataset/data/train/2/152720600/152720600Image2.jpg
inflating: dataset/data/train/2/152720600/152720600Image3.jpg
inflating: dataset/data/train/2/152720600/152720600Image6.jpg
inflating: dataset/data/train/2/152720600/152720600Image7.jpg
inflating: dataset/data/train/2/152720600/152720600Image8.jpg
creating: dataset/data/train/2/152815657/
inflating: dataset/data/train/2/152815657/152815657Image0.jpg
inflating: dataset/data/train/2/152815657/152815657Image2.jpg
inflating: dataset/data/train/2/152815657/152815657Image3.jpg
inflating: dataset/data/train/2/152815657/152815657Image5.jpg
inflating: dataset/data/train/2/152815657/152815657Image6.jpg
inflating: dataset/data/train/2/152815657/152815657Image7.jpg
inflating: dataset/data/train/2/152815657/152815657Image8.jpg
creating: dataset/data/train/2/152953857/
inflating: dataset/data/train/2/152953857/152953857Image0.jpg
inflating: dataset/data/train/2/152953857/152953857Image10.jpg
inflating: dataset/data/train/2/152953857/152953857Image12.jpg
inflating: dataset/data/train/2/152953857/152953857Image5.jpg
inflating: dataset/data/train/2/152953857/152953857Image7.jpg
inflating: dataset/data/train/2/152953857/152953857Image8.jpg
inflating: dataset/data/train/2/152953857/152953857Image9.jpg
creating: dataset/data/train/2/153019317/
inflating: dataset/data/train/2/153019317/153019317Image0.jpg
inflating: dataset/data/train/2/153019317/153019317Image10.jpg
inflating: dataset/data/train/2/153019317/153019317Image3.jpg
inflating: dataset/data/train/2/153019317/153019317Image4.jpg
inflating: dataset/data/train/2/153019317/153019317Image6.jpg
inflating: dataset/data/train/2/153019317/153019317Image8.jpg
inflating: dataset/data/train/2/153019317/153019317Image9.jpg
creating: dataset/data/train/2/153026150/
inflating: dataset/data/train/2/153026150/153026150Image0.jpg
inflating: dataset/data/train/2/153026150/153026150Image10.jpg
inflating: dataset/data/train/2/153026150/153026150Image2.jpg
inflating: dataset/data/train/2/153026150/153026150Image3.jpg
inflating: dataset/data/train/2/153026150/153026150Image6.jpg
inflating: dataset/data/train/2/153026150/153026150Image8.jpg
inflating: dataset/data/train/2/153026150/153026150Image9.jpg
creating: dataset/data/train/2/153932600/
inflating: dataset/data/train/2/153932600/153932600Image0.jpg
inflating: dataset/data/train/2/153932600/153932600Image2.jpg
inflating: dataset/data/train/2/153932600/153932600Image3.jpg
inflating: dataset/data/train/2/153932600/153932600Image4.jpg
inflating: dataset/data/train/2/153932600/153932600Image60.jpg
inflating: dataset/data/train/2/153932600/153932600Image7.jpg
inflating: dataset/data/train/2/153932600/153932600Image80.jpg
creating: dataset/data/train/2/154307580/
inflating: dataset/data/train/2/154307580/154307580Image0.jpg
inflating: dataset/data/train/2/154307580/154307580Image2.jpg
inflating: dataset/data/train/2/154307580/154307580Image3.jpg
inflating: dataset/data/train/2/154307580/154307580Image5.jpg
inflating: dataset/data/train/2/154307580/154307580Image6.jpg
inflating: dataset/data/train/2/154307580/154307580Image7.jpg
inflating: dataset/data/train/2/154307580/154307580Image8.jpg
creating: dataset/data/train/2/154804597/
inflating: dataset/data/train/2/154804597/154804597Image0.jpg
inflating: dataset/data/train/2/154804597/154804597Image2.jpg
inflating: dataset/data/train/2/154804597/154804597Image3.jpg
inflating: dataset/data/train/2/154804597/154804597Image5.jpg
inflating: dataset/data/train/2/154804597/154804597Image7.jpg
inflating: dataset/data/train/2/154804597/154804597Image8.jpg
inflating: dataset/data/train/2/154804597/154804597Image9.jpg
creating: dataset/data/train/2/155511193/
inflating: dataset/data/train/2/155511193/155511193Image0.jpg
inflating: dataset/data/train/2/155511193/155511193Image2.jpg
inflating: dataset/data/train/2/155511193/155511193Image3.jpg
inflating: dataset/data/train/2/155511193/155511193Image4.jpg
inflating: dataset/data/train/2/155511193/155511193Image60.jpg
inflating: dataset/data/train/2/155511193/155511193Image7.jpg
inflating: dataset/data/train/2/155511193/155511193Image70.jpg
creating: dataset/data/train/2/155818790/
inflating: dataset/data/train/2/155818790/155818790Image0.jpg
inflating: dataset/data/train/2/155818790/155818790Image100.jpg
inflating: dataset/data/train/2/155818790/155818790Image2.jpg
inflating: dataset/data/train/2/155818790/155818790Image5.jpg
inflating: dataset/data/train/2/155818790/155818790Image7.jpg
inflating: dataset/data/train/2/155818790/155818790Image80.jpg
inflating: dataset/data/train/2/155818790/155818790Image9.jpg
creating: dataset/data/train/2/160015640/
inflating: dataset/data/train/2/160015640/160015640Image0.jpg
inflating: dataset/data/train/2/160015640/160015640Image2.jpg
inflating: dataset/data/train/2/160015640/160015640Image3.jpg
inflating: dataset/data/train/2/160015640/160015640Image4.jpg
inflating: dataset/data/train/2/160015640/160015640Image6.jpg
inflating: dataset/data/train/2/160015640/160015640Image7.jpg
inflating: dataset/data/train/2/160015640/160015640Image8.jpg
creating: dataset/data/train/2/160305777/
inflating: dataset/data/train/2/160305777/160305777Image0.jpg
inflating: dataset/data/train/2/160305777/160305777Image2.jpg
inflating: dataset/data/train/2/160305777/160305777Image3.jpg
inflating: dataset/data/train/2/160305777/160305777Image4.jpg
inflating: dataset/data/train/2/160305777/160305777Image6.jpg
inflating: dataset/data/train/2/160305777/160305777Image7.jpg
inflating: dataset/data/train/2/160305777/160305777Image9.jpg
creating: dataset/data/train/2/161206193/
inflating: dataset/data/train/2/161206193/161206193Image0.jpg
inflating: dataset/data/train/2/161206193/161206193Image2.jpg
inflating: dataset/data/train/2/161206193/161206193Image4.jpg
inflating: dataset/data/train/2/161206193/161206193Image5.jpg
inflating: dataset/data/train/2/161206193/161206193Image60.jpg
inflating: dataset/data/train/2/161206193/161206193Image7.jpg
inflating: dataset/data/train/2/161206193/161206193Image80.jpg
creating: dataset/data/train/2/20150819005/
inflating: dataset/data/train/2/20150819005/20150819150839.jpg
inflating: dataset/data/train/2/20150819005/20150819150958.jpg
inflating: dataset/data/train/2/20150819005/20150819151030.jpg
inflating: dataset/data/train/2/20150819005/20150819151101.jpg
inflating: dataset/data/train/2/20150819005/20150819151129.jpg
inflating: dataset/data/train/2/20150819005/20150819151145.jpg
inflating: dataset/data/train/2/20150819005/20150819151221.jpg
creating: dataset/data/train/2/20150819011/
inflating: dataset/data/train/2/20150819011/20150819163415.jpg
inflating: dataset/data/train/2/20150819011/20150819163538.jpg
inflating: dataset/data/train/2/20150819011/20150819163608.jpg
inflating: dataset/data/train/2/20150819011/20150819163638.jpg
inflating: dataset/data/train/2/20150819011/20150819163708.jpg
inflating: dataset/data/train/2/20150819011/20150819163709.jpg
inflating: dataset/data/train/2/20150819011/20150819163805.jpg
creating: dataset/data/train/2/20150826001/
inflating: dataset/data/train/2/20150826001/20150826100701.jpg
inflating: dataset/data/train/2/20150826001/20150826100707.jpg
inflating: dataset/data/train/2/20150826001/20150826100833.jpg
inflating: dataset/data/train/2/20150826001/20150826100901.jpg
inflating: dataset/data/train/2/20150826001/20150826100927.jpg
inflating: dataset/data/train/2/20150826001/20150826100929.jpg
inflating: dataset/data/train/2/20150826001/20150826101105.jpg
creating: dataset/data/train/2/20150902002/
inflating: dataset/data/train/2/20150902002/20150902100419.jpg
inflating: dataset/data/train/2/20150902002/20150902100456.jpg
inflating: dataset/data/train/2/20150902002/20150902100631.jpg
inflating: dataset/data/train/2/20150902002/20150902100659.jpg
inflating: dataset/data/train/2/20150902002/20150902100725.jpg
inflating: dataset/data/train/2/20150902002/20150902100728.jpg
inflating: dataset/data/train/2/20150902002/20150902100821.jpg
creating: dataset/data/train/2/20150902004/
inflating: dataset/data/train/2/20150902004/20150902145516.jpg
inflating: dataset/data/train/2/20150902004/20150902145636.jpg
inflating: dataset/data/train/2/20150902004/20150902145714.jpg
inflating: dataset/data/train/2/20150902004/20150902145752.jpg
inflating: dataset/data/train/2/20150902004/20150902145813.jpg
inflating: dataset/data/train/2/20150902004/20150902145817.jpg
inflating: dataset/data/train/2/20150902004/20150902145915.jpg
creating: dataset/data/train/2/20150902008/
inflating: dataset/data/train/2/20150902008/20150902154045.jpg
inflating: dataset/data/train/2/20150902008/20150902154223.jpg
inflating: dataset/data/train/2/20150902008/20150902154249.jpg
inflating: dataset/data/train/2/20150902008/20150902154324.jpg
inflating: dataset/data/train/2/20150902008/20150902154350.jpg
inflating: dataset/data/train/2/20150902008/20150902154358.jpg
inflating: dataset/data/train/2/20150902008/20150902154542.jpg
creating: dataset/data/train/2/20150923009/
inflating: dataset/data/train/2/20150923009/20150923161717.jpg
inflating: dataset/data/train/2/20150923009/20150923161858.jpg
inflating: dataset/data/train/2/20150923009/20150923161921.jpg
inflating: dataset/data/train/2/20150923009/20150923161947.jpg
inflating: dataset/data/train/2/20150923009/20150923162023.jpg
inflating: dataset/data/train/2/20150923009/20150923162054.jpg
inflating: dataset/data/train/2/20150923009/20150923162146.jpg
creating: dataset/data/train/2/20151023006/
inflating: dataset/data/train/2/20151023006/20151023155614.jpg
inflating: dataset/data/train/2/20151023006/20151023155750.jpg
inflating: dataset/data/train/2/20151023006/20151023155818.jpg
inflating: dataset/data/train/2/20151023006/20151023155848.jpg
inflating: dataset/data/train/2/20151023006/20151023155916.jpg
inflating: dataset/data/train/2/20151023006/20151023155940.jpg
inflating: dataset/data/train/2/20151023006/20151023160353.jpg
creating: dataset/data/train/2/20151104004/
inflating: dataset/data/train/2/20151104004/20151104113302.jpg
inflating: dataset/data/train/2/20151104004/20151104113446.jpg
inflating: dataset/data/train/2/20151104004/20151104113452.jpg
inflating: dataset/data/train/2/20151104004/20151104113507.jpg
inflating: dataset/data/train/2/20151104004/20151104113513.jpg
inflating: dataset/data/train/2/20151104004/20151104113515.jpg
inflating: dataset/data/train/2/20151104004/20151104113625.jpg
creating: dataset/data/train/2/20151104005/
inflating: dataset/data/train/2/20151104005/20151104141943.jpg
inflating: dataset/data/train/2/20151104005/20151104142113.jpg
inflating: dataset/data/train/2/20151104005/20151104142138.jpg
inflating: dataset/data/train/2/20151104005/20151104142208.jpg
inflating: dataset/data/train/2/20151104005/20151104142244.jpg
inflating: dataset/data/train/2/20151104005/20151104142249.jpg
inflating: dataset/data/train/2/20151104005/20151104142328.jpg
creating: dataset/data/train/2/20151110002/
inflating: dataset/data/train/2/20151110002/20151110145147.jpg
inflating: dataset/data/train/2/20151110002/20151110145323.jpg
inflating: dataset/data/train/2/20151110002/20151110145351.jpg
inflating: dataset/data/train/2/20151110002/20151110145424.jpg
inflating: dataset/data/train/2/20151110002/20151110145500.jpg
inflating: dataset/data/train/2/20151110002/20151110145505.jpg
inflating: dataset/data/train/2/20151110002/20151110145552.jpg
creating: dataset/data/train/2/20151111003/
inflating: dataset/data/train/2/20151111003/20151111145308.jpg
inflating: dataset/data/train/2/20151111003/20151111145500.jpg
inflating: dataset/data/train/2/20151111003/20151111145537.jpg
inflating: dataset/data/train/2/20151111003/20151111145601.jpg
inflating: dataset/data/train/2/20151111003/20151111145634.jpg
inflating: dataset/data/train/2/20151111003/20151111145639.jpg
inflating: dataset/data/train/2/20151111003/20151111145732.jpg
creating: dataset/data/train/2/20151112002/
inflating: dataset/data/train/2/20151112002/20151112093253.jpg
inflating: dataset/data/train/2/20151112002/20151112093428.jpg
inflating: dataset/data/train/2/20151112002/20151112093448.jpg
inflating: dataset/data/train/2/20151112002/20151112093522.jpg
inflating: dataset/data/train/2/20151112002/20151112093541.jpg
inflating: dataset/data/train/2/20151112002/20151112093548.jpg
inflating: dataset/data/train/2/20151112002/20151112093634.jpg
creating: dataset/data/train/2/20151113005/
inflating: dataset/data/train/2/20151113005/20151113150507.jpg
inflating: dataset/data/train/2/20151113005/20151113150638.jpg
inflating: dataset/data/train/2/20151113005/20151113150704.jpg
inflating: dataset/data/train/2/20151113005/20151113150738.jpg
inflating: dataset/data/train/2/20151113005/20151113150801.jpg
inflating: dataset/data/train/2/20151113005/20151113150824.jpg
inflating: dataset/data/train/2/20151113005/20151113150852.jpg
creating: dataset/data/train/2/20151113006/
inflating: dataset/data/train/2/20151113006/20151113151651.jpg
inflating: dataset/data/train/2/20151113006/20151113151812.jpg
inflating: dataset/data/train/2/20151113006/20151113151842.jpg
inflating: dataset/data/train/2/20151113006/20151113151910.jpg
inflating: dataset/data/train/2/20151113006/20151113151940.jpg
inflating: dataset/data/train/2/20151113006/20151113151945.jpg
inflating: dataset/data/train/2/20151113006/20151113152043.jpg
creating: dataset/data/train/2/20151113008/
inflating: dataset/data/train/2/20151113008/20151113153922.jpg
inflating: dataset/data/train/2/20151113008/20151113154045.jpg
inflating: dataset/data/train/2/20151113008/20151113154115.jpg
inflating: dataset/data/train/2/20151113008/20151113154144.jpg
inflating: dataset/data/train/2/20151113008/20151113154214.jpg
inflating: dataset/data/train/2/20151113008/20151113154218.jpg
inflating: dataset/data/train/2/20151113008/20151113154303.jpg
creating: dataset/data/train/2/20151116001/
inflating: dataset/data/train/2/20151116001/20151116143926.jpg
inflating: dataset/data/train/2/20151116001/20151116144107.jpg
inflating: dataset/data/train/2/20151116001/20151116144136.jpg
inflating: dataset/data/train/2/20151116001/20151116144205.jpg
inflating: dataset/data/train/2/20151116001/20151116144236.jpg
inflating: dataset/data/train/2/20151116001/20151116144239.jpg
inflating: dataset/data/train/2/20151116001/20151116144324.jpg
creating: dataset/data/train/2/20151116005/
inflating: dataset/data/train/2/20151116005/20151116153350.jpg
inflating: dataset/data/train/2/20151116005/20151116153513.jpg
inflating: dataset/data/train/2/20151116005/20151116153542.jpg
inflating: dataset/data/train/2/20151116005/20151116153610.jpg
inflating: dataset/data/train/2/20151116005/20151116153643.jpg
inflating: dataset/data/train/2/20151116005/20151116153648.jpg
inflating: dataset/data/train/2/20151116005/20151116153749.jpg
creating: dataset/data/train/2/20151118002/
inflating: dataset/data/train/2/20151118002/20151118151650.jpg
inflating: dataset/data/train/2/20151118002/20151118151825.jpg
inflating: dataset/data/train/2/20151118002/20151118151901.jpg
inflating: dataset/data/train/2/20151118002/20151118151928.jpg
inflating: dataset/data/train/2/20151118002/20151118151950.jpg
inflating: dataset/data/train/2/20151118002/20151118152026.jpg
inflating: dataset/data/train/2/20151118002/20151118152132.jpg
creating: dataset/data/train/2/20151127014/
inflating: dataset/data/train/2/20151127014/20151127162609.jpg
inflating: dataset/data/train/2/20151127014/20151127162822.jpg
inflating: dataset/data/train/2/20151127014/20151127162832.jpg
inflating: dataset/data/train/2/20151127014/20151127162834.jpg
inflating: dataset/data/train/2/20151127014/20151127162941.jpg
inflating: dataset/data/train/2/20151127014/20151127162953.jpg
inflating: dataset/data/train/2/20151127014/20151127163130.jpg
creating: dataset/data/train/2/20151130007/
inflating: dataset/data/train/2/20151130007/20151130161719.jpg
inflating: dataset/data/train/2/20151130007/20151130161848.jpg
inflating: dataset/data/train/2/20151130007/20151130161859.jpg
inflating: dataset/data/train/2/20151130007/20151130161943.jpg
inflating: dataset/data/train/2/20151130007/20151130162006.jpg
inflating: dataset/data/train/2/20151130007/20151130162007.jpg
inflating: dataset/data/train/2/20151130007/20151130162116.jpg
creating: dataset/data/train/2/20151130008/
inflating: dataset/data/train/2/20151130008/20151130155656.jpg
inflating: dataset/data/train/2/20151130008/20151130155900.jpg
inflating: dataset/data/train/2/20151130008/20151130155922.jpg
inflating: dataset/data/train/2/20151130008/20151130160005.jpg
inflating: dataset/data/train/2/20151130008/20151130160030.jpg
inflating: dataset/data/train/2/20151130008/20151130160035.jpg
inflating: dataset/data/train/2/20151130008/20151130160139.jpg
creating: dataset/data/train/2/20151211005/
inflating: dataset/data/train/2/20151211005/20151211155048.jpg
inflating: dataset/data/train/2/20151211005/20151211155217.jpg
inflating: dataset/data/train/2/20151211005/20151211155240.jpg
inflating: dataset/data/train/2/20151211005/20151211155311.jpg
inflating: dataset/data/train/2/20151211005/20151211155341.jpg
inflating: dataset/data/train/2/20151211005/20151211155345.jpg
inflating: dataset/data/train/2/20151211005/20151211155505.jpg
creating: dataset/data/train/2/20151214008/
inflating: dataset/data/train/2/20151214008/20151214152153.jpg
inflating: dataset/data/train/2/20151214008/20151214152437.jpg
inflating: dataset/data/train/2/20151214008/20151214152509.jpg
inflating: dataset/data/train/2/20151214008/20151214152541.jpg
inflating: dataset/data/train/2/20151214008/20151214152606.jpg
inflating: dataset/data/train/2/20151214008/20151214152609.jpg
inflating: dataset/data/train/2/20151214008/20151214152700.jpg
creating: dataset/data/train/2/20151214011/
inflating: dataset/data/train/2/20151214011/20151214161800.jpg
inflating: dataset/data/train/2/20151214011/20151214161914.jpg
inflating: dataset/data/train/2/20151214011/20151214161944.jpg
inflating: dataset/data/train/2/20151214011/20151214162017.jpg
inflating: dataset/data/train/2/20151214011/20151214162046.jpg
inflating: dataset/data/train/2/20151214011/20151214162049.jpg
inflating: dataset/data/train/2/20151214011/20151214162201.jpg
creating: dataset/data/train/2/20151216001/
inflating: dataset/data/train/2/20151216001/20151216145857.jpg
inflating: dataset/data/train/2/20151216001/20151216150022.jpg
inflating: dataset/data/train/2/20151216001/20151216150047.jpg
inflating: dataset/data/train/2/20151216001/20151216150123.jpg
inflating: dataset/data/train/2/20151216001/20151216150151.jpg
inflating: dataset/data/train/2/20151216001/20151216150158.jpg
inflating: dataset/data/train/2/20151216001/20151216150258.jpg
creating: dataset/data/train/2/20151216007/
inflating: dataset/data/train/2/20151216007/20151216144457.jpg
inflating: dataset/data/train/2/20151216007/20151216144617.jpg
inflating: dataset/data/train/2/20151216007/20151216144648.jpg
inflating: dataset/data/train/2/20151216007/20151216144717.jpg
inflating: dataset/data/train/2/20151216007/20151216144750.jpg
inflating: dataset/data/train/2/20151216007/20151216144755.jpg
inflating: dataset/data/train/2/20151216007/20151216144908.jpg
creating: dataset/data/train/2/20151216009/
inflating: dataset/data/train/2/20151216009/20151216153107.jpg
inflating: dataset/data/train/2/20151216009/20151216153230.jpg
inflating: dataset/data/train/2/20151216009/20151216153307.jpg
inflating: dataset/data/train/2/20151216009/20151216153331.jpg
inflating: dataset/data/train/2/20151216009/20151216153403.jpg
inflating: dataset/data/train/2/20151216009/20151216153408.jpg
inflating: dataset/data/train/2/20151216009/20151216153457.jpg
creating: dataset/data/train/2/20151216012/
inflating: dataset/data/train/2/20151216012/20151216161839.jpg
inflating: dataset/data/train/2/20151216012/20151216162028.jpg
inflating: dataset/data/train/2/20151216012/20151216162051.jpg
inflating: dataset/data/train/2/20151216012/20151216162130.jpg
inflating: dataset/data/train/2/20151216012/20151216162156.jpg
inflating: dataset/data/train/2/20151216012/20151216162415.jpg
inflating: dataset/data/train/2/20151216012/20151216162504.jpg
creating: dataset/data/train/2/20151230003/
inflating: dataset/data/train/2/20151230003/20151230144252.jpg
inflating: dataset/data/train/2/20151230003/20151230144417.jpg
inflating: dataset/data/train/2/20151230003/20151230144448.jpg
inflating: dataset/data/train/2/20151230003/20151230144517.jpg
inflating: dataset/data/train/2/20151230003/20151230144549.jpg
inflating: dataset/data/train/2/20151230003/20151230144558.jpg
inflating: dataset/data/train/2/20151230003/20151230144656.jpg
creating: dataset/data/train/2/20151230007/
inflating: dataset/data/train/2/20151230007/20151230152256.jpg
inflating: dataset/data/train/2/20151230007/20151230152431.jpg
inflating: dataset/data/train/2/20151230007/20151230152455.jpg
inflating: dataset/data/train/2/20151230007/20151230152523.jpg
inflating: dataset/data/train/2/20151230007/20151230152558.jpg
inflating: dataset/data/train/2/20151230007/20151230152559.jpg
inflating: dataset/data/train/2/20151230007/20151230152708.jpg
creating: dataset/data/train/2/20151230009/
inflating: dataset/data/train/2/20151230009/20151230154917.jpg
inflating: dataset/data/train/2/20151230009/20151230155102.jpg
inflating: dataset/data/train/2/20151230009/20151230155136.jpg
inflating: dataset/data/train/2/20151230009/20151230155159.jpg
inflating: dataset/data/train/2/20151230009/20151230155229.jpg
inflating: dataset/data/train/2/20151230009/20151230155234.jpg
inflating: dataset/data/train/2/20151230009/20151230155317.jpg
creating: dataset/data/train/2/20151230015/
inflating: dataset/data/train/2/20151230015/20151230171034.jpg
inflating: dataset/data/train/2/20151230015/20151230171203.jpg
inflating: dataset/data/train/2/20151230015/20151230171224.jpg
inflating: dataset/data/train/2/20151230015/20151230171258.jpg
inflating: dataset/data/train/2/20151230015/20151230171324.jpg
inflating: dataset/data/train/2/20151230015/20151230171328.jpg
inflating: dataset/data/train/2/20151230015/20151230171438.jpg
creating: dataset/data/train/2/20151231006/
inflating: dataset/data/train/2/20151231006/20151231155301.jpg
inflating: dataset/data/train/2/20151231006/20151231155428.jpg
inflating: dataset/data/train/2/20151231006/20151231155455.jpg
inflating: dataset/data/train/2/20151231006/20151231155532.jpg
inflating: dataset/data/train/2/20151231006/20151231155553.jpg
inflating: dataset/data/train/2/20151231006/20151231155558.jpg
inflating: dataset/data/train/2/20151231006/20151231155729.jpg
creating: dataset/data/train/2/20160201003/
inflating: dataset/data/train/2/20160201003/20160201145143.jpg
inflating: dataset/data/train/2/20160201003/20160201145301.jpg
inflating: dataset/data/train/2/20160201003/20160201145334.jpg
inflating: dataset/data/train/2/20160201003/20160201145407.jpg
inflating: dataset/data/train/2/20160201003/20160201145428.jpg
inflating: dataset/data/train/2/20160201003/20160201145429.jpg
inflating: dataset/data/train/2/20160201003/20160201145537.jpg
creating: dataset/data/train/2/20160316009/
inflating: dataset/data/train/2/20160316009/20160316154216.jpg
inflating: dataset/data/train/2/20160316009/20160316154331.jpg
inflating: dataset/data/train/2/20160316009/20160316154414.jpg
inflating: dataset/data/train/2/20160316009/20160316154443.jpg
inflating: dataset/data/train/2/20160316009/20160316154515.jpg
inflating: dataset/data/train/2/20160316009/20160316154518.jpg
inflating: dataset/data/train/2/20160316009/20160316154635.jpg
creating: dataset/data/train/2/20160317001/
inflating: dataset/data/train/2/20160317001/20160317151710.jpg
inflating: dataset/data/train/2/20160317001/20160317151833.jpg
inflating: dataset/data/train/2/20160317001/20160317151903.jpg
inflating: dataset/data/train/2/20160317001/20160317151931.jpg
inflating: dataset/data/train/2/20160317001/20160317152003.jpg
inflating: dataset/data/train/2/20160317001/20160317152005.jpg
inflating: dataset/data/train/2/20160317001/20160317152107.jpg
creating: dataset/data/train/2/20160321001/
inflating: dataset/data/train/2/20160321001/20160321143811.jpg
inflating: dataset/data/train/2/20160321001/20160321143933.jpg
inflating: dataset/data/train/2/20160321001/20160321144007.jpg
inflating: dataset/data/train/2/20160321001/20160321144035.jpg
inflating: dataset/data/train/2/20160321001/20160321144103.jpg
inflating: dataset/data/train/2/20160321001/20160321144108.jpg
inflating: dataset/data/train/2/20160321001/20160321144151.jpg
creating: dataset/data/train/2/20160324005/
inflating: dataset/data/train/2/20160324005/20160324153010.jpg
inflating: dataset/data/train/2/20160324005/20160324153130.jpg
inflating: dataset/data/train/2/20160324005/20160324153156.jpg
inflating: dataset/data/train/2/20160324005/20160324153226.jpg
inflating: dataset/data/train/2/20160324005/20160324153256.jpg
inflating: dataset/data/train/2/20160324005/20160324153258.jpg
inflating: dataset/data/train/2/20160324005/20160324153345.jpg
creating: dataset/data/train/2/20160401003/
inflating: dataset/data/train/2/20160401003/20160401150627.jpg
inflating: dataset/data/train/2/20160401003/20160401150811.jpg
inflating: dataset/data/train/2/20160401003/20160401150836.jpg
inflating: dataset/data/train/2/20160401003/20160401150850.jpg
inflating: dataset/data/train/2/20160401003/20160401150917.jpg
inflating: dataset/data/train/2/20160401003/20160401150919.jpg
inflating: dataset/data/train/2/20160401003/20160401150952.jpg
creating: dataset/data/train/2/20160405004/
inflating: dataset/data/train/2/20160405004/20160405122840.jpg
inflating: dataset/data/train/2/20160405004/20160405123022.jpg
inflating: dataset/data/train/2/20160405004/20160405123044.jpg
inflating: dataset/data/train/2/20160405004/20160405123114.jpg
inflating: dataset/data/train/2/20160405004/20160405123144.jpg
inflating: dataset/data/train/2/20160405004/20160405123148.jpg
inflating: dataset/data/train/2/20160405004/20160405123259.jpg
creating: dataset/data/train/2/20160406010/
inflating: dataset/data/train/2/20160406010/20160406150011.jpg
inflating: dataset/data/train/2/20160406010/20160406150139.jpg
inflating: dataset/data/train/2/20160406010/20160406150211.jpg
inflating: dataset/data/train/2/20160406010/20160406150239.jpg
inflating: dataset/data/train/2/20160406010/20160406150310.jpg
inflating: dataset/data/train/2/20160406010/20160406150315.jpg
inflating: dataset/data/train/2/20160406010/20160406150400.jpg
creating: dataset/data/train/2/20160406013/
inflating: dataset/data/train/2/20160406013/20160406154107.jpg
inflating: dataset/data/train/2/20160406013/20160406154234.jpg
inflating: dataset/data/train/2/20160406013/20160406154306.jpg
inflating: dataset/data/train/2/20160406013/20160406154343.jpg
inflating: dataset/data/train/2/20160406013/20160406154405.jpg
inflating: dataset/data/train/2/20160406013/20160406154408.jpg
inflating: dataset/data/train/2/20160406013/20160406154450.jpg
creating: dataset/data/train/2/20160411002/
inflating: dataset/data/train/2/20160411002/20160411144840.jpg
inflating: dataset/data/train/2/20160411002/20160411145020.jpg
inflating: dataset/data/train/2/20160411002/20160411145050.jpg
inflating: dataset/data/train/2/20160411002/20160411145108.jpg
inflating: dataset/data/train/2/20160411002/20160411145124.jpg
inflating: dataset/data/train/2/20160411002/20160411145135.jpg
inflating: dataset/data/train/2/20160411002/20160411145402.jpg
creating: dataset/data/train/2/20160419001/
inflating: dataset/data/train/2/20160419001/20160419112223.jpg
inflating: dataset/data/train/2/20160419001/20160419112344.jpg
inflating: dataset/data/train/2/20160419001/20160419112413.jpg
inflating: dataset/data/train/2/20160419001/20160419112445.jpg
inflating: dataset/data/train/2/20160419001/20160419112514.jpg
inflating: dataset/data/train/2/20160419001/20160419112518.jpg
inflating: dataset/data/train/2/20160419001/20160419112609.jpg
creating: dataset/data/train/2/20160425002/
inflating: dataset/data/train/2/20160425002/20160425143325.jpg
inflating: dataset/data/train/2/20160425002/20160425143443.jpg
inflating: dataset/data/train/2/20160425002/20160425143518.jpg
inflating: dataset/data/train/2/20160425002/20160425143535.jpg
inflating: dataset/data/train/2/20160425002/20160425143612.jpg
inflating: dataset/data/train/2/20160425002/20160425143615.jpg
inflating: dataset/data/train/2/20160425002/20160425143721.jpg
creating: dataset/data/train/2/20160425003/
inflating: dataset/data/train/2/20160425003/20160425145851.jpg
inflating: dataset/data/train/2/20160425003/20160425150041.jpg
inflating: dataset/data/train/2/20160425003/20160425150108.jpg
inflating: dataset/data/train/2/20160425003/20160425150139.jpg
inflating: dataset/data/train/2/20160425003/20160425150202.jpg
inflating: dataset/data/train/2/20160425003/20160425150205.jpg
inflating: dataset/data/train/2/20160425003/20160425150250.jpg
creating: dataset/data/train/2/20160425010/
inflating: dataset/data/train/2/20160425010/20160425170006.jpg
inflating: dataset/data/train/2/20160425010/20160425170137.jpg
inflating: dataset/data/train/2/20160425010/20160425170200.jpg
inflating: dataset/data/train/2/20160425010/20160425170230.jpg
inflating: dataset/data/train/2/20160425010/20160425170302.jpg
inflating: dataset/data/train/2/20160425010/20160425170308.jpg
inflating: dataset/data/train/2/20160425010/20160425170403.jpg
creating: dataset/data/train/2/20160426001/
inflating: dataset/data/train/2/20160426001/20160426114319.jpg
inflating: dataset/data/train/2/20160426001/20160426114509.jpg
inflating: dataset/data/train/2/20160426001/20160426114533.jpg
inflating: dataset/data/train/2/20160426001/20160426114604.jpg
inflating: dataset/data/train/2/20160426001/20160426114647.jpg
inflating: dataset/data/train/2/20160426001/20160426114648.jpg
inflating: dataset/data/train/2/20160426001/20160426114749.jpg
creating: dataset/data/train/2/20160428001/
inflating: dataset/data/train/2/20160428001/20160428104702.jpg
inflating: dataset/data/train/2/20160428001/20160428104839.jpg
inflating: dataset/data/train/2/20160428001/20160428104921.jpg
inflating: dataset/data/train/2/20160428001/20160428104939.jpg
inflating: dataset/data/train/2/20160428001/20160428105010.jpg
inflating: dataset/data/train/2/20160428001/20160428105020.jpg
inflating: dataset/data/train/2/20160428001/20160428105108.jpg
creating: dataset/data/train/2/20160504009/
inflating: dataset/data/train/2/20160504009/20160504163119.jpg
inflating: dataset/data/train/2/20160504009/20160504163236.jpg
inflating: dataset/data/train/2/20160504009/20160504163306.jpg
inflating: dataset/data/train/2/20160504009/20160504163336.jpg
inflating: dataset/data/train/2/20160504009/20160504163406.jpg
inflating: dataset/data/train/2/20160504009/20160504163408.jpg
inflating: dataset/data/train/2/20160504009/20160504163454.jpg
creating: dataset/data/train/2/20160511003/
inflating: dataset/data/train/2/20160511003/20160511101023.jpg
inflating: dataset/data/train/2/20160511003/20160511101141.jpg
inflating: dataset/data/train/2/20160511003/20160511101211.jpg
inflating: dataset/data/train/2/20160511003/20160511101243.jpg
inflating: dataset/data/train/2/20160511003/20160511101312.jpg
inflating: dataset/data/train/2/20160511003/20160511101315.jpg
inflating: dataset/data/train/2/20160511003/20160511101339.jpg
creating: dataset/data/train/2/20160601004/
inflating: dataset/data/train/2/20160601004/20160601101748.jpg
inflating: dataset/data/train/2/20160601004/20160601101929.jpg
inflating: dataset/data/train/2/20160601004/20160601102000.jpg
inflating: dataset/data/train/2/20160601004/20160601102017.jpg
inflating: dataset/data/train/2/20160601004/20160601102050.jpg
inflating: dataset/data/train/2/20160601004/20160601102058.jpg
inflating: dataset/data/train/2/20160601004/20160601102124.jpg
creating: dataset/data/train/2/20160601007/
inflating: dataset/data/train/2/20160601007/20160601145539.jpg
inflating: dataset/data/train/2/20160601007/20160601145655.jpg
inflating: dataset/data/train/2/20160601007/20160601145730.jpg
inflating: dataset/data/train/2/20160601007/20160601145756.jpg
inflating: dataset/data/train/2/20160601007/20160601145825.jpg
inflating: dataset/data/train/2/20160601007/20160601145828.jpg
inflating: dataset/data/train/2/20160601007/20160601145929.jpg
creating: dataset/data/train/2/20160607001/
inflating: dataset/data/train/2/20160607001/20160607150750.jpg
inflating: dataset/data/train/2/20160607001/20160607151011.jpg
inflating: dataset/data/train/2/20160607001/20160607151028.jpg
inflating: dataset/data/train/2/20160607001/20160607151059.jpg
inflating: dataset/data/train/2/20160607001/20160607151127.jpg
inflating: dataset/data/train/2/20160607001/20160607151129.jpg
inflating: dataset/data/train/2/20160607001/20160607151233.jpg
creating: dataset/data/train/2/20160608003/
inflating: dataset/data/train/2/20160608003/20160608095938.jpg
inflating: dataset/data/train/2/20160608003/20160608100109.jpg
inflating: dataset/data/train/2/20160608003/20160608100127.jpg
inflating: dataset/data/train/2/20160608003/20160608100204.jpg
inflating: dataset/data/train/2/20160608003/20160608100227.jpg
inflating: dataset/data/train/2/20160608003/20160608100229.jpg
inflating: dataset/data/train/2/20160608003/20160608100314.jpg
creating: dataset/data/train/2/20160629009/
inflating: dataset/data/train/2/20160629009/20160629153634.jpg
inflating: dataset/data/train/2/20160629009/20160629153816.jpg
inflating: dataset/data/train/2/20160629009/20160629153840.jpg
inflating: dataset/data/train/2/20160629009/20160629153916.jpg
inflating: dataset/data/train/2/20160629009/20160629153937.jpg
inflating: dataset/data/train/2/20160629009/20160629153942.jpg
inflating: dataset/data/train/2/20160629009/20160629154124.jpg
creating: dataset/data/train/2/20160704002/
inflating: dataset/data/train/2/20160704002/20160704143753.jpg
inflating: dataset/data/train/2/20160704002/20160704143911.jpg
inflating: dataset/data/train/2/20160704002/20160704143939.jpg
inflating: dataset/data/train/2/20160704002/20160704144018.jpg
inflating: dataset/data/train/2/20160704002/20160704144040.jpg
inflating: dataset/data/train/2/20160704002/20160704144048.jpg
inflating: dataset/data/train/2/20160704002/20160704144138.jpg
creating: dataset/data/train/2/20160704004/
inflating: dataset/data/train/2/20160704004/20160704150558.jpg
inflating: dataset/data/train/2/20160704004/20160704150717.jpg
inflating: dataset/data/train/2/20160704004/20160704150747.jpg
inflating: dataset/data/train/2/20160704004/20160704150805.jpg
inflating: dataset/data/train/2/20160704004/20160704150814.jpg
inflating: dataset/data/train/2/20160704004/20160704150819.jpg
inflating: dataset/data/train/2/20160704004/20160704150905.jpg
creating: dataset/data/train/2/20160705001/
inflating: dataset/data/train/2/20160705001/20160705112514.jpg
inflating: dataset/data/train/2/20160705001/20160705112648.jpg
inflating: dataset/data/train/2/20160705001/20160705112710.jpg
inflating: dataset/data/train/2/20160705001/20160705112752.jpg
inflating: dataset/data/train/2/20160705001/20160705112806.jpg
inflating: dataset/data/train/2/20160705001/20160705112809.jpg
inflating: dataset/data/train/2/20160705001/20160705112841.jpg
creating: dataset/data/train/2/20160712002/
inflating: dataset/data/train/2/20160712002/20160712112546.jpg
inflating: dataset/data/train/2/20160712002/20160712112730.jpg
inflating: dataset/data/train/2/20160712002/20160712112759.jpg
inflating: dataset/data/train/2/20160712002/20160712112826.jpg
inflating: dataset/data/train/2/20160712002/20160712112902.jpg
inflating: dataset/data/train/2/20160712002/20160712112905.jpg
inflating: dataset/data/train/2/20160712002/20160712113008.jpg
creating: dataset/data/train/2/20160720009/
inflating: dataset/data/train/2/20160720009/20160720151520.jpg
inflating: dataset/data/train/2/20160720009/20160720151659.jpg
inflating: dataset/data/train/2/20160720009/20160720151718.jpg
inflating: dataset/data/train/2/20160720009/20160720151746.jpg
inflating: dataset/data/train/2/20160720009/20160720151816.jpg
inflating: dataset/data/train/2/20160720009/20160720151819.jpg
inflating: dataset/data/train/2/20160720009/20160720151929.jpg
creating: dataset/data/train/2/20160721004/
inflating: dataset/data/train/2/20160721004/20160721111629.jpg
inflating: dataset/data/train/2/20160721004/20160721111803.jpg
inflating: dataset/data/train/2/20160721004/20160721111833.jpg
inflating: dataset/data/train/2/20160721004/20160721111851.jpg
inflating: dataset/data/train/2/20160721004/20160721111921.jpg
inflating: dataset/data/train/2/20160721004/20160721111925.jpg
inflating: dataset/data/train/2/20160721004/20160721112056.jpg
creating: dataset/data/train/2/20160722007/
inflating: dataset/data/train/2/20160722007/20160722165122.jpg
inflating: dataset/data/train/2/20160722007/20160722165254.jpg
inflating: dataset/data/train/2/20160722007/20160722165316.jpg
inflating: dataset/data/train/2/20160722007/20160722165346.jpg
inflating: dataset/data/train/2/20160722007/20160722165412.jpg
inflating: dataset/data/train/2/20160722007/20160722165414.jpg
inflating: dataset/data/train/2/20160722007/20160722165549.jpg
creating: dataset/data/train/2/20160725003/
inflating: dataset/data/train/2/20160725003/20160725144346.jpg
inflating: dataset/data/train/2/20160725003/20160725144516.jpg
inflating: dataset/data/train/2/20160725003/20160725144545.jpg
inflating: dataset/data/train/2/20160725003/20160725144618.jpg
inflating: dataset/data/train/2/20160725003/20160725144639.jpg
inflating: dataset/data/train/2/20160725003/20160725144650.jpg
inflating: dataset/data/train/2/20160725003/20160725144745.jpg
creating: dataset/data/train/2/20160727003/
inflating: dataset/data/train/2/20160727003/20160727105108.jpg
inflating: dataset/data/train/2/20160727003/20160727105221.jpg
inflating: dataset/data/train/2/20160727003/20160727105243.jpg
inflating: dataset/data/train/2/20160727003/20160727105317.jpg
inflating: dataset/data/train/2/20160727003/20160727105351.jpg
inflating: dataset/data/train/2/20160727003/20160727105359.jpg
inflating: dataset/data/train/2/20160727003/20160727105415.jpg
creating: dataset/data/train/2/20160808001/
inflating: dataset/data/train/2/20160808001/20160808144553.jpg
inflating: dataset/data/train/2/20160808001/20160808144712.jpg
inflating: dataset/data/train/2/20160808001/20160808144740.jpg
inflating: dataset/data/train/2/20160808001/20160808144812.jpg
inflating: dataset/data/train/2/20160808001/20160808144841.jpg
inflating: dataset/data/train/2/20160808001/20160808144845.jpg
inflating: dataset/data/train/2/20160808001/20160808144906.jpg
creating: dataset/data/train/2/20160817002/
inflating: dataset/data/train/2/20160817002/20160817144606.jpg
inflating: dataset/data/train/2/20160817002/20160817144739.jpg
inflating: dataset/data/train/2/20160817002/20160817144758.jpg
inflating: dataset/data/train/2/20160817002/20160817144831.jpg
inflating: dataset/data/train/2/20160817002/20160817144858.jpg
inflating: dataset/data/train/2/20160817002/20160817144859.jpg
inflating: dataset/data/train/2/20160817002/20160817144936.jpg
creating: dataset/data/train/2/20160817004/
inflating: dataset/data/train/2/20160817004/20160817150522.jpg
inflating: dataset/data/train/2/20160817004/20160817150645.jpg
inflating: dataset/data/train/2/20160817004/20160817150712.jpg
inflating: dataset/data/train/2/20160817004/20160817150747.jpg
inflating: dataset/data/train/2/20160817004/20160817150812.jpg
inflating: dataset/data/train/2/20160817004/20160817150819.jpg
inflating: dataset/data/train/2/20160817004/20160817150956.jpg
creating: dataset/data/train/2/20160822001/
inflating: dataset/data/train/2/20160822001/20160822144911.jpg
inflating: dataset/data/train/2/20160822001/20160822145034.jpg
inflating: dataset/data/train/2/20160822001/20160822145105.jpg
inflating: dataset/data/train/2/20160822001/20160822145141.jpg
inflating: dataset/data/train/2/20160822001/20160822145203.jpg
inflating: dataset/data/train/2/20160822001/20160822145208.jpg
inflating: dataset/data/train/2/20160822001/20160822145300.jpg
creating: dataset/data/train/2/20160824007/
inflating: dataset/data/train/2/20160824007/20160824150403.jpg
inflating: dataset/data/train/2/20160824007/20160824150540.jpg
inflating: dataset/data/train/2/20160824007/20160824150606.jpg
inflating: dataset/data/train/2/20160824007/20160824150638.jpg
inflating: dataset/data/train/2/20160824007/20160824150705.jpg
inflating: dataset/data/train/2/20160824007/20160824150708.jpg
inflating: dataset/data/train/2/20160824007/20160824150816.jpg
creating: dataset/data/train/2/20160825004/
inflating: dataset/data/train/2/20160825004/20160825173923.jpg
inflating: dataset/data/train/2/20160825004/20160825174136.jpg
inflating: dataset/data/train/2/20160825004/20160825174155.jpg
inflating: dataset/data/train/2/20160825004/20160825174210.jpg
inflating: dataset/data/train/2/20160825004/20160825174237.jpg
inflating: dataset/data/train/2/20160825004/20160825174239.jpg
inflating: dataset/data/train/2/20160825004/20160825174348.jpg
creating: dataset/data/train/2/20160907004/
inflating: dataset/data/train/2/20160907004/20160907150456.jpg
inflating: dataset/data/train/2/20160907004/20160907150633.jpg
inflating: dataset/data/train/2/20160907004/20160907150702.jpg
inflating: dataset/data/train/2/20160907004/20160907150723.jpg
inflating: dataset/data/train/2/20160907004/20160907150742.jpg
inflating: dataset/data/train/2/20160907004/20160907150748.jpg
inflating: dataset/data/train/2/20160907004/20160907150931.jpg
creating: dataset/data/train/2/20160908003/
inflating: dataset/data/train/2/20160908003/20160908104712.jpg
inflating: dataset/data/train/2/20160908003/20160908105024.jpg
inflating: dataset/data/train/2/20160908003/20160908105052.jpg
inflating: dataset/data/train/2/20160908003/20160908105123.jpg
inflating: dataset/data/train/2/20160908003/20160908105152.jpg
inflating: dataset/data/train/2/20160908003/20160908105153.jpg
inflating: dataset/data/train/2/20160908003/20160908105243.jpg
creating: dataset/data/train/2/20160912007/
inflating: dataset/data/train/2/20160912007/20160912151821.jpg
inflating: dataset/data/train/2/20160912007/20160912152011.jpg
inflating: dataset/data/train/2/20160912007/20160912152040.jpg
inflating: dataset/data/train/2/20160912007/20160912152055.jpg
inflating: dataset/data/train/2/20160912007/20160912152134.jpg
inflating: dataset/data/train/2/20160912007/20160912152135.jpg
inflating: dataset/data/train/2/20160912007/20160912152249.jpg
creating: dataset/data/train/2/20160914012/
inflating: dataset/data/train/2/20160914012/20160914163210.jpg
inflating: dataset/data/train/2/20160914012/20160914163346.jpg
inflating: dataset/data/train/2/20160914012/20160914163401.jpg
inflating: dataset/data/train/2/20160914012/20160914163421.jpg
inflating: dataset/data/train/2/20160914012/20160914163434.jpg
inflating: dataset/data/train/2/20160914012/20160914163435.jpg
inflating: dataset/data/train/2/20160914012/20160914163522.jpg
creating: dataset/data/train/2/20160914015/
inflating: dataset/data/train/2/20160914015/20160914170331.jpg
inflating: dataset/data/train/2/20160914015/20160914170505.jpg
inflating: dataset/data/train/2/20160914015/20160914170528.jpg
inflating: dataset/data/train/2/20160914015/20160914170558.jpg
inflating: dataset/data/train/2/20160914015/20160914170628.jpg
inflating: dataset/data/train/2/20160914015/20160914170629.jpg
inflating: dataset/data/train/2/20160914015/20160914170719.jpg
creating: dataset/data/train/2/20160920006/
inflating: dataset/data/train/2/20160920006/20160920170435.jpg
inflating: dataset/data/train/2/20160920006/20160920170553.jpg
inflating: dataset/data/train/2/20160920006/20160920170608.jpg
inflating: dataset/data/train/2/20160920006/20160920170625.jpg
inflating: dataset/data/train/2/20160920006/20160920170652.jpg
inflating: dataset/data/train/2/20160920006/20160920170653.jpg
inflating: dataset/data/train/2/20160920006/20160920170737.jpg
creating: dataset/data/train/2/20160927005/
inflating: dataset/data/train/2/20160927005/20160927161957.jpg
inflating: dataset/data/train/2/20160927005/20160927162205.jpg
inflating: dataset/data/train/2/20160927005/20160927162212.jpg
inflating: dataset/data/train/2/20160927005/20160927162237.jpg
inflating: dataset/data/train/2/20160927005/20160927162301.jpg
inflating: dataset/data/train/2/20160927005/20160927162302.jpg
inflating: dataset/data/train/2/20160927005/20160927162403.jpg
creating: dataset/data/train/2/20160928004/
inflating: dataset/data/train/2/20160928004/20160928102428.jpg
inflating: dataset/data/train/2/20160928004/20160928102546.jpg
inflating: dataset/data/train/2/20160928004/20160928102610.jpg
inflating: dataset/data/train/2/20160928004/20160928102640.jpg
inflating: dataset/data/train/2/20160928004/20160928102713.jpg
inflating: dataset/data/train/2/20160928004/20160928102715.jpg
inflating: dataset/data/train/2/20160928004/20160928102749.jpg
creating: dataset/data/train/2/20161008001/
inflating: dataset/data/train/2/20161008001/20161008095156.jpg
inflating: dataset/data/train/2/20161008001/20161008095338.jpg
inflating: dataset/data/train/2/20161008001/20161008095354.jpg
inflating: dataset/data/train/2/20161008001/20161008095426.jpg
inflating: dataset/data/train/2/20161008001/20161008095454.jpg
inflating: dataset/data/train/2/20161008001/20161008095455.jpg
inflating: dataset/data/train/2/20161008001/20161008095533.jpg
creating: dataset/data/train/2/20161010006/
inflating: dataset/data/train/2/20161010006/20161010150522.jpg
inflating: dataset/data/train/2/20161010006/20161010150658.jpg
inflating: dataset/data/train/2/20161010006/20161010150713.jpg
inflating: dataset/data/train/2/20161010006/20161010150753.jpg
inflating: dataset/data/train/2/20161010006/20161010150813.jpg
inflating: dataset/data/train/2/20161010006/20161010150814.jpg
inflating: dataset/data/train/2/20161010006/20161010150914.jpg
creating: dataset/data/train/3/
creating: dataset/data/train/3/090200510/
inflating: dataset/data/train/3/090200510/090200510Image0.jpg
inflating: dataset/data/train/3/090200510/090200510Image10.jpg
inflating: dataset/data/train/3/090200510/090200510Image11.jpg
inflating: dataset/data/train/3/090200510/090200510Image2.jpg
inflating: dataset/data/train/3/090200510/090200510Image3.jpg
inflating: dataset/data/train/3/090200510/090200510Image8.jpg
inflating: dataset/data/train/3/090200510/090200510Image9.jpg
creating: dataset/data/train/3/093518297/
inflating: dataset/data/train/3/093518297/093518297Image0.jpg
inflating: dataset/data/train/3/093518297/093518297Image2.jpg
inflating: dataset/data/train/3/093518297/093518297Image4.jpg
inflating: dataset/data/train/3/093518297/093518297Image5.jpg
inflating: dataset/data/train/3/093518297/093518297Image6.jpg
inflating: dataset/data/train/3/093518297/093518297Image7.jpg
inflating: dataset/data/train/3/093518297/093518297Image8.jpg
creating: dataset/data/train/3/101450780/
inflating: dataset/data/train/3/101450780/101450780Image0.jpg
inflating: dataset/data/train/3/101450780/101450780Image1.jpg
inflating: dataset/data/train/3/101450780/101450780Image2.jpg
inflating: dataset/data/train/3/101450780/101450780Image3.jpg
inflating: dataset/data/train/3/101450780/101450780Image4.jpg
inflating: dataset/data/train/3/101450780/101450780Image6.jpg
inflating: dataset/data/train/3/101450780/101450780Image9.jpg
creating: dataset/data/train/3/103336120/
inflating: dataset/data/train/3/103336120/103336120Image1.jpg
inflating: dataset/data/train/3/103336120/103336120Image10.jpg
inflating: dataset/data/train/3/103336120/103336120Image11.jpg
inflating: dataset/data/train/3/103336120/103336120Image12.jpg
inflating: dataset/data/train/3/103336120/103336120Image5.jpg
inflating: dataset/data/train/3/103336120/103336120Image6.jpg
inflating: dataset/data/train/3/103336120/103336120Image7.jpg
creating: dataset/data/train/3/114204650/
inflating: dataset/data/train/3/114204650/114204650Image10.jpg
inflating: dataset/data/train/3/114204650/114204650Image13.jpg
inflating: dataset/data/train/3/114204650/114204650Image14.jpg
inflating: dataset/data/train/3/114204650/114204650Image15.jpg
inflating: dataset/data/train/3/114204650/114204650Image4.jpg
inflating: dataset/data/train/3/114204650/114204650Image8.jpg
inflating: dataset/data/train/3/114204650/114204650Image9.jpg
creating: dataset/data/train/3/115924160/
inflating: dataset/data/train/3/115924160/115924160Image0.jpg
inflating: dataset/data/train/3/115924160/115924160Image1.jpg
inflating: dataset/data/train/3/115924160/115924160Image2.jpg
inflating: dataset/data/train/3/115924160/115924160Image5.jpg
inflating: dataset/data/train/3/115924160/115924160Image7.jpg
inflating: dataset/data/train/3/115924160/115924160Image8.jpg
inflating: dataset/data/train/3/115924160/115924160Image9.jpg
creating: dataset/data/train/3/145141110/
inflating: dataset/data/train/3/145141110/145141110Image0.jpg
inflating: dataset/data/train/3/145141110/145141110Image10.jpg
inflating: dataset/data/train/3/145141110/145141110Image2.jpg
inflating: dataset/data/train/3/145141110/145141110Image3.jpg
inflating: dataset/data/train/3/145141110/145141110Image5.jpg
inflating: dataset/data/train/3/145141110/145141110Image6.jpg
inflating: dataset/data/train/3/145141110/145141110Image9.jpg
creating: dataset/data/train/3/150023453/
inflating: dataset/data/train/3/150023453/150023453Image0.jpg
inflating: dataset/data/train/3/150023453/150023453Image10.jpg
inflating: dataset/data/train/3/150023453/150023453Image2.jpg
inflating: dataset/data/train/3/150023453/150023453Image4.jpg
inflating: dataset/data/train/3/150023453/150023453Image5.jpg
inflating: dataset/data/train/3/150023453/150023453Image6.jpg
inflating: dataset/data/train/3/150023453/150023453Image7.jpg
creating: dataset/data/train/3/20150914004/
inflating: dataset/data/train/3/20150914004/20150914155806.jpg
inflating: dataset/data/train/3/20150914004/20150914155948.jpg
inflating: dataset/data/train/3/20150914004/20150914160017.jpg
inflating: dataset/data/train/3/20150914004/20150914160049.jpg
inflating: dataset/data/train/3/20150914004/20150914160121.jpg
inflating: dataset/data/train/3/20150914004/20150914160123.jpg
inflating: dataset/data/train/3/20150914004/20150914160239.jpg
creating: dataset/data/train/3/20150930010/
inflating: dataset/data/train/3/20150930010/20150930160649.jpg
inflating: dataset/data/train/3/20150930010/20150930160831.jpg
inflating: dataset/data/train/3/20150930010/20150930160900.jpg
inflating: dataset/data/train/3/20150930010/20150930160926.jpg
inflating: dataset/data/train/3/20150930010/20150930161002.jpg
inflating: dataset/data/train/3/20150930010/20150930161006.jpg
inflating: dataset/data/train/3/20150930010/20150930161104.jpg
creating: dataset/data/train/3/20151020004/
inflating: dataset/data/train/3/20151020004/20151020160653.jpg
inflating: dataset/data/train/3/20151020004/20151020160843.jpg
inflating: dataset/data/train/3/20151020004/20151020160903.jpg
inflating: dataset/data/train/3/20151020004/20151020160928.jpg
inflating: dataset/data/train/3/20151020004/20151020161109.jpg
inflating: dataset/data/train/3/20151020004/20151020161110.jpg
inflating: dataset/data/train/3/20151020004/20151020161326.jpg
creating: dataset/data/train/3/20160303007/
inflating: dataset/data/train/3/20160303007/20160303173514.jpg
inflating: dataset/data/train/3/20160303007/20160303173705.jpg
inflating: dataset/data/train/3/20160303007/20160303173742.jpg
inflating: dataset/data/train/3/20160303007/20160303173758.jpg
inflating: dataset/data/train/3/20160303007/20160303173826.jpg
inflating: dataset/data/train/3/20160303007/20160303173829.jpg
inflating: dataset/data/train/3/20160303007/20160303173853.jpg
creating: dataset/data/train/3/20160323017/
inflating: dataset/data/train/3/20160323017/20160323151927.jpg
inflating: dataset/data/train/3/20160323017/20160323152105.jpg
inflating: dataset/data/train/3/20160323017/20160323152131.jpg
inflating: dataset/data/train/3/20160323017/20160323152201.jpg
inflating: dataset/data/train/3/20160323017/20160323152231.jpg
inflating: dataset/data/train/3/20160323017/20160323152240.jpg
inflating: dataset/data/train/3/20160323017/20160323152323.jpg
creating: dataset/data/train/3/20160406014/
inflating: dataset/data/train/3/20160406014/20160406155835.jpg
inflating: dataset/data/train/3/20160406014/20160406160044.jpg
inflating: dataset/data/train/3/20160406014/20160406160059.jpg
inflating: dataset/data/train/3/20160406014/20160406160125.jpg
inflating: dataset/data/train/3/20160406014/20160406160152.jpg
inflating: dataset/data/train/3/20160406014/20160406160153.jpg
inflating: dataset/data/train/3/20160406014/20160406160345.jpg
creating: dataset/data/train/3/20160418009/
inflating: dataset/data/train/3/20160418009/20160418154437.jpg
inflating: dataset/data/train/3/20160418009/20160418154633.jpg
inflating: dataset/data/train/3/20160418009/20160418154659.jpg
inflating: dataset/data/train/3/20160418009/20160418154732.jpg
inflating: dataset/data/train/3/20160418009/20160418154803.jpg
inflating: dataset/data/train/3/20160418009/20160418154810.jpg
inflating: dataset/data/train/3/20160418009/20160418154833.jpg
creating: dataset/data/train/3/20160427004/
inflating: dataset/data/train/3/20160427004/20160427143000.jpg
inflating: dataset/data/train/3/20160427004/20160427143137.jpg
inflating: dataset/data/train/3/20160427004/20160427143202.jpg
inflating: dataset/data/train/3/20160427004/20160427143231.jpg
inflating: dataset/data/train/3/20160427004/20160427143301.jpg
inflating: dataset/data/train/3/20160427004/20160427143305.jpg
inflating: dataset/data/train/3/20160427004/20160427143358.jpg
creating: dataset/data/train/3/20160427007/
inflating: dataset/data/train/3/20160427007/20160427151800.jpg
inflating: dataset/data/train/3/20160427007/20160427151938.jpg
inflating: dataset/data/train/3/20160427007/20160427152003.jpg
inflating: dataset/data/train/3/20160427007/20160427152036.jpg
inflating: dataset/data/train/3/20160427007/20160427152112.jpg
inflating: dataset/data/train/3/20160427007/20160427152113.jpg
inflating: dataset/data/train/3/20160427007/20160427152214.jpg
creating: dataset/data/train/3/20160606004/
inflating: dataset/data/train/3/20160606004/20160606160324.jpg
inflating: dataset/data/train/3/20160606004/20160606160448.jpg
inflating: dataset/data/train/3/20160606004/20160606160514.jpg
inflating: dataset/data/train/3/20160606004/20160606160545.jpg
inflating: dataset/data/train/3/20160606004/20160606160614.jpg
inflating: dataset/data/train/3/20160606004/20160606160616.jpg
inflating: dataset/data/train/3/20160606004/20160606160700.jpg
creating: dataset/data/train/3/20160612004/
inflating: dataset/data/train/3/20160612004/20160612164314.jpg
inflating: dataset/data/train/3/20160612004/20160612164454.jpg
inflating: dataset/data/train/3/20160612004/20160612164541.jpg
inflating: dataset/data/train/3/20160612004/20160612164608.jpg
inflating: dataset/data/train/3/20160612004/20160612164615.jpg
inflating: dataset/data/train/3/20160612004/20160612164618.jpg
inflating: dataset/data/train/3/20160612004/20160612164707.jpg
creating: dataset/data/train/3/20160617002/
inflating: dataset/data/train/3/20160617002/20160617152115.jpg
inflating: dataset/data/train/3/20160617002/20160617152452.jpg
inflating: dataset/data/train/3/20160617002/20160617152502.jpg
inflating: dataset/data/train/3/20160617002/20160617152536.jpg
inflating: dataset/data/train/3/20160617002/20160617152625.jpg
inflating: dataset/data/train/3/20160617002/20160617152628.jpg
inflating: dataset/data/train/3/20160617002/20160617152854.jpg
###Markdown
**Constants** For your environment, please modify the paths accordingly.
###Code
TRAIN_PATH = '/content/dataset/data/train/'
TEST_PATH = '/content/dataset/data/test/'
# TRAIN_PATH = 'dataset/data/train/'
# TEST_PATH = 'dataset/data/test/'
CROP_SIZE = 260
IMAGE_SIZE = 224
BATCH_SIZE = 100
prefix = '/content/drive/My Drive/Studiu doctorat leziuni cervicale/V2/Chekpoints & Notebooks/'
CHECKPOINT_NATURAL_IMG_MODEL = prefix + 'Cancer Detection MobileNetV2 All Natural Images Full Conv32-0.7 6 Dec.tar'
CHECKPOINT_GREEN_LENS_IMG_MODEL = prefix + 'Cancer_Detection_MobileNetV2_Green_Lens_2_Dec Full Conv 64 0.7.tar'
CHECKPOINT_IODINE_SOLUTION_IMG_MODEL = prefix + 'Cancer_Detection_MobileNetV2_Iodine_1_Dec Full Conv32.tar'
CHECKPOINT_ENSAMBLE = prefix + 'Cancer Detection - Ensable Conv 7 Dec.tar'
###Output
_____no_output_____
###Markdown
**Imports**
###Code
import torch as t
import torchvision as tv
import numpy as np
import PIL as pil
import matplotlib.pyplot as plt
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from torch.nn import Linear, BCEWithLogitsLoss
import sklearn as sk
import sklearn.metrics
from os import listdir
import time
import random
###Output
_____no_output_____
###Markdown
**Deterministic Measurements** This statements help making the experiments reproducible by fixing the random seeds. Despite fixing the random seeds, experiments are usually not reproducible using different PyTorch releases, commits, platforms or between CPU and GPU executions. Please find more details in the PyTorch documentation:https://pytorch.org/docs/stable/notes/randomness.html
###Code
SEED = 0
t.manual_seed(SEED)
t.cuda.manual_seed(SEED)
t.cuda.manual_seed_all(SEED)
t.backends.cudnn.deterministic = True
t.backends.cudnn.benchmark = False
np.random.seed(SEED)
random.seed(SEED)
###Output
_____no_output_____
###Markdown
**Memory Stats**
###Code
import GPUtil
def memory_stats():
for gpu in GPUtil.getGPUs():
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
memory_stats()
###Output
GPU RAM Free: 13985MB | Used: 2295MB | Util 14% | Total 16280MB
###Markdown
**Loading Data** The dataset is structured in multiple small folders of 7 images each. This generator iterates through the folders and returns the category and 7 paths: one for each image in the folder. The paths are ordered; the order is important since each folder contains 3 types of images, first 5 are with acetic acid solution and the last two are through a green lens and having iodine solution(a solution of a dark red color).
###Code
def sortByLastDigits(elem):
chars = [c for c in elem if c.isdigit()]
return 0 if len(chars) == 0 else int(''.join(chars))
def getImagesPaths(root_path):
for class_folder in [root_path + f for f in listdir(root_path)]:
category = int(class_folder[-1])
for case_folder in listdir(class_folder):
case_folder_path = class_folder + '/' + case_folder + '/'
img_files = [case_folder_path + file_name for file_name in listdir(case_folder_path)]
yield category, sorted(img_files, key = sortByLastDigits)
###Output
_____no_output_____
###Markdown
We define 4 datasets, which load: natural images, images taken through a green lens, images where the doctor applied iodine solution (which gives a dark red color) and all images. Each dataset has dynamic and static transformations which could be applied to the data. The static transformations are applied on the initialization of the dataset, while the dynamic ones are applied when loading each batch of data.
###Code
class SimpleImagesDataset(t.utils.data.Dataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
for i in range(5):
img = pil.Image.open(img_files[i])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
def __getitem__(self, i):
x, y = self.dataset[i]
if self.transforms_x != None:
x = self.transforms_x(x)
if self.transforms_y != None:
y = self.transforms_y(y)
return x, y
def __len__(self):
return len(self.dataset)
class GreenLensImagesDataset(SimpleImagesDataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
# Only the green lens image
img = pil.Image.open(img_files[-2])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
class RedImagesDataset(SimpleImagesDataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
# Only the green lens image
img = pil.Image.open(img_files[-1])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
class TransformsRand:
def __init__(self):
self.angle = random.random()
self.scale = random.random()
self.shear = random.random()
self.hflip = random.random()
class AllImagesDataset(t.utils.data.Dataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None, is_train = True):
self.is_train = is_train
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
imgs = []
for i in range(7):
img = pil.Image.open(img_files[i])
if transforms_x_static != None:
img = transforms_x_static(img)
imgs.append(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((imgs, category))
def __getitem__(self, i):
x, y = self.dataset[i]
if self.transforms_x != None:
if self.is_train:
rand = TransformsRand()
x = [self.transforms_x(_x, rand = rand) for _x in x]
# x = [self.transforms_x(_x) for _x in x]
else:
x = [self.transforms_x(_x) for _x in x]
if self.transforms_y != None:
y = self.transforms_y(y)
return x, y
def __len__(self):
return len(self.dataset)
###Output
_____no_output_____
###Markdown
**Preprocess Data** Convert pytorch tensor to numpy array.
###Code
def to_numpy(x):
return x.cpu().detach().numpy()
###Output
_____no_output_____
###Markdown
Data transformations for the test and training sets.
###Code
norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]
def custom_transforms(x, angle = 45, scale = (1., 2.), shear = 30, rand = None):
if rand == None:
rand = TransformsRand()
angle = angle * rand.angle
scale_value = scale[0] + ((scale[1] - scale[0]) * rand.scale)
shear = shear * rand.shear
x = tv.transforms.functional.affine(x, angle = angle, scale = scale_value, shear = shear, translate = [0, 0])
x = tv.transforms.functional.resize(x, IMAGE_SIZE)
if rand.hflip > .5:
x = tv.transforms.functional.hflip(x)
x = tv.transforms.functional.to_tensor(x).cuda()
x = tv.transforms.functional.normalize(x, mean=norm_mean, std=norm_std)
return x
transforms_train = tv.transforms.Compose([
tv.transforms.RandomAffine(degrees = 45, translate = None, scale = (1., 2.), shear = 30),
# tv.transforms.CenterCrop(CROP_SIZE),
tv.transforms.Resize(IMAGE_SIZE),
tv.transforms.RandomHorizontalFlip(),
tv.transforms.ToTensor(),
tv.transforms.Lambda(lambda t: t.cuda()),
tv.transforms.Normalize(mean=norm_mean, std=norm_std)
])
transforms_test = tv.transforms.Compose([
# tv.transforms.CenterCrop(CROP_SIZE),
tv.transforms.Resize(IMAGE_SIZE),
tv.transforms.ToTensor(),
tv.transforms.Normalize(mean=norm_mean, std=norm_std)
])
y_transform = tv.transforms.Lambda(lambda y: t.tensor(y, dtype=t.long, device = 'cuda:0'))
###Output
_____no_output_____
###Markdown
Initialize pytorch datasets and loaders for training and test.
###Code
def create_loaders():
dataset_train = AllImagesDataset(TRAIN_PATH, transforms_x_dynamic = custom_transforms, transforms_y_dynamic = y_transform)
dataset_test = AllImagesDataset(TEST_PATH, transforms_x_static = transforms_test,
transforms_x_dynamic = tv.transforms.Lambda(lambda t: t.cuda()), transforms_y_dynamic = y_transform, is_train = False)
loader_train = DataLoader(dataset_train, BATCH_SIZE, shuffle = True, num_workers = 0)
loader_test = DataLoader(dataset_test, BATCH_SIZE, shuffle = False, num_workers = 0)
return loader_train, loader_test, len(dataset_train), len(dataset_test)
loader_train_simple_img, loader_test_simple_img, len_train, len_test = create_loaders()
###Output
_____no_output_____
###Markdown
**Visualize Data** Load a few images so that we can see the effects of the data augmentation on the training set.
###Code
def plot_one_prediction(x, label, pred):
x, label, pred = to_numpy(x), to_numpy(label), to_numpy(pred)
x = np.transpose(x, [1, 2, 0])
if x.shape[-1] == 1:
x = x.squeeze()
x = x * np.array(norm_std) + np.array(norm_mean)
plt.title(label, color = 'green' if label == pred else 'red')
plt.imshow(x)
def plot_predictions(imgs, labels, preds):
fig = plt.figure(figsize = (20, 5))
for i in range(20):
fig.add_subplot(2, 10, i + 1, xticks = [], yticks = [])
plot_one_prediction(imgs[i], labels[i], preds[i])
# x, y = next(iter(loader_train_simple_img))
# for i in range(7):
# plot_predictions(x[i], y, y)
###Output
_____no_output_____
###Markdown
**Model** Define a few models to experiment with.
###Code
def get_mobilenet_v2():
model = t.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=True)
model.classifier[0] = t.nn.Dropout(p=0.9, inplace=False)
model.classifier[1] = Linear(in_features=1280, out_features=4, bias=True)
model.features[18].add_module('cnn_drop_18', t.nn.Dropout2d(p = .3))
model.features[17]._modules['conv'][1].add_module('cnn_drop_17', t.nn.Dropout2d(p = .2))
model.features[16]._modules['conv'][1].add_module('cnn_drop_16', t.nn.Dropout2d(p = .1))
model = model.cuda()
return model
def get_vgg_19():
model = tv.models.vgg19(pretrained = True)
model = model.cuda()
model.classifier[2].p = .9
model.classifier[6].out_features = 4
return model
def get_res_next_101():
model = t.hub.load('facebookresearch/WSL-Images', 'resnext101_32x8d_wsl')
model.fc = t.nn.Sequential(
t.nn.Dropout(p = .9),
t.nn.Linear(in_features=2048, out_features=4)
)
model = model.cuda()
return model
def get_resnet_18():
model = tv.models.resnet18(pretrained = True)
model.fc = t.nn.Sequential(
t.nn.Dropout(p = .9),
t.nn.Linear(in_features=512, out_features=4)
)
model = model.cuda()
return model
def get_dense_net():
model = tv.models.densenet121(pretrained = True)
model.classifier = t.nn.Sequential(
t.nn.Dropout(p = .9),
t.nn.Linear(in_features = 1024, out_features = 4)
)
model = model.cuda()
return model
###Output
_____no_output_____
###Markdown
Define ensemble.
###Code
class WrappedModel(t.nn.Module):
def __init__(self, module):
super().__init__()
self.module = module # that I actually define.
def forward(self, x):
return self.module(x)
class MobileNetV2_FullConv(t.nn.Module):
def __init__(self, end_channels):
super().__init__()
self.cnn = get_mobilenet_v2().features
self.cnn[18] = t.nn.Sequential(
tv.models.mobilenet.ConvBNReLU(320, end_channels, kernel_size=1)
)
self.fc = t.nn.Linear(end_channels, 4)
def forward(self, x):
x = self.cnn(x)
x = x.mean([2, 3])
x = self.fc(x);
return x
class Ensamble(t.nn.Module):
def __init__(self):
super().__init__()
self.model_simple = cnn_full_conv(CHECKPOINT_NATURAL_IMG_MODEL, 32).cnn
self.model_green = cnn_full_conv(CHECKPOINT_GREEN_LENS_IMG_MODEL, 64).cnn
self.model_red = cnn_full_conv(CHECKPOINT_IODINE_SOLUTION_IMG_MODEL, 32).cnn
channels = 32 * 5 + 64 + 32
fc_size = 32
self.classifier = tv.models.mobilenet.InvertedResidual(inp = channels, oup = fc_size, stride = 1, expand_ratio = 5)
self.classifier._modules['conv'][0].add_module('classifier_drop_1', t.nn.Dropout2d(p = .6))
self.classifier._modules['conv'][1].add_module('classifier_drop_1', t.nn.Dropout2d(p = .6))
self.fc = t.nn.Sequential(
# t.nn.Dropout(p = .2),
t.nn.Linear(fc_size, 4)
)
def forward(self, x):
x_list = []
for i in range(5):
x_list.append(self.model_simple(x[i]))
x_list.append(self.model_green(x[5]))
x_list.append(self.model_red(x[6]))
x_concat = t.cat(x_list, 1)
x_concat = self.classifier(x_concat)
x_concat = x_concat.mean([2, 3])
x_concat = self.fc(x_concat)
return x_concat
def cnn(checkpoint_path):
cnn = t.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=False)
cnn.classifier[0] = t.nn.Dropout(p=0, inplace=False)
cnn.classifier[1] = Linear(in_features=1280, out_features=4, bias=True)
checkpoint = t.load(checkpoint_path)
cnn.load_state_dict(checkpoint['model'])
for param in cnn.parameters():
param.requires_grad = False
return cnn
def cnn_from_data_parallel(checkpoint_path):
cnn = t.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=False)
cnn.classifier[0] = t.nn.Dropout(p=0, inplace=False)
cnn.classifier[1] = Linear(in_features=1280, out_features=4, bias=True)
cnn = WrappedModel(cnn)
checkpoint = t.load(checkpoint_path)
cnn.load_state_dict(checkpoint['model'])
for param in cnn.parameters():
param.requires_grad = False
return cnn
def cnn_full_conv(checkpoint_path, end_channels_nb):
cnn = MobileNetV2_FullConv(end_channels_nb)
checkpoint = t.load(checkpoint_path)
cnn.load_state_dict(checkpoint['model'])
for param in cnn.parameters():
param.requires_grad = False
return cnn
###Output
_____no_output_____
###Markdown
**Evaluate**
###Code
model_4_class = t.nn.DataParallel(Ensamble().cuda())
checkpoint_4_class = t.load(CHECKPOINT_ENSAMBLE)
model_4_class.load_state_dict(checkpoint_4_class['model'])
def evaluate(model, loader_test, len_test):
test_acc, test_precision, test_recall, test_f_score = 0, 0, 0, 0
model.eval()
with t.no_grad():
for x, y in loader_test:
y_pred = model.forward(x)
y_pred, y = to_numpy(y_pred), to_numpy(y)
pred = y_pred.argmax(axis = 1)
pred = pred > 1
y = y > 1
ratio = len(y) / len_test
test_acc += (sk.metrics.accuracy_score(y, pred) * ratio)
precision, recall, f_score, _ = sk.metrics.precision_recall_fscore_support(y, pred, average = 'macro')
test_precision += (precision * ratio)
test_recall += (recall * ratio)
test_f_score += (f_score * ratio)
print('Acc {} prec {} rec {} f {}'.format(test_acc, test_precision, test_recall, test_f_score))
return test_acc, test_precision, test_recall, test_f_score
evaluate(model_4_class, loader_test_simple_img, len_test)
###Output
Acc 0.8958333333333334 prec 0.8965217391304348 rec 0.8958333333333333 f 0.8957881024750325
|
Inference and Benchmark.ipynb | ###Markdown
Inference Using Normal Tensorflow model
###Code
# Load Tensorflow model
detect_fn = tf.saved_model.load(TF_SAVED_MODEL_PATH)
#Inference
a = detect_fn(np.random.normal(size=(1, , 600, 3)).astype(np.float32))
# Benchmark Tensorflow Model
tenf = benchmark(detect_fn, input_shape=(1,1200,1200,3))
###Output
Warm up ...
Start timing ...
Iteration 10/100, avg batch time 50.20 ms
Iteration 20/100, avg batch time 50.28 ms
Iteration 30/100, avg batch time 50.34 ms
Iteration 40/100, avg batch time 50.29 ms
Iteration 50/100, avg batch time 50.25 ms
Iteration 60/100, avg batch time 50.29 ms
Iteration 70/100, avg batch time 50.31 ms
Iteration 80/100, avg batch time 50.31 ms
Iteration 90/100, avg batch time 50.29 ms
Iteration 100/100, avg batch time 50.31 ms
Input shape: (1, 1200, 1200, 3)
Average batch time: 50.31 ms
###Markdown
Creating TensorRT Optimized Model
###Code
from tensorflow.python.client import device_lib
# Checks if TensorRT compatible GPU is present
def check_tensor_core_gpu_present():
local_device_protos = device_lib.list_local_devices()
for line in local_device_protos:
if "compute capability" in str(line):
compute_capability = float(line.physical_device_desc.split("compute capability: ")[-1])
if compute_capability>=7.0:
return True
print("Tensor Core GPU Present:", check_tensor_core_gpu_present())
tensor_core_gpu = check_tensor_core_gpu_present()
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(precision_mode=trt.TrtPrecisionMode.FP16,
max_workspace_size_bytes=8000000000)
converter = trt.TrtGraphConverterV2(input_saved_model_dir=TF_SAVED_MODEL_PATH,
conversion_params=conversion_params)
converter.convert()
converter.save(TRT_EXPORTED_MODEL)
print('Done Converting to TF-TRT FP16')
###Output
_____no_output_____
###Markdown
Inference from TensorRT Model:
###Code
#Load TensorRT Model
model = tf.saved_model.load(TRT_EXPORTED_MODEL)
func = model.signatures['serving_default']
#Sample image from internet
!wget -O test.jpg "https://health.clevelandclinic.org/wp-content/uploads/sites/3/2018/03/Polyp.png"
#Convert Image to Tensor
samp = input_ten('test.jpg')
## Inference output from model
output = func(samp)
process_output(output)
# Benchmark TensorRT Model
trt = benchmark(func, input_shape=(1,1200,1200,3))
###Output
Warm up ...
Start timing ...
Iteration 10/100, avg batch time 21.26 ms
Iteration 20/100, avg batch time 21.36 ms
Iteration 30/100, avg batch time 21.36 ms
Iteration 40/100, avg batch time 21.23 ms
Iteration 50/100, avg batch time 21.16 ms
Iteration 60/100, avg batch time 21.08 ms
Iteration 70/100, avg batch time 21.02 ms
Iteration 80/100, avg batch time 20.99 ms
Iteration 90/100, avg batch time 20.96 ms
Iteration 100/100, avg batch time 20.91 ms
Input shape: (1, 1200, 1200, 3)
Average batch time: 20.91 ms
###Markdown
Comparing
###Code
# Plotting Tensorflow vs TensorRT Inference Time plot.
fig = plt.figure(figsize = (10, 10))
y = [round(tenf,2),round(trt,2)]
plt.rcParams.update({'font.size': 15})
# creating the bar plot
barr = plt.bar(['TensorFLow','TensorRT'], y)
barr[0].set_color('orange')
barr[1].set_color('green')
plt.ylabel("Time taken for inference for one image (in ms)",fontdict=dict(fontsize=15))
plt.title("Comparision of Inference Time (Lower is better)",fontdict=dict(fontsize=15))
for index,data in enumerate(y):
plt.text(x=index , y =data +0.03, s=f"{data} ms" ,ha='center', fontdict=dict(fontsize=18))
plt.show()
###Output
_____no_output_____ |
python/d2l-en/pytorch/chapter_optimization/sgd.ipynb | ###Markdown
Stochastic Gradient Descent:label:`sec_sgd`In earlier chapters we kept using stochastic gradient descent in our training procedure, however, without explaining why it works.To shed some light on it,we just described the basic principles of gradient descentin :numref:`sec_gd`.In this section, we go on to discuss*stochastic gradient descent* in greater detail.
###Code
%matplotlib inline
import math
import torch
from d2l import torch as d2l
###Output
_____no_output_____
###Markdown
Stochastic Gradient UpdatesIn deep learning, the objective function is usually the average of the loss functions for each example in the training dataset.Given a training dataset of $n$ examples,we assume that $f_i(\mathbf{x})$ is the loss functionwith respect to the training example of index $i$,where $\mathbf{x}$ is the parameter vector.Then we arrive at the objective function$$f(\mathbf{x}) = \frac{1}{n} \sum_{i = 1}^n f_i(\mathbf{x}).$$The gradient of the objective function at $\mathbf{x}$ is computed as$$\nabla f(\mathbf{x}) = \frac{1}{n} \sum_{i = 1}^n \nabla f_i(\mathbf{x}).$$If gradient descent is used, the computational cost for each independent variable iteration is $\mathcal{O}(n)$, which grows linearly with $n$. Therefore, when the training dataset is larger, the cost of gradient descent for each iteration will be higher.Stochastic gradient descent (SGD) reduces computational cost at each iteration. At each iteration of stochastic gradient descent, we uniformly sample an index $i\in\{1,\ldots, n\}$ for data examples at random, and compute the gradient $\nabla f_i(\mathbf{x})$ to update $\mathbf{x}$:$$\mathbf{x} \leftarrow \mathbf{x} - \eta \nabla f_i(\mathbf{x}),$$where $\eta$ is the learning rate. We can see that the computational cost for each iteration drops from $\mathcal{O}(n)$ of the gradient descent to the constant $\mathcal{O}(1)$. Moreover, we want to emphasize that the stochastic gradient $\nabla f_i(\mathbf{x})$ is an unbiased estimate of the full gradient $\nabla f(\mathbf{x})$ because$$\mathbb{E}_i \nabla f_i(\mathbf{x}) = \frac{1}{n} \sum_{i = 1}^n \nabla f_i(\mathbf{x}) = \nabla f(\mathbf{x}).$$This means that, on average, the stochastic gradient is a good estimate of the gradient.Now, we will compare it with gradient descent by adding random noise with a mean of 0 and a variance of 1 to the gradient to simulate a stochastic gradient descent.
###Code
def f(x1, x2): # Objective function
return x1 ** 2 + 2 * x2 ** 2
def f_grad(x1, x2): # Gradient of the objective function
return 2 * x1, 4 * x2
def sgd(x1, x2, s1, s2, f_grad):
g1, g2 = f_grad(x1, x2)
# Simulate noisy gradient
g1 += torch.normal(0.0, 1, (1,))
g2 += torch.normal(0.0, 1, (1,))
eta_t = eta * lr()
return (x1 - eta_t * g1, x2 - eta_t * g2, 0, 0)
def constant_lr():
return 1
eta = 0.1
lr = constant_lr # Constant learning rate
d2l.show_trace_2d(f, d2l.train_2d(sgd, steps=50, f_grad=f_grad))
###Output
epoch 50, x1: 0.161185, x2: 0.143164
###Markdown
As we can see, the trajectory of the variables in the stochastic gradient descent is much more noisy than the one we observed in gradient descent in :numref:`sec_gd`. This is due to the stochastic nature of the gradient. That is, even when we arrive near the minimum, we are still subject to the uncertainty injected by the instantaneous gradient via $\eta \nabla f_i(\mathbf{x})$. Even after 50 steps the quality is still not so good. Even worse, it will not improve after additional steps (we encourage you to experiment with a larger number of steps to confirm this). This leaves us with the only alternative: change the learning rate $\eta$. However, if we pick this too small, we will not make any meaningful progress initially. On the other hand, if we pick it too large, we will not get a good solution, as seen above. The only way to resolve these conflicting goals is to reduce the learning rate *dynamically* as optimization progresses.This is also the reason for adding a learning rate function `lr` into the `sgd` step function. In the example above any functionality for learning rate scheduling lies dormant as we set the associated `lr` function to be constant. Dynamic Learning RateReplacing $\eta$ with a time-dependent learning rate $\eta(t)$ adds to the complexity of controlling convergence of an optimization algorithm. In particular, we need to figure out how rapidly $\eta$ should decay. If it is too quick, we will stop optimizing prematurely. If we decrease it too slowly, we waste too much time on optimization. The following are a few basic strategies that are used in adjusting $\eta$ over time (we will discuss more advanced strategies later):$$\begin{aligned} \eta(t) & = \eta_i \text{ if } t_i \leq t \leq t_{i+1} && \text{piecewise constant} \\ \eta(t) & = \eta_0 \cdot e^{-\lambda t} && \text{exponential decay} \\ \eta(t) & = \eta_0 \cdot (\beta t + 1)^{-\alpha} && \text{polynomial decay}\end{aligned}$$In the first *piecewise constant* scenario we decrease the learning rate, e.g., whenever progress in optimization stalls. This is a common strategy for training deep networks. Alternatively we could decrease it much more aggressively by an *exponential decay*. Unfortunately this often leads to premature stopping before the algorithm has converged. A popular choice is *polynomial decay* with $\alpha = 0.5$. In the case of convex optimization there are a number of proofs that show that this rate is well behaved.Let us see what the exponential decay looks like in practice.
###Code
def exponential_lr():
# Global variable that is defined outside this function and updated inside
global t
t += 1
return math.exp(-0.1 * t)
t = 1
lr = exponential_lr
d2l.show_trace_2d(f, d2l.train_2d(sgd, steps=1000, f_grad=f_grad))
###Output
epoch 1000, x1: -0.827536, x2: -0.041107
###Markdown
As expected, the variance in the parameters is significantly reduced. However, this comes at the expense of failing to converge to the optimal solution $\mathbf{x} = (0, 0)$. Even after 1000 iteration steps we are still very far away from the optimal solution. Indeed, the algorithm fails to converge at all. On the other hand, if we use a polynomial decay where the learning rate decays with the inverse square root of the number of steps, convergence gets better after only 50 steps.
###Code
def polynomial_lr():
# Global variable that is defined outside this function and updated inside
global t
t += 1
return (1 + 0.1 * t) ** (-0.5)
t = 1
lr = polynomial_lr
d2l.show_trace_2d(f, d2l.train_2d(sgd, steps=50, f_grad=f_grad))
###Output
epoch 50, x1: 0.124031, x2: 0.040822
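The schedule table above also listed a *piecewise constant* strategy that the notebook never runs. Written in the same global-`t` style as `exponential_lr` and `polynomial_lr`, a minimal sketch would look like the following; the boundaries and rates are illustrative assumptions, not values from the text.

```python
def piecewise_lr():
    # illustrative boundaries and rates: drop the learning rate at fixed steps
    global t
    t += 1
    if t < 20:
        return 1.0
    elif t < 40:
        return 0.1
    else:
        return 0.01

t = 1
lr = piecewise_lr
d2l.show_trace_2d(f, d2l.train_2d(sgd, steps=50, f_grad=f_grad))
```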
|
coding snippets/model_testing.ipynb | ###Markdown
Simulate data based on the model.Simulate drawing at random from a population of whom 26% are Black.
###Code
# imports assumed for a standalone run: the data8 `datascience` helpers plus numpy/matplotlib
import numpy as np
from datascience import *
import matplotlib.pyplot as plots
%matplotlib inline

# There are 100 panelists
sample_size = 100
# These are chosen from a population whose
# demographic proportions are roughly 26% black, 74% white
# let's make a list
eligible_population = [0.26, 0.74]
# we will use sample_proportions
# feed it the sample size, the list or proportions
# the categories in the output array are in the same
# order as the input, so we want item(0) to know the
# proportion of the 100 person panel that is black
sample_proportions(sample_size, eligible_population).item(0)
# you'll notice the proportion of black panelists
# varies with each sample. But are there any as low as 0.08?
# if we want to know the NUMBER of black panelists in a random sample,
# we multiply the proportion by the sample size.
# let's write a function to do this.
def one_simulated_count():
return sample_size * sample_proportions(sample_size, eligible_population).item(0)
###Output
_____no_output_____
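For readers without the `datascience` library installed, the same draw can be sketched with plain NumPy; here `np.random.multinomial` stands in for `sample_proportions` (this equivalent is an assumption for illustration, not code from the original cell).

```python
import numpy as np

# one simulated panel of 100 people drawn from the 26% / 74% population
draw = np.random.multinomial(sample_size, eligible_population)
black_count = draw[0]   # corresponds to item(0) in the cell above
print(black_count, black_count / sample_size)
```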
###Markdown
Simulating Multiple Values of the Statistic
###Code
# Now, we want to generate 10000 simulations
# and see how they vary.
# let's do this with a for loop
# create an empty array
# populate it with results from each simulation
counts = make_array()
repetitions = 10000
for i in np.arange(repetitions):
counts = np.append(counts, one_simulated_count())
###Output
_____no_output_____
###Markdown
What the Model Predicts
###Code
# generate an empirical histogram of simulated counts.
Table().with_column(
'Count in a Random Sample', counts
).hist(bins = np.arange(5.5, 46.6, 1))
###Output
_____no_output_____
###Markdown
Comparing the Predicted and Observed Data
###Code
Table().with_column(
'Count in a Random Sample', counts
).hist(bins = np.arange(5.5, 46.6, 1))
plots.ylim(-0.002, 0.09)
plots.scatter(8, 0, color='red', s=30);
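# (added sketch, not part of the original cell) quantify the comparison:
# the fraction of simulated counts at or below the observed count of 8
# black panelists, i.e. the value marked by the red dot above
empirical_p = np.count_nonzero(counts <= 8) / repetitions
print('Proportion of simulations with 8 or fewer black panelists:', empirical_p)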
###Output
_____no_output_____ |
01.getting-started/01.train-within-notebook/01.train-within-notebook.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 01. Train in the Notebook & Deploy Model to ACI* Load workspace* Train a simple regression model directly in the Notebook Python kernel* Record run history* Find the best model in run history and download it.* Deploy the model as an Azure Container Instance (ACI) Prerequisites1. Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't. 2. Install the following prerequisite libraries into your conda environment and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```3. Check that ACI is registered for your Azure Subscription.
###Code
!az provider show -n Microsoft.ContainerInstance -o table
###Output
_____no_output_____
###Markdown
If ACI is not registered, run the following command to register it. Note that you have to be a subscription owner, or this command will fail.
###Code
!az provider register -n Microsoft.ContainerInstance
###Output
_____no_output_____
###Markdown
Validate Azure ML SDK installation and get version number for debugging purposes
###Code
from azureml.core import Experiment, Run, Workspace
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment nameChoose a name for experiment.
###Code
experiment_name = 'train-in-notebook'
###Output
_____no_output_____
###Markdown
Start a training run in local Notebook
###Code
# load diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
###Output
_____no_output_____
###Markdown
Train a simple Ridge modelTrain a very simple Ridge regression model in scikit-learn, and save it as a pickle file.
###Code
reg = Ridge(alpha = 0.03)
reg.fit(X=data['train']['X'], y=data['train']['y'])
preds = reg.predict(data['test']['X'])
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl');
###Output
_____no_output_____
###Markdown
Add experiment trackingNow, let's add Azure ML experiment logging, and upload persisted model into run record as well.
###Code
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
run.tag("Description","My first run!")
run.log('alpha', 0.03)
reg = Ridge(alpha=0.03)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
run.log('mse', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl')
run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page.
###Code
run
###Output
_____no_output_____
###Markdown
Simple parameter sweepSweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment.
###Code
import numpy as np
import os
from tqdm import tqdm
model_name = "model.pkl"
# list of numbers from 0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
# try a bunch of alpha values in a Linear Regression (Ridge) model
for alpha in tqdm(alphas):
# create a bunch of runs, each train a model with a different alpha value
with experiment.start_logging() as run:
# Use Ridge algorithm to build a regression model
reg = Ridge(alpha=alpha)
reg.fit(X=data["train"]["X"], y=data["train"]["y"])
preds = reg.predict(X=data["test"]["X"])
mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)
# log alpha, mean_squared_error and feature names in run history
run.log(name="alpha", value=alpha)
run.log(name="mse", value=mse)
run.log_list(name="columns", value=columns)
with open(model_name, "wb") as file:
joblib.dump(value=reg, filename=file)
# upload the serialized model into run history record
run.upload_file(name="outputs/" + model_name, path_or_stream=model_name)
# now delete the serialized model from local folder since it is already uploaded to run history
os.remove(path=model_name)
###Output
_____no_output_____
###Markdown
Select best model from the experimentLoad all experiment run metrics recursively from the experiment into a dictionary object.
###Code
runs = {}
run_metrics = {}
for r in tqdm(experiment.get_runs()):
metrics = r.get_metrics()
if 'mse' in metrics.keys():
runs[r.id] = r
run_metrics[r.id] = metrics
###Output
_____no_output_____
###Markdown
Now find the run with the lowest Mean Squared Error value
###Code
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
###Output
_____no_output_____
###Markdown
You can add tags to your runs to make them easier to catalog
###Code
best_run.tag(key="Description", value="The best one")
best_run.get_tags()
###Output
_____no_output_____
###Markdown
Plot MSE over alphaLet's observe the best model visually by plotting the MSE values over alpha values:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
best_alpha = run_metrics[best_run_id]['alpha']
min_mse = run_metrics[best_run_id]['mse']
alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])
sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')
plt.xlabel('alpha', fontsize = 14)
plt.ylabel('mean squared error', fontsize = 14)
plt.title('MSE over alpha', fontsize = 16)
# plot arrow
plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,
width = 0, head_width = .03, head_length = 8)
# plot "best run" text
plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)
plt.show()
###Output
_____no_output_____
###Markdown
Register the best model Find the model file saved in the run record of best run.
###Code
for f in best_run.get_file_names():
print(f)
###Output
_____no_output_____
###Markdown
Now we can register this model in the model registry of the workspace
###Code
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')
###Output
_____no_output_____
###Markdown
Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time.
###Code
models = ws.models(name='best_model')
for m in models:
print(m.name, m.version)
###Output
_____no_output_____
###Markdown
You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like.
###Code
# remove the model file if it is already on disk
if os.path.isfile('model.pkl'):
os.remove('model.pkl')
# download the model
model.download(target_dir="./")
###Output
_____no_output_____
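As a quick local sanity check (a sketch, not part of the original sample), the downloaded pickle can be loaded and scored against the held-out split that is already in memory from the earlier cells:

```python
from sklearn.externals import joblib
from sklearn.metrics import mean_squared_error

local_model = joblib.load('model.pkl')   # the file downloaded by the cell above
local_preds = local_model.predict(data['test']['X'])
print('Local MSE of the downloaded model:', mean_squared_error(data['test']['y'], local_preds))
```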
###Markdown
Scoring scriptNow we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file.The scoring script consists of two functions: `init`, which loads the model into memory when the container starts, and `run`, which makes the prediction when the web service is called. Please pay special attention to how the model is loaded in the `init()` function. When the Docker image is built for this model, the actual model file is downloaded and placed on disk, and the `get_model_path` function returns the local path where the model is placed.
###Code
with open('./score.py', 'r') as scoring_script:
print(scoring_script.read())
###Output
_____no_output_____
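The repository's `score.py` is what the cell above prints. Purely as an illustration of the two-function contract described here, a minimal scoring script for this Ridge model might look like the sketch below; the model name `best_model` matches the registration step in this notebook, but everything else is an assumption rather than the shipped file.

```python
import json
import numpy as np
from sklearn.externals import joblib
from azureml.core.model import Model

def init():
    global model
    # resolves to wherever the registered model file was placed inside the image
    model_path = Model.get_model_path('best_model')
    model = joblib.load(model_path)

def run(raw_data):
    try:
        # expects the same payload shape used later in this notebook: {"data": [[...], ...]}
        data = np.array(json.loads(raw_data)['data'])
        result = model.predict(data)
        return json.dumps({'result': result.tolist()})
    except Exception as e:
        return json.dumps({'error': str(e)})
```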
###Markdown
Create environment dependency fileWe need an environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can create this file manually, or we can use the `CondaDependencies` API to generate it automatically.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")
print(myenv.serialize_to_string())
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
Deploy web service into an Azure Container InstanceThe deployment process takes the registered model and your scoring script, and builds a Docker image. It then deploys the Docker image into an Azure Container Instance as a running container with an HTTP endpoint ready for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/).Note that ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow the instructions in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. **Note:** The web service creation can take 6-7 minutes.
###Code
from azureml.core.webservice import AciWebservice, Webservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'sample name': 'AML 101'},
description='This is a great example.')
###Output
_____no_output_____
###Markdown
Note that the `WebService.deploy_from_model()` function below takes a model object registered under the workspace. It then bakes the model file into the Docker image so it can be looked up using the `Model.get_model_path()` function in `score.py`. If you have a local model file instead of a registered model object, you can also use the `WebService.deploy()` function, which registers the model and then deploys it.
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="myenv.yml")
%%time
# this will take 5-10 minutes to finish
# you can also use "az container list" command to find the ACI being deployed
service = Webservice.deploy_from_model(name='my-aci-svc',
deployment_config=aciconfig,
models=[model],
image_config=image_config,
workspace=ws)
service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Test web service
###Code
print('web service is hosted in ACI:', service.scoring_uri)
###Output
_____no_output_____
###Markdown
Use the `run` API to call the web service with one row of data to get a prediction.
###Code
import json
# score the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
service.run(input_data = test_samples)
###Output
_____no_output_____
###Markdown
Feed the entire test set and calculate the errors (residual values).
###Code
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})
result = json.loads(service.run(input_data = test_samples))['result']
residual = result - y_test
###Output
_____no_output_____
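A one-line sketch (not in the original sample) to summarize those residuals numerically, using the `residual` array just computed:

```python
import numpy as np
print('Web service MSE on the test set:', np.mean(np.square(residual)))
```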
###Markdown
You can also send a raw HTTP request to test the web service.
###Code
import requests
import json
# 2 rows of input data, each with 10 made-up numerical features
input_data = "{\"data\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}"
headers = {'Content-Type':'application/json'}
# for AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers = headers)
print(resp.text)
###Output
_____no_output_____
###Markdown
Residual graphPlot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve.
###Code
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)
a0.plot(residual, 'bo', alpha=0.4);
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residue values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)
a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);
a1.set_yticklabels([])
plt.show()
###Output
_____no_output_____
###Markdown
Delete ACI to clean up Deleting ACI is super fast!
###Code
%%time
service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 01. Train in the Notebook & Deploy Model to ACI* Load workspace* Train a simple regression model directly in the Notebook Python kernel* Record run history* Find the best model in run history and download it.* Deploy the model as an Azure Container Instance (ACI) Prerequisites1. Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't. 2. Install the following prerequisite libraries into your conda environment and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```3. Check that ACI is registered for your Azure Subscription.
###Code
!az provider show -n Microsoft.ContainerInstance -o table
###Output
_____no_output_____
###Markdown
If ACI is not registered, run the following command to register it. Note that you have to be a subscription owner, or this command will fail.
###Code
!az provider register -n Microsoft.ContainerInstance
###Output
_____no_output_____
###Markdown
Validate Azure ML SDK installation and get version number for debugging purposes
###Code
from azureml.core import Experiment, Run, Workspace
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment nameChoose a name for experiment.
###Code
experiment_name = 'train-in-notebook'
###Output
_____no_output_____
###Markdown
Start a training run in local Notebook
###Code
# load diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
###Output
_____no_output_____
###Markdown
Train a simple Ridge modelTrain a very simple Ridge regression model in scikit-learn, and save it as a pickle file.
###Code
reg = Ridge(alpha = 0.03)
reg.fit(X=data['train']['X'], y=data['train']['y'])
preds = reg.predict(data['test']['X'])
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl');
###Output
_____no_output_____
###Markdown
Add experiment trackingNow, let's add Azure ML experiment logging, and upload persisted model into run record as well.
###Code
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
run.tag("Description","My first run!")
run.log('alpha', 0.03)
reg = Ridge(alpha=0.03)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
run.log('mse', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl')
run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page.
###Code
run
###Output
_____no_output_____
###Markdown
Simple parameter sweepSweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment.
###Code
import numpy as np
import os
from tqdm import tqdm
model_name = "model.pkl"
# list of numbers from 0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
# try a bunch of alpha values in a Linear Regression (Ridge) model
for alpha in tqdm(alphas):
# create a bunch of runs, each train a model with a different alpha value
with experiment.start_logging() as run:
# Use Ridge algorithm to build a regression model
reg = Ridge(alpha=alpha)
reg.fit(X=data["train"]["X"], y=data["train"]["y"])
preds = reg.predict(X=data["test"]["X"])
mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)
# log alpha, mean_squared_error and feature names in run history
run.log(name="alpha", value=alpha)
run.log(name="mse", value=mse)
run.log_list(name="columns", value=columns)
with open(model_name, "wb") as file:
joblib.dump(value=reg, filename=file)
# upload the serialized model into run history record
run.upload_file(name="outputs/" + model_name, path_or_stream=model_name)
# now delete the serialized model from local folder since it is already uploaded to run history
os.remove(path=model_name)
# now let's take a look at the experiment in Azure portal.
experiment
###Output
_____no_output_____
###Markdown
Select best model from the experimentLoad all experiment run metrics recursively from the experiment into a dictionary object.
###Code
runs = {}
run_metrics = {}
for r in tqdm(experiment.get_runs()):
metrics = r.get_metrics()
if 'mse' in metrics.keys():
runs[r.id] = r
run_metrics[r.id] = metrics
###Output
_____no_output_____
###Markdown
Now find the run with the lowest Mean Squared Error value
###Code
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
###Output
_____no_output_____
###Markdown
You can add tags to your runs to make them easier to catalog
###Code
best_run.tag(key="Description", value="The best one")
best_run.get_tags()
###Output
_____no_output_____
###Markdown
Plot MSE over alphaLet's observe the best model visually by plotting the MSE values over alpha values:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
best_alpha = run_metrics[best_run_id]['alpha']
min_mse = run_metrics[best_run_id]['mse']
alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])
sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')
plt.xlabel('alpha', fontsize = 14)
plt.ylabel('mean squared error', fontsize = 14)
plt.title('MSE over alpha', fontsize = 16)
# plot arrow
plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,
width = 0, head_width = .03, head_length = 8)
# plot "best run" text
plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)
plt.show()
###Output
_____no_output_____
###Markdown
Register the best model Find the model file saved in the run record of best run.
###Code
for f in best_run.get_file_names():
print(f)
###Output
_____no_output_____
###Markdown
Now we can register this model in the model registry of the workspace
###Code
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')
###Output
_____no_output_____
###Markdown
Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time.
###Code
from azureml.core.model import Model
models = Model.list(workspace=ws, name='best_model')
for m in models:
print(m.name, m.version)
###Output
_____no_output_____
###Markdown
You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like.
###Code
# remove the model file if it is already on disk
if os.path.isfile('model.pkl'):
os.remove('model.pkl')
# download the model
model.download(target_dir="./")
###Output
_____no_output_____
###Markdown
Scoring scriptNow we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file.The scoring script consists of two functions: `init`, which loads the model into memory when the container starts, and `run`, which makes the prediction when the web service is called. Please pay special attention to how the model is loaded in the `init()` function. When the Docker image is built for this model, the actual model file is downloaded and placed on disk, and the `get_model_path` function returns the local path where the model is placed.
###Code
with open('./score.py', 'r') as scoring_script:
print(scoring_script.read())
###Output
_____no_output_____
###Markdown
Create environment dependency fileWe need an environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can create this file manually, or we can use the `CondaDependencies` API to generate it automatically.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")
myenv.add_pip_package("pynacl==1.2.1")
print(myenv.serialize_to_string())
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
Deploy web service into an Azure Container InstanceThe deployment process takes the registered model and your scoring script, and builds a Docker image. It then deploys the Docker image into an Azure Container Instance as a running container with an HTTP endpoint ready for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/).Note that ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow the instructions in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. **Note:** The web service creation can take 6-7 minutes.
###Code
from azureml.core.webservice import AciWebservice, Webservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'sample name': 'AML 101'},
description='This is a great example.')
###Output
_____no_output_____
###Markdown
Note that the `WebService.deploy_from_model()` function below takes a model object registered under the workspace. It then bakes the model file into the Docker image so it can be looked up using the `Model.get_model_path()` function in `score.py`. If you have a local model file instead of a registered model object, you can also use the `WebService.deploy()` function, which registers the model and then deploys it.
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="myenv.yml")
%%time
# this will take 5-10 minutes to finish
# you can also use "az container list" command to find the ACI being deployed
service = Webservice.deploy_from_model(name='my-aci-svc',
deployment_config=aciconfig,
models=[model],
image_config=image_config,
workspace=ws)
service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Test web service
###Code
print('web service is hosted in ACI:', service.scoring_uri)
###Output
_____no_output_____
###Markdown
Use the `run` API to call the web service with one row of data to get a prediction.
###Code
import json
# score the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
service.run(input_data = test_samples)
###Output
_____no_output_____
###Markdown
Feed the entire test set and calculate the errors (residual values).
###Code
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})
result = json.loads(service.run(input_data = test_samples))['result']
residual = result - y_test
###Output
_____no_output_____
###Markdown
You can also send a raw HTTP request to test the web service.
###Code
import requests
import json
# 2 rows of input data, each with 10 made-up numerical features
input_data = "{\"data\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}"
headers = {'Content-Type':'application/json'}
# for AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers = headers)
print(resp.text)
###Output
_____no_output_____
###Markdown
Residual graphPlot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve.
###Code
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)
a0.plot(residual, 'bo', alpha=0.4);
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residue values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)
a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);
a1.set_yticklabels([])
plt.show()
###Output
_____no_output_____
###Markdown
Delete ACI to clean up Deleting ACI is super fast!
###Code
%%time
service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 01. Train in the Notebook & Deploy Model to ACI* Load workspace* Train a simple regression model directly in the Notebook Python kernel* Record run history* Find the best model in run history and download it.* Deploy the model as an Azure Container Instance (ACI) Prerequisites1. Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't. 2. Install the following prerequisite libraries into your conda environment and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```3. Check that ACI is registered for your Azure Subscription.
###Code
!az provider show -n Microsoft.ContainerInstance -o table
###Output
_____no_output_____
###Markdown
If ACI is not registered, run the following command to register it. Note that you have to be a subscription owner, or this command will fail.
###Code
!az provider register -n Microsoft.ContainerInstance
###Output
_____no_output_____
###Markdown
Validate Azure ML SDK installation and get version number for debugging purposes
###Code
from azureml.core import Experiment, Run, Workspace
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment nameChoose a name for experiment.
###Code
experiment_name = 'train-in-notebook'
###Output
_____no_output_____
###Markdown
Start a training run in local Notebook
###Code
# load diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
###Output
_____no_output_____
###Markdown
Train a simple Ridge modelTrain a very simple Ridge regression model in scikit-learn, and save it as a pickle file.
###Code
reg = Ridge(alpha = 0.03)
reg.fit(X=data['train']['X'], y=data['train']['y'])
preds = reg.predict(data['test']['X'])
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl');
###Output
_____no_output_____
###Markdown
Add experiment trackingNow, let's add Azure ML experiment logging, and upload persisted model into run record as well.
###Code
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
run.tag("Description","My first run!")
run.log('alpha', 0.03)
reg = Ridge(alpha=0.03)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
run.log('mse', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl')
run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page.
###Code
run
###Output
_____no_output_____
###Markdown
Simple parameter sweepSweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment.
###Code
import numpy as np
import os
from tqdm import tqdm
model_name = "model.pkl"
# list of numbers from 0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
# try a bunch of alpha values in a Linear Regression (Ridge) model
for alpha in tqdm(alphas):
# create a bunch of runs, each train a model with a different alpha value
with experiment.start_logging() as run:
# Use Ridge algorithm to build a regression model
reg = Ridge(alpha=alpha)
reg.fit(X=data["train"]["X"], y=data["train"]["y"])
preds = reg.predict(X=data["test"]["X"])
mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)
# log alpha, mean_squared_error and feature names in run history
run.log(name="alpha", value=alpha)
run.log(name="mse", value=mse)
run.log_list(name="columns", value=columns)
with open(model_name, "wb") as file:
joblib.dump(value=reg, filename=file)
# upload the serialized model into run history record
run.upload_file(name="outputs/" + model_name, path_or_stream=model_name)
# now delete the serialized model from local folder since it is already uploaded to run history
os.remove(path=model_name)
# now let's take a look at the experiment in Azure portal.
experiment
###Output
_____no_output_____
###Markdown
Select best model from the experimentLoad all experiment run metrics recursively from the experiment into a dictionary object.
###Code
runs = {}
run_metrics = {}
for r in tqdm(experiment.get_runs()):
metrics = r.get_metrics()
if 'mse' in metrics.keys():
runs[r.id] = r
run_metrics[r.id] = metrics
###Output
_____no_output_____
###Markdown
Now find the run with the lowest Mean Squared Error value
###Code
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
###Output
_____no_output_____
###Markdown
You can add tags to your runs to make them easier to catalog
###Code
best_run.tag(key="Description", value="The best one")
best_run.get_tags()
###Output
_____no_output_____
###Markdown
Plot MSE over alphaLet's observe the best model visually by plotting the MSE values over alpha values:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
best_alpha = run_metrics[best_run_id]['alpha']
min_mse = run_metrics[best_run_id]['mse']
alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])
sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')
plt.xlabel('alpha', fontsize = 14)
plt.ylabel('mean squared error', fontsize = 14)
plt.title('MSE over alpha', fontsize = 16)
# plot arrow
plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,
width = 0, head_width = .03, head_length = 8)
# plot "best run" text
plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)
plt.show()
###Output
_____no_output_____
###Markdown
Register the best model Find the model file saved in the run record of best run.
###Code
for f in best_run.get_file_names():
print(f)
###Output
_____no_output_____
###Markdown
Now we can register this model in the model registry of the workspace
###Code
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')
###Output
_____no_output_____
###Markdown
Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time.
###Code
from azureml.core.model import Model
models = Model.list(workspace=ws, name='best_model')
for m in models:
print(m.name, m.version)
###Output
_____no_output_____
###Markdown
You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like.
###Code
# remove the model file if it is already on disk
if os.path.isfile('model.pkl'):
os.remove('model.pkl')
# download the model
model.download(target_dir="./")
###Output
_____no_output_____
###Markdown
Scoring scriptNow we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file.The scoring script consists of two functions: `init`, which loads the model into memory when the container starts, and `run`, which makes the prediction when the web service is called. Please pay special attention to how the model is loaded in the `init()` function. When the Docker image is built for this model, the actual model file is downloaded and placed on disk, and the `get_model_path` function returns the local path where the model is placed.
###Code
with open('./score.py', 'r') as scoring_script:
print(scoring_script.read())
###Output
_____no_output_____
###Markdown
Create environment dependency fileWe need an environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can create this file manually, or we can use the `CondaDependencies` API to generate it automatically.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")
print(myenv.serialize_to_string())
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
Deploy web service into an Azure Container InstanceThe deployment process takes the registered model and your scoring script, and builds a Docker image. It then deploys the Docker image into an Azure Container Instance as a running container with an HTTP endpoint ready for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/).Note that ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow the instructions in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. **Note:** The web service creation can take 6-7 minutes.
###Code
from azureml.core.webservice import AciWebservice, Webservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'sample name': 'AML 101'},
description='This is a great example.')
###Output
_____no_output_____
###Markdown
Note that the `WebService.deploy_from_model()` function below takes a model object registered under the workspace. It then bakes the model file into the Docker image so it can be looked up using the `Model.get_model_path()` function in `score.py`. If you have a local model file instead of a registered model object, you can also use the `WebService.deploy()` function, which registers the model and then deploys it.
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="myenv.yml")
%%time
# this will take 5-10 minutes to finish
# you can also use "az container list" command to find the ACI being deployed
service = Webservice.deploy_from_model(name='my-aci-svc',
deployment_config=aciconfig,
models=[model],
image_config=image_config,
workspace=ws)
service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Test web service
###Code
print('web service is hosted in ACI:', service.scoring_uri)
###Output
_____no_output_____
###Markdown
Use the `run` API to call the web service with one row of data to get a prediction.
###Code
import json
# score the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
service.run(input_data = test_samples)
###Output
_____no_output_____
###Markdown
Feed the entire test set and calculate the errors (residual values).
###Code
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})
result = json.loads(service.run(input_data = test_samples))
residual = result - y_test
###Output
_____no_output_____
###Markdown
You can also send a raw HTTP request to test the web service.
###Code
import requests
import json
# 2 rows of input data, each with 10 made-up numerical features
input_data = "{\"data\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}"
headers = {'Content-Type':'application/json'}
# for AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers = headers)
print(resp.text)
###Output
_____no_output_____
###Markdown
Residual graphPlot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve.
###Code
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)
a0.plot(residual, 'bo', alpha=0.4);
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residue values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)
a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);
a1.set_yticklabels([])
plt.show()
###Output
_____no_output_____
###Markdown
Delete ACI to clean up Deleting ACI is super fast!
###Code
%%time
service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 01. Train in the Notebook & Deploy Model to ACI* Load workspace* Train a simple regression model directly in the Notebook Python kernel* Record run history* Find the best model in run history and download it.* Deploy the model as an Azure Container Instance (ACI) Prerequisites1. Make sure you go through the [00. Installation and Configuration](../../00.configuration.ipynb) Notebook first if you haven't. 2. Install the following prerequisite libraries into your conda environment and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```3. Check that ACI is registered for your Azure Subscription.
###Code
!az provider show -n Microsoft.ContainerInstance -o table
###Output
_____no_output_____
###Markdown
If ACI is not registered, run the following command to register it. Note that you have to be a subscription owner, or this command will fail.
###Code
!az provider register -n Microsoft.ContainerInstance
###Output
_____no_output_____
###Markdown
Validate Azure ML SDK installation and get version number for debugging purposes
###Code
from azureml.core import Experiment, Run, Workspace
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment nameChoose a name for experiment.
###Code
experiment_name = 'train-in-notebook'
###Output
_____no_output_____
###Markdown
Start a training run in local Notebook
###Code
# load diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
###Output
_____no_output_____
###Markdown
Train a simple Ridge modelTrain a very simple Ridge regression model in scikit-learn, and save it as a pickle file.
###Code
reg = Ridge(alpha = 0.03)
reg.fit(X=data['train']['X'], y=data['train']['y'])
preds = reg.predict(data['test']['X'])
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl');
###Output
_____no_output_____
###Markdown
Add experiment trackingNow, let's add Azure ML experiment logging, and upload persisted model into run record as well.
###Code
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
run.tag("Description","My first run!")
run.log('alpha', 0.03)
reg = Ridge(alpha=0.03)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
run.log('mse', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl')
run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page.
###Code
run
###Output
_____no_output_____
###Markdown
Simple parameter sweepSweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment.
###Code
import numpy as np
import os
from tqdm import tqdm
model_name = "model.pkl"
# list of numbers from 0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
# try a bunch of alpha values in a Linear Regression (Ridge) model
for alpha in tqdm(alphas):
# create a bunch of runs, each train a model with a different alpha value
with experiment.start_logging() as run:
# Use Ridge algorithm to build a regression model
reg = Ridge(alpha=alpha)
reg.fit(X=data["train"]["X"], y=data["train"]["y"])
preds = reg.predict(X=data["test"]["X"])
mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)
# log alpha, mean_squared_error and feature names in run history
run.log(name="alpha", value=alpha)
run.log(name="mse", value=mse)
run.log_list(name="columns", value=columns)
with open(model_name, "wb") as file:
joblib.dump(value=reg, filename=file)
# upload the serialized model into run history record
run.upload_file(name="outputs/" + model_name, path_or_stream=model_name)
# now delete the serialized model from local folder since it is already uploaded to run history
os.remove(path=model_name)
# now let's take a look at the experiment in Azure portal.
experiment
###Output
_____no_output_____
###Markdown
Select best model from the experimentLoad all experiment run metrics recursively from the experiment into a dictionary object.
###Code
runs = {}
run_metrics = {}
for r in tqdm(experiment.get_runs()):
metrics = r.get_metrics()
if 'mse' in metrics.keys():
runs[r.id] = r
run_metrics[r.id] = metrics
###Output
_____no_output_____
###Markdown
Now find the run with the lowest Mean Squared Error value
###Code
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
###Output
_____no_output_____
###Markdown
You can add tags to your runs to make them easier to catalog
###Code
best_run.tag(key="Description", value="The best one")
best_run.get_tags()
###Output
_____no_output_____
###Markdown
Plot MSE over alphaLet's observe the best model visually by plotting the MSE values over alpha values:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
best_alpha = run_metrics[best_run_id]['alpha']
min_mse = run_metrics[best_run_id]['mse']
alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])
sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')
plt.xlabel('alpha', fontsize = 14)
plt.ylabel('mean squared error', fontsize = 14)
plt.title('MSE over alpha', fontsize = 16)
# plot arrow
plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,
width = 0, head_width = .03, head_length = 8)
# plot "best run" text
plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)
plt.show()
###Output
_____no_output_____
###Markdown
Register the best model Find the model file saved in the run record of best run.
###Code
for f in best_run.get_file_names():
print(f)
###Output
_____no_output_____
###Markdown
Now we can register this model in the model registry of the workspace
###Code
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')
###Output
_____no_output_____
###Markdown
Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time.
###Code
from azureml.core.model import Model
models = Model.list(workspace=ws, name='best_model')
for m in models:
print(m.name, m.version)
###Output
_____no_output_____
###Markdown
You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like.
###Code
# remove the model file if it is already on disk
if os.path.isfile('model.pkl'):
os.remove('model.pkl')
# download the model
model.download(target_dir="./")
###Output
_____no_output_____
###Markdown
Scoring scriptNow we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file.The scoring script consists of two functions: `init`, which loads the model into memory when the container starts, and `run`, which makes the prediction when the web service is called. Please pay special attention to how the model is loaded in the `init()` function. When the Docker image is built for this model, the actual model file is downloaded and placed on disk, and the `get_model_path` function returns the local path where the model is placed.
###Code
with open('./score.py', 'r') as scoring_script:
print(scoring_script.read())
###Output
_____no_output_____
###Markdown
Create environment dependency fileWe need an environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can create this file manually, or we can use the `CondaDependencies` API to generate it automatically.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=["scikit-learn"])
print(myenv.serialize_to_string())
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
Deploy web service into an Azure Container InstanceThe deployment process takes the registered model and your scoring script, and builds a Docker image. It then deploys the Docker image into an Azure Container Instance as a running container with an HTTP endpoint ready for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/).Note that ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow the instructions in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. **Note:** The web service creation can take 6-7 minutes.
###Code
from azureml.core.webservice import AciWebservice, Webservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'sample name': 'AML 101'},
description='This is a great example.')
###Output
_____no_output_____
###Markdown
Note that the `WebService.deploy_from_model()` function below takes a model object registered under the workspace. It then bakes the model file into the Docker image so it can be looked up using the `Model.get_model_path()` function in `score.py`. If you have a local model file instead of a registered model object, you can also use the `WebService.deploy()` function, which registers the model and then deploys it.
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="myenv.yml")
%%time
# this will take 5-10 minutes to finish
# you can also use "az container list" command to find the ACI being deployed
service = Webservice.deploy_from_model(name='my-aci-svc',
deployment_config=aciconfig,
models=[model],
image_config=image_config,
workspace=ws)
service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Test web service
###Code
print('web service is hosted in ACI:', service.scoring_uri)
###Output
_____no_output_____
###Markdown
Use the `run` API to call the web service with one row of data to get a prediction.
###Code
import json
# score the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
service.run(input_data = test_samples)
###Output
_____no_output_____
###Markdown
Feed the entire test set and calculate the errors (residual values).
###Code
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})
result = service.run(input_data = test_samples)
residual = result - y_test
###Output
_____no_output_____
###Markdown
You can also send a raw HTTP request to test the web service.
###Code
import requests
import json
# 2 rows of input data, each with 10 made-up numerical features
input_data = "{\"data\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}"
headers = {'Content-Type':'application/json'}
# for AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers = headers)
print(resp.text)
###Output
_____no_output_____
###Markdown
Residual graphPlot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve.
###Code
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)
a0.plot(residual, 'bo', alpha=0.4);
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residue values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)
a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);
a1.set_yticklabels([])
plt.show()
###Output
_____no_output_____
###Markdown
Delete ACI to clean up Deleting ACI is super fast!
###Code
%%time
service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 01. Train in the Notebook & Deploy Model to ACI* Load workspace* Train a simple regression model directly in the Notebook python kernel* Record run history* Find the best model in run history and download it.* Deploy the model as an Azure Container Instance (ACI) Prerequisites1. Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't. 2. Install the following prerequisite libraries into your conda environment and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```3. Check that ACI is registered for your Azure Subscription.
###Code
!az provider show -n Microsoft.ContainerInstance -o table
###Output
_____no_output_____
###Markdown
If ACI is not registered, run the following command to register it. Note that you have to be a subscription owner, or this command will fail.
###Code
!az provider register -n Microsoft.ContainerInstance
###Output
_____no_output_____
###Markdown
Validate Azure ML SDK installation and get version number for debugging purposes
###Code
from azureml.core import Experiment, Run, Workspace
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment nameChoose a name for the experiment.
###Code
experiment_name = 'train-in-notebook'
###Output
_____no_output_____
###Markdown
Start a training run in local Notebook
###Code
# load diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
###Output
_____no_output_____
###Markdown
Train a simple Ridge modelTrain a very simple Ridge regression model in scikit-learn, and save it as a pickle file.
###Code
reg = Ridge(alpha = 0.03)
reg.fit(X=data['train']['X'], y=data['train']['y'])
preds = reg.predict(data['test']['X'])
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl');
###Output
_____no_output_____
###Markdown
Add experiment trackingNow, let's add Azure ML experiment logging, and upload persisted model into run record as well.
###Code
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
run.tag("Description","My first run!")
run.log('alpha', 0.03)
reg = Ridge(alpha=0.03)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
run.log('mse', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl')
run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page.
###Code
run
###Output
_____no_output_____
###Markdown
Simple parameter sweepSweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment.
###Code
import numpy as np
import os
from tqdm import tqdm
model_name = "model.pkl"
# list of numbers from 0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
# try a bunch of alpha values in a Linear Regression (Ridge) model
for alpha in tqdm(alphas):
# create a bunch of runs, each train a model with a different alpha value
with experiment.start_logging() as run:
# Use Ridge algorithm to build a regression model
reg = Ridge(alpha=alpha)
reg.fit(X=data["train"]["X"], y=data["train"]["y"])
preds = reg.predict(X=data["test"]["X"])
mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)
# log alpha, mean_squared_error and feature names in run history
run.log(name="alpha", value=alpha)
run.log(name="mse", value=mse)
run.log_list(name="columns", value=columns)
with open(model_name, "wb") as file:
joblib.dump(value=reg, filename=file)
# upload the serialized model into run history record
run.upload_file(name="outputs/" + model_name, path_or_stream=model_name)
# now delete the serialized model from local folder since it is already uploaded to run history
os.remove(path=model_name)
# now let's take a look at the experiment in Azure portal.
experiment
###Output
_____no_output_____
###Markdown
Select best model from the experimentLoad all experiment run metrics recursively from the experiment into a dictionary object.
###Code
runs = {}
run_metrics = {}
for r in tqdm(experiment.get_runs()):
metrics = r.get_metrics()
if 'mse' in metrics.keys():
runs[r.id] = r
run_metrics[r.id] = metrics
###Output
_____no_output_____
###Markdown
Now find the run with the lowest Mean Squared Error value
###Code
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
###Output
_____no_output_____
###Markdown
You can add tags to your runs to make them easier to catalog
###Code
best_run.tag(key="Description", value="The best one")
best_run.get_tags()
###Output
_____no_output_____
###Markdown
Plot MSE over alphaLet's observe the best model visually by plotting the MSE values over alpha values:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
best_alpha = run_metrics[best_run_id]['alpha']
min_mse = run_metrics[best_run_id]['mse']
alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])
sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')
plt.xlabel('alpha', fontsize = 14)
plt.ylabel('mean squared error', fontsize = 14)
plt.title('MSE over alpha', fontsize = 16)
# plot arrow
plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,
width = 0, head_width = .03, head_length = 8)
# plot "best run" text
plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)
plt.show()
###Output
_____no_output_____
###Markdown
Register the best model Find the model file saved in the run record of best run.
###Code
for f in best_run.get_file_names():
print(f)
###Output
_____no_output_____
###Markdown
Now we can register this model in the model registry of the workspace
###Code
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')
###Output
_____no_output_____
###Markdown
Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time.
###Code
from azureml.core.model import Model
models = Model.list(workspace=ws, name='best_model')
for m in models:
print(m.name, m.version)
###Output
_____no_output_____
###Markdown
You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like.
###Code
# remove the model file if it is already on disk
if os.path.isfile('model.pkl'):
os.remove('model.pkl')
# download the model
model.download(target_dir="./")
###Output
_____no_output_____
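###Markdown
As a quick sanity check of the downloaded file, you can load it with `joblib` and score a few rows locally (a minimal sketch, reusing the `X_test` split from earlier):
###Code
# load the downloaded pickle and confirm it still produces predictions locally
local_model = joblib.load('model.pkl')
print('local predictions on 5 test rows:', local_model.predict(X_test[:5]))
###Output
_____no_output_____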
###Markdown
Scoring scriptNow we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file. The scoring script consists of two functions: `init`, which loads the model into memory when the container starts, and `run`, which makes the prediction when the web service is called. Please pay special attention to how the model is loaded in the `init()` function. When the Docker image is built for this model, the actual model file is downloaded and placed on disk, and the `get_model_path` function returns the local path where the model is placed.
###Code
with open('./score.py', 'r') as scoring_script:
print(scoring_script.read())
###Output
_____no_output_____
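###Markdown
The exact `score.py` shipped with the sample may differ, but a typical scoring script for this model follows the pattern sketched below (hypothetical illustration, not the file printed above):
###Code
# Hypothetical sketch of a typical score.py for this model
import json
import numpy as np
from sklearn.externals import joblib
from azureml.core.model import Model

def init():
    # called once when the container starts: locate the registered model and load it
    global model
    model_path = Model.get_model_path('best_model')
    model = joblib.load(model_path)

def run(raw_data):
    # called for every scoring request: parse the JSON payload and return predictions
    data = np.array(json.loads(raw_data)['data'])
    result = model.predict(data)
    return json.dumps({'result': result.tolist()})
###Output
_____no_output_____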
###Markdown
Create environment dependency fileWe need an environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can create this file manually, or we can use the `CondaDependencies` API to generate it automatically.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")
print(myenv.serialize_to_string())
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
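###Markdown
If the scoring script ever needs pip-only packages in addition to conda ones, the same API can capture those before serializing. The package below is only a hypothetical example; this model needs nothing beyond scikit-learn.
###Code
# illustrative only: CondaDependencies can track pip packages alongside conda packages
extra_env = CondaDependencies()
extra_env.add_conda_package("scikit-learn")
extra_env.add_pip_package("azureml-defaults")   # hypothetical extra pip dependency
print(extra_env.serialize_to_string())
###Output
_____no_output_____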
###Markdown
Deploy web service into an Azure Container InstanceThe deployment process takes the registered model and your scoring script, and builds a Docker image. It then deploys the Docker image into an Azure Container Instance as a running container with an HTTP endpoint ready for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/). Note: ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow the instructions in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. **Note:** The web service creation can take 6-7 minutes.
###Code
from azureml.core.webservice import AciWebservice, Webservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'sample name': 'AML 101'},
description='This is a great example.')
###Output
_____no_output_____
###Markdown
Note that the `Webservice.deploy_from_model()` function below takes a model object registered under the workspace. It then bakes the model file into the Docker image so it can be looked up using the `Model.get_model_path()` function in `score.py`. If you have a local model file instead of a registered model object, you can also use the `Webservice.deploy()` function, which registers the model and then deploys.
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="myenv.yml")
%%time
# this will take 5-10 minutes to finish
# you can also use "az container list" command to find the ACI being deployed
service = Webservice.deploy_from_model(name='my-aci-svc',
deployment_config=aciconfig,
models=[model],
image_config=image_config,
workspace=ws)
service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Test web service
###Code
print('web service is hosted in ACI:', service.scoring_uri)
###Output
_____no_output_____
###Markdown
Use the `run` API to call the web service with one row of data to get a prediction.
###Code
import json
# score the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
service.run(input_data = test_samples)
###Output
_____no_output_____
###Markdown
Feed the entire test set and calculate the errors (residual values).
###Code
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})
result = json.loads(service.run(input_data = test_samples))['result']
residual = result - y_test
###Output
_____no_output_____
###Markdown
You can also send a raw HTTP request to test the web service.
###Code
import requests
import json
# 2 rows of input data, each with 10 made-up numerical features
input_data = "{\"data\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}"
headers = {'Content-Type':'application/json'}
# for an AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers = headers)
print(resp.text)
###Output
_____no_output_____
###Markdown
Residual graphPlot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve.
###Code
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)
a0.plot(residual, 'bo', alpha=0.4);
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residue values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)
a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);
a1.set_yticklabels([])
plt.show()
###Output
_____no_output_____
###Markdown
Delete ACI to clean up Deleting ACI is super fast!
###Code
%%time
service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 01. Train in the Notebook & Deploy Model to ACI* Load workspace* Train a simple regression model directly in the Notebook python kernel* Record run history* Find the best model in run history and download it.* Deploy the model as an Azure Container Instance (ACI) Prerequisites1. Make sure you go through the [00. Installation and Configuration](../../00.configuration.ipynb) Notebook first if you haven't. 2. Install the following prerequisite libraries into your conda environment and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```3. Check that ACI is registered for your Azure Subscription.
###Code
!az provider show -n Microsoft.ContainerInstance -o table
###Output
_____no_output_____
###Markdown
If ACI is not registered, run the following command to register it. Note that you have to be a subscription owner, or this command will fail.
###Code
!az provider register -n Microsoft.ContainerInstance
###Output
_____no_output_____
###Markdown
Validate Azure ML SDK installation and get version number for debugging purposes
###Code
from azureml.core import Experiment, Run, Workspace
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment nameChoose a name for the experiment.
###Code
experiment_name = 'train-in-notebook'
###Output
_____no_output_____
###Markdown
Start a training run in local Notebook
###Code
# load diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
###Output
_____no_output_____
###Markdown
Train a simple Ridge modelTrain a very simple Ridge regression model in scikit-learn, and save it as a pickle file.
###Code
reg = Ridge(alpha = 0.03)
reg.fit(X=data['train']['X'], y=data['train']['y'])
preds = reg.predict(data['test']['X'])
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl');
###Output
_____no_output_____
###Markdown
Add experiment trackingNow, let's add Azure ML experiment logging, and upload persisted model into run record as well.
###Code
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
run.tag("Description","My first run!")
run.log('alpha', 0.03)
reg = Ridge(alpha=0.03)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
run.log('mse', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl')
run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page.
###Code
run
###Output
_____no_output_____
###Markdown
Simple parameter sweepSweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment.
###Code
import numpy as np
import os
from tqdm import tqdm
model_name = "model.pkl"
# list of numbers from 0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
# try a bunch of alpha values in a Linear Regression (Ridge) model
for alpha in tqdm(alphas):
# create a bunch of runs, each train a model with a different alpha value
with experiment.start_logging() as run:
# Use Ridge algorithm to build a regression model
reg = Ridge(alpha=alpha)
reg.fit(X=data["train"]["X"], y=data["train"]["y"])
preds = reg.predict(X=data["test"]["X"])
mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)
# log alpha, mean_squared_error and feature names in run history
run.log(name="alpha", value=alpha)
run.log(name="mse", value=mse)
run.log_list(name="columns", value=columns)
with open(model_name, "wb") as file:
joblib.dump(value=reg, filename=file)
# upload the serialized model into run history record
run.upload_file(name="outputs/" + model_name, path_or_stream=model_name)
# now delete the serialized model from local folder since it is already uploaded to run history
os.remove(path=model_name)
# now let's take a look at the experiment in Azure portal.
experiment
###Output
_____no_output_____
###Markdown
Select best model from the experimentLoad all experiment run metrics recursively from the experiment into a dictionary object.
###Code
runs = {}
run_metrics = {}
for r in tqdm(experiment.get_runs()):
metrics = r.get_metrics()
if 'mse' in metrics.keys():
runs[r.id] = r
run_metrics[r.id] = metrics
###Output
_____no_output_____
###Markdown
Now find the run with the lowest Mean Squared Error value
###Code
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
###Output
_____no_output_____
###Markdown
You can add tags to your runs to make them easier to catalog
###Code
best_run.tag(key="Description", value="The best one")
best_run.get_tags()
###Output
_____no_output_____
###Markdown
Plot MSE over alphaLet's observe the best model visually by plotting the MSE values over alpha values:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
best_alpha = run_metrics[best_run_id]['alpha']
min_mse = run_metrics[best_run_id]['mse']
alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])
sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')
plt.xlabel('alpha', fontsize = 14)
plt.ylabel('mean squared error', fontsize = 14)
plt.title('MSE over alpha', fontsize = 16)
# plot arrow
plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,
width = 0, head_width = .03, head_length = 8)
# plot "best run" text
plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)
plt.show()
###Output
_____no_output_____
###Markdown
Register the best model Find the model file saved in the run record of best run.
###Code
for f in best_run.get_file_names():
print(f)
###Output
_____no_output_____
###Markdown
Now we can register this model in the model registry of the workspace
###Code
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')
###Output
_____no_output_____
###Markdown
Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time.
###Code
from azureml.core.model import Model
models = Model.list(workspace=ws, name='best_model')
for m in models:
print(m.name, m.version)
###Output
_____no_output_____
###Markdown
You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like.
###Code
# remove the model file if it is already on disk
if os.path.isfile('model.pkl'):
os.remove('model.pkl')
# download the model
model.download(target_dir="./")
###Output
_____no_output_____
###Markdown
Scoring scriptNow we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file. The scoring script consists of two functions: `init`, which loads the model into memory when the container starts, and `run`, which makes the prediction when the web service is called. Please pay special attention to how the model is loaded in the `init()` function. When the Docker image is built for this model, the actual model file is downloaded and placed on disk, and the `get_model_path` function returns the local path where the model is placed.
###Code
with open('./score.py', 'r') as scoring_script:
print(scoring_script.read())
###Output
_____no_output_____
###Markdown
Create environment dependency fileWe need an environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can create this file manually, or we can use the `CondaDependencies` API to generate it automatically.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")
print(myenv.serialize_to_string())
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
Deploy web service into an Azure Container InstanceThe deployment process takes the registered model and your scoring script, and builds a Docker image. It then deploys the Docker image into an Azure Container Instance as a running container with an HTTP endpoint ready for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/). Note: ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow the instructions in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. **Note:** The web service creation can take 6-7 minutes.
###Code
from azureml.core.webservice import AciWebservice, Webservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'sample name': 'AML 101'},
description='This is a great example.')
###Output
_____no_output_____
###Markdown
Note that the `Webservice.deploy_from_model()` function below takes a model object registered under the workspace. It then bakes the model file into the Docker image so it can be looked up using the `Model.get_model_path()` function in `score.py`. If you have a local model file instead of a registered model object, you can also use the `Webservice.deploy()` function, which registers the model and then deploys.
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="myenv.yml")
%%time
# this will take 5-10 minutes to finish
# you can also use "az container list" command to find the ACI being deployed
service = Webservice.deploy_from_model(name='my-aci-svc',
deployment_config=aciconfig,
models=[model],
image_config=image_config,
workspace=ws)
service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Test web service
###Code
print('web service is hosted in ACI:', service.scoring_uri)
###Output
_____no_output_____
###Markdown
Use the `run` API to call the web service with one row of data to get a prediction.
###Code
import json
# score the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
service.run(input_data = test_samples)
###Output
_____no_output_____
###Markdown
Feed the entire test set and calculate the errors (residual values).
###Code
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})
result = service.run(input_data = test_samples)
residual = result - y_test
###Output
_____no_output_____
###Markdown
You can also send a raw HTTP request to test the web service.
###Code
import requests
import json
# 2 rows of input data, each with 10 made-up numerical features
input_data = "{\"data\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}"
headers = {'Content-Type':'application/json'}
# for an AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers = headers)
print(resp.text)
###Output
_____no_output_____
###Markdown
Residual graphPlot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve.
###Code
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)
a0.plot(residual, 'bo', alpha=0.4);
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residue values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)
a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);
a1.set_yticklabels([])
plt.show()
###Output
_____no_output_____
###Markdown
Delete ACI to clean up Deleting ACI is super fast!
###Code
%%time
service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 01. Train in the Notebook & Deploy Model to ACI* Load workspace* Train a simple regression model directly in the Notebook python kernel* Record run history* Find the best model in run history and download it.* Deploy the model as an Azure Container Instance (ACI) Prerequisites1. Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't. 2. Install the following prerequisite libraries into your conda environment and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```3. Check that ACI is registered for your Azure Subscription.
###Code
!az provider show -n Microsoft.ContainerInstance -o table
###Output
_____no_output_____
###Markdown
If ACI is not registered, run the following command to register it. Note that you have to be a subscription owner, or this command will fail.
###Code
!az provider register -n Microsoft.ContainerInstance
###Output
_____no_output_____
###Markdown
Validate Azure ML SDK installation and get version number for debugging purposes
###Code
from azureml.core import Experiment, Run, Workspace
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment nameChoose a name for the experiment.
###Code
experiment_name = 'train-in-notebook'
###Output
_____no_output_____
###Markdown
Start a training run in local Notebook
###Code
# load diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
###Output
_____no_output_____
###Markdown
Train a simple Ridge modelTrain a very simple Ridge regression model in scikit-learn, and save it as a pickle file.
###Code
reg = Ridge(alpha = 0.03)
reg.fit(X=data['train']['X'], y=data['train']['y'])
preds = reg.predict(data['test']['X'])
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl');
###Output
_____no_output_____
###Markdown
Add experiment trackingNow, let's add Azure ML experiment logging, and upload persisted model into run record as well.
###Code
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
run.tag("Description","My first run!")
run.log('alpha', 0.03)
reg = Ridge(alpha=0.03)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
run.log('mse', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl')
run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page.
###Code
run
###Output
_____no_output_____
###Markdown
Simple parameter sweepSweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment.
###Code
import numpy as np
import os
from tqdm import tqdm
model_name = "model.pkl"
# list of numbers from 0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
# try a bunch of alpha values in a Linear Regression (Ridge) model
for alpha in tqdm(alphas):
# create a bunch of runs, each train a model with a different alpha value
with experiment.start_logging() as run:
# Use Ridge algorithm to build a regression model
reg = Ridge(alpha=alpha)
reg.fit(X=data["train"]["X"], y=data["train"]["y"])
preds = reg.predict(X=data["test"]["X"])
mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)
# log alpha, mean_squared_error and feature names in run history
run.log(name="alpha", value=alpha)
run.log(name="mse", value=mse)
run.log_list(name="columns", value=columns)
with open(model_name, "wb") as file:
joblib.dump(value=reg, filename=file)
# upload the serialized model into run history record
run.upload_file(name="outputs/" + model_name, path_or_stream=model_name)
# now delete the serialized model from local folder since it is already uploaded to run history
os.remove(path=model_name)
# now let's take a look at the experiment in Azure portal.
experiment
###Output
_____no_output_____
###Markdown
Select best model from the experimentLoad all experiment run metrics recursively from the experiment into a dictionary object.
###Code
runs = {}
run_metrics = {}
for r in tqdm(experiment.get_runs()):
metrics = r.get_metrics()
if 'mse' in metrics.keys():
runs[r.id] = r
run_metrics[r.id] = metrics
###Output
_____no_output_____
###Markdown
Now find the run with the lowest Mean Squared Error value
###Code
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
###Output
_____no_output_____
###Markdown
You can add tags to your runs to make them easier to catalog
###Code
best_run.tag(key="Description", value="The best one")
best_run.get_tags()
###Output
_____no_output_____
###Markdown
Plot MSE over alphaLet's observe the best model visually by plotting the MSE values over alpha values:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
best_alpha = run_metrics[best_run_id]['alpha']
min_mse = run_metrics[best_run_id]['mse']
alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])
sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')
plt.xlabel('alpha', fontsize = 14)
plt.ylabel('mean squared error', fontsize = 14)
plt.title('MSE over alpha', fontsize = 16)
# plot arrow
plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,
width = 0, head_width = .03, head_length = 8)
# plot "best run" text
plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)
plt.show()
###Output
_____no_output_____
###Markdown
Register the best model Find the model file saved in the run record of best run.
###Code
for f in best_run.get_file_names():
print(f)
###Output
_____no_output_____
###Markdown
Now we can register this model in the model registry of the workspace
###Code
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')
###Output
_____no_output_____
###Markdown
Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time.
###Code
from azureml.core.model import Model
models = Model.list(workspace=ws, name='best_model')
for m in models:
print(m.name, m.version)
###Output
_____no_output_____
###Markdown
You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like.
###Code
# remove the model file if it is already on disk
if os.path.isfile('model.pkl'):
os.remove('model.pkl')
# download the model
model.download(target_dir="./")
###Output
_____no_output_____
###Markdown
Scoring scriptNow we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file. The scoring script consists of two functions: `init`, which loads the model into memory when the container starts, and `run`, which makes the prediction when the web service is called. Please pay special attention to how the model is loaded in the `init()` function. When the Docker image is built for this model, the actual model file is downloaded and placed on disk, and the `get_model_path` function returns the local path where the model is placed.
###Code
with open('./score.py', 'r') as scoring_script:
print(scoring_script.read())
###Output
_____no_output_____
###Markdown
Create environment dependency fileWe need an environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can create this file manually, or we can use the `CondaDependencies` API to generate it automatically.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")
print(myenv.serialize_to_string())
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
Deploy web service into an Azure Container InstanceThe deployment process takes the registered model and your scoring script, and builds a Docker image. It then deploys the Docker image into an Azure Container Instance as a running container with an HTTP endpoint ready for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/). Note: ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow the instructions in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. **Note:** The web service creation can take 6-7 minutes.
###Code
from azureml.core.webservice import AciWebservice, Webservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'sample name': 'AML 101'},
description='This is a great example.')
###Output
_____no_output_____
###Markdown
Note that the `Webservice.deploy_from_model()` function below takes a model object registered under the workspace. It then bakes the model file into the Docker image so it can be looked up using the `Model.get_model_path()` function in `score.py`. If you have a local model file instead of a registered model object, you can also use the `Webservice.deploy()` function, which registers the model and then deploys.
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="myenv.yml")
%%time
# this will take 5-10 minutes to finish
# you can also use "az container list" command to find the ACI being deployed
service = Webservice.deploy_from_model(name='my-aci-svc',
deployment_config=aciconfig,
models=[model],
image_config=image_config,
workspace=ws)
service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Test web service
###Code
print('web service is hosted in ACI:', service.scoring_uri)
###Output
_____no_output_____
###Markdown
Use the `run` API to call the web service with one row of data to get a prediction.
###Code
import json
# score the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
service.run(input_data = test_samples)
###Output
_____no_output_____
###Markdown
Feed the entire test set and calculate the errors (residual values).
###Code
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})
result = json.loads(service.run(input_data = test_samples))['result']
residual = result - y_test
###Output
_____no_output_____
###Markdown
You can also send a raw HTTP request to test the web service.
###Code
import requests
import json
# 2 rows of input data, each with 10 made-up numerical features
input_data = "{\"data\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}"
headers = {'Content-Type':'application/json'}
# for an AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers = headers)
print(resp.text)
###Output
_____no_output_____
###Markdown
Residual graphPlot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve.
###Code
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)
a0.plot(residual, 'bo', alpha=0.4);
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residue values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)
a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);
a1.set_yticklabels([])
plt.show()
###Output
_____no_output_____
###Markdown
Delete ACI to clean up Deleting ACI is super fast!
###Code
%%time
service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 01. Train in the Notebook & Deploy Model to ACI* Load workspace* Train a simple regression model directly in the Notebook python kernel* Record run history* Find the best model in run history and download it.* Deploy the model as an Azure Container Instance (ACI) Prerequisites1. Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't. 2. Install the following prerequisite libraries into your conda environment and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```3. Check that ACI is registered for your Azure Subscription.
###Code
!az provider show -n Microsoft.ContainerInstance -o table
###Output
_____no_output_____
###Markdown
If ACI is not registered, run the following command to register it. Note that you have to be a subscription owner, or this command will fail.
###Code
!az provider register -n Microsoft.ContainerInstance
###Output
_____no_output_____
###Markdown
Validate Azure ML SDK installation and get version number for debugging purposes
###Code
from azureml.core import Experiment, Run, Workspace
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment nameChoose a name for the experiment.
###Code
experiment_name = 'train-in-notebook'
###Output
_____no_output_____
###Markdown
Start a training run in local Notebook
###Code
# load diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
###Output
_____no_output_____
###Markdown
Train a simple Ridge modelTrain a very simple Ridge regression model in scikit-learn, and save it as a pickle file.
###Code
reg = Ridge(alpha = 0.03)
reg.fit(X=data['train']['X'], y=data['train']['y'])
preds = reg.predict(data['test']['X'])
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl');
###Output
_____no_output_____
###Markdown
Add experiment trackingNow, let's add Azure ML experiment logging, and upload persisted model into run record as well.
###Code
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
run.tag("Description","My first run!")
run.log('alpha', 0.03)
reg = Ridge(alpha=0.03)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
run.log('mse', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl')
run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page.
###Code
run
###Output
_____no_output_____
###Markdown
Simple parameter sweepSweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment.
###Code
import numpy as np
import os
from tqdm import tqdm
model_name = "model.pkl"
# list of numbers from 0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
# try a bunch of alpha values in a Linear Regression (Ridge) model
for alpha in tqdm(alphas):
# create a bunch of runs, each train a model with a different alpha value
with experiment.start_logging() as run:
# Use Ridge algorithm to build a regression model
reg = Ridge(alpha=alpha)
reg.fit(X=data["train"]["X"], y=data["train"]["y"])
preds = reg.predict(X=data["test"]["X"])
mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)
# log alpha, mean_squared_error and feature names in run history
run.log(name="alpha", value=alpha)
run.log(name="mse", value=mse)
run.log_list(name="columns", value=columns)
with open(model_name, "wb") as file:
joblib.dump(value=reg, filename=file)
# upload the serialized model into run history record
run.upload_file(name="outputs/" + model_name, path_or_stream=model_name)
# now delete the serialized model from local folder since it is already uploaded to run history
os.remove(path=model_name)
# now let's take a look at the experiment in Azure portal.
experiment
###Output
_____no_output_____
###Markdown
Select best model from the experimentLoad all experiment run metrics recursively from the experiment into a dictionary object.
###Code
runs = {}
run_metrics = {}
for r in tqdm(experiment.get_runs()):
metrics = r.get_metrics()
if 'mse' in metrics.keys():
runs[r.id] = r
run_metrics[r.id] = metrics
###Output
_____no_output_____
###Markdown
Now find the run with the lowest Mean Squared Error value
###Code
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
###Output
_____no_output_____
###Markdown
You can add tags to your runs to make them easier to catalog
###Code
best_run.tag(key="Description", value="The best one")
best_run.get_tags()
###Output
_____no_output_____
###Markdown
Plot MSE over alphaLet's observe the best model visually by plotting the MSE values over alpha values:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
best_alpha = run_metrics[best_run_id]['alpha']
min_mse = run_metrics[best_run_id]['mse']
alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])
sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')
plt.xlabel('alpha', fontsize = 14)
plt.ylabel('mean squared error', fontsize = 14)
plt.title('MSE over alpha', fontsize = 16)
# plot arrow
plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,
width = 0, head_width = .03, head_length = 8)
# plot "best run" text
plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)
plt.show()
###Output
_____no_output_____
###Markdown
Register the best model Find the model file saved in the run record of best run.
###Code
for f in best_run.get_file_names():
print(f)
###Output
_____no_output_____
###Markdown
Now we can register this model in the model registry of the workspace
###Code
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')
###Output
_____no_output_____
###Markdown
Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time.
###Code
from azureml.core.model import Model
models = Model.list(workspace=ws, name='best_model')
for m in models:
print(m.name, m.version)
###Output
_____no_output_____
###Markdown
You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like.
###Code
# remove the model file if it is already on disk
if os.path.isfile('model.pkl'):
os.remove('model.pkl')
# download the model
model.download(target_dir="./")
###Output
_____no_output_____
###Markdown
Scoring scriptNow we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file. The scoring script consists of two functions: `init`, which loads the model into memory when the container starts, and `run`, which makes the prediction when the web service is called. Please pay special attention to how the model is loaded in the `init()` function. When the Docker image is built for this model, the actual model file is downloaded and placed on disk, and the `get_model_path` function returns the local path where the model is placed.
###Code
with open('./score.py', 'r') as scoring_script:
print(scoring_script.read())
###Output
_____no_output_____
###Markdown
Create environment dependency fileWe need an environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can create this file manually, or we can use the `CondaDependencies` API to generate it automatically.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")
print(myenv.serialize_to_string())
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
Deploy web service into an Azure Container InstanceThe deployment process takes the registered model and your scoring script, and builds a Docker image. It then deploys the Docker image into an Azure Container Instance as a running container with an HTTP endpoint ready for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/). Note: ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow the instructions in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. **Note:** The web service creation can take 6-7 minutes.
###Code
from azureml.core.webservice import AciWebservice, Webservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'sample name': 'AML 101'},
description='This is a great example.')
###Output
_____no_output_____
###Markdown
Note that the `Webservice.deploy_from_model()` function below takes a model object registered under the workspace. It then bakes the model file into the Docker image so it can be looked up using the `Model.get_model_path()` function in `score.py`. If you have a local model file instead of a registered model object, you can also use the `Webservice.deploy()` function, which registers the model and then deploys.
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="myenv.yml")
%%time
# this will take 5-10 minutes to finish
# you can also use "az container list" command to find the ACI being deployed
service = Webservice.deploy_from_model(name='my-aci-svc',
deployment_config=aciconfig,
models=[model],
image_config=image_config,
workspace=ws)
service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Test web service
###Code
print('web service is hosted in ACI:', service.scoring_uri)
###Output
_____no_output_____
###Markdown
Use the `run` API to call the web service with one row of data to get a prediction.
###Code
import json
# score the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
service.run(input_data = test_samples)
###Output
_____no_output_____
###Markdown
Feed the entire test set and calculate the errors (residual values).
###Code
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})
result = json.loads(service.run(input_data = test_samples))['result']
residual = result - y_test
###Output
_____no_output_____
###Markdown
You can also send a raw HTTP request to test the web service.
###Code
import requests
import json
# 2 rows of input data, each with 10 made-up numerical features
input_data = "{\"data\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}"
headers = {'Content-Type':'application/json'}
# for an AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers = headers)
print(resp.text)
###Output
_____no_output_____
###Markdown
Residual graphPlot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve.
###Code
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)
a0.plot(residual, 'bo', alpha=0.4);
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residue values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)
a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);
a1.set_yticklabels([])
plt.show()
###Output
_____no_output_____
###Markdown
Delete ACI to clean up Deleting ACI is super fast!
###Code
%%time
service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 01. Train in the Notebook & Deploy Model to ACI* Load workspace* Train a simple regression model directly in the Notebook python kernel* Record run history* Find the best model in run history and download it.* Deploy the model as an Azure Container Instance (ACI) Prerequisites1. Make sure you go through the [00. Installation and Configuration](../../00.configuration.ipynb) Notebook first if you haven't. 2. Install the following prerequisite libraries into your conda environment and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```3. Check that ACI is registered for your Azure Subscription.
###Code
!az provider show -n Microsoft.ContainerInstance -o table
###Output
_____no_output_____
###Markdown
If ACI is not registered, run the following command to register it. Note that you have to be a subscription owner, or this command will fail.
###Code
!az provider register -n Microsoft.ContainerInstance
###Output
_____no_output_____
###Markdown
Validate Azure ML SDK installation and get version number for debugging purposes
###Code
from azureml.core import Experiment, Run, Workspace
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment nameChoose a name for the experiment.
###Code
experiment_name = 'train-in-notebook'
###Output
_____no_output_____
###Markdown
Start a training run in local Notebook
###Code
# load diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
###Output
_____no_output_____
###Markdown
Train a simple Ridge modelTrain a very simple Ridge regression model in scikit-learn, and save it as a pickle file.
###Code
reg = Ridge(alpha = 0.03)
reg.fit(X=data['train']['X'], y=data['train']['y'])
preds = reg.predict(data['test']['X'])
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl');
###Output
_____no_output_____
###Markdown
Add experiment tracking Now, let's add Azure ML experiment logging, and upload the persisted model into the run record as well.
###Code
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
run.tag("Description","My first run!")
run.log('alpha', 0.03)
reg = Ridge(alpha=0.03)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
run.log('mse', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl')
run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page.
###Code
run
###Output
_____no_output_____
###Markdown
Simple parameter sweep Sweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment.
###Code
import numpy as np
import os
from tqdm import tqdm
model_name = "model.pkl"
# list of numbers from 0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
# try a bunch of alpha values in a Linear Regression (Ridge) model
for alpha in tqdm(alphas):
# create a bunch of runs, each train a model with a different alpha value
with experiment.start_logging() as run:
# Use Ridge algorithm to build a regression model
reg = Ridge(alpha=alpha)
reg.fit(X=data["train"]["X"], y=data["train"]["y"])
preds = reg.predict(X=data["test"]["X"])
mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)
# log alpha, mean_squared_error and feature names in run history
run.log(name="alpha", value=alpha)
run.log(name="mse", value=mse)
run.log_list(name="columns", value=columns)
with open(model_name, "wb") as file:
joblib.dump(value=reg, filename=file)
# upload the serialized model into run history record
run.upload_file(name="outputs/" + model_name, path_or_stream=model_name)
# now delete the serialized model from local folder since it is already uploaded to run history
os.remove(path=model_name)
# now let's take a look at the experiment in Azure portal.
experiment
###Output
_____no_output_____
###Markdown
Select best model from the experiment Load all experiment run metrics recursively from the experiment into a dictionary object.
###Code
runs = {}
run_metrics = {}
for r in tqdm(experiment.get_runs()):
metrics = r.get_metrics()
if 'mse' in metrics.keys():
runs[r.id] = r
run_metrics[r.id] = metrics
###Output
_____no_output_____
###Markdown
Now find the run with the lowest Mean Squared Error value
###Code
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
###Output
_____no_output_____
###Markdown
You can add tags to your runs to make them easier to catalog
###Code
best_run.tag(key="Description", value="The best one")
best_run.get_tags()
###Output
_____no_output_____
###Markdown
Plot MSE over alpha Let's observe the best model visually by plotting the MSE values over alpha values:
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
best_alpha = run_metrics[best_run_id]['alpha']
min_mse = run_metrics[best_run_id]['mse']
alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])
sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')
plt.xlabel('alpha', fontsize = 14)
plt.ylabel('mean squared error', fontsize = 14)
plt.title('MSE over alpha', fontsize = 16)
# plot arrow
plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,
width = 0, head_width = .03, head_length = 8)
# plot "best run" text
plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)
plt.show()
###Output
_____no_output_____
###Markdown
Register the best model Find the model file saved in the run record of the best run.
###Code
for f in best_run.get_file_names():
print(f)
###Output
_____no_output_____
###Markdown
Now we can register this model in the model registry of the workspace
###Code
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')
###Output
_____no_output_____
###Markdown
Verify that the model has been registered properly. If you have done this several times, you'll see the version number auto-increment each time.
###Code
from azureml.core.model import Model
models = Model.list(workspace=ws, name='best_model')
for m in models:
print(m.name, m.version)
###Output
_____no_output_____
###Markdown
You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like.
###Code
# remove the model file if it is already on disk
if os.path.isfile('model.pkl'):
os.remove('model.pkl')
# download the model
model.download(target_dir="./")
###Output
_____no_output_____
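With `model.pkl` back on disk, a quick local sanity check is possible. The snippet below is only an illustrative sketch (it is not part of the original notebook) and assumes `joblib`, `mean_squared_error`, `X_test`, and `y_test` from earlier cells are still in memory:
```python
# Hypothetical local smoke test of the downloaded model -- illustrative only.
local_model = joblib.load('model.pkl')           # reload the serialized Ridge model
local_preds = local_model.predict(X_test)        # predict on the held-out test set
print('Local MSE:', mean_squared_error(y_test, local_preds))
```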
###Markdown
Scoring script Now we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file. The scoring script consists of two functions: `init`, which loads the model into memory when the container starts, and `run`, which makes the prediction when the web service is called. Please pay special attention to how the model is loaded in the `init()` function. When the Docker image is built for this model, the actual model file is downloaded and placed on disk, and the `get_model_path` function returns the local path where the model is placed.
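For reference, a scoring script for this kind of scikit-learn model typically follows the pattern sketched below. This is only an illustration of the `init`/`run` structure described above, not the authoritative `score.py` shipped with this notebook (which may differ in detail); the model name `'best_model'` matches the registration name used earlier:
```python
# Illustrative sketch of a scoring script (the real score.py may differ).
import json
import numpy as np
from sklearn.externals import joblib
from azureml.core.model import Model

def init():
    # Called once when the container starts: locate and load the registered model.
    global model
    model_path = Model.get_model_path('best_model')
    model = joblib.load(model_path)

def run(raw_data):
    # Called on every scoring request: parse the JSON payload and return predictions.
    try:
        data = np.array(json.loads(raw_data)['data'])
        return model.predict(data).tolist()
    except Exception as e:
        return str(e)
```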
###Code
with open('./score.py', 'r') as scoring_script:
print(scoring_script.read())
###Output
_____no_output_____
###Markdown
Create environment dependency file We need an environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can create this file manually, or we can use the `CondaDependencies` API to create it automatically.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=["scikit-learn"])
print(myenv.serialize_to_string())
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
Deploy web service into an Azure Container Instance The deployment process takes the registered model and your scoring script, and builds a Docker image. It then deploys the Docker image into an Azure Container Instance as a running container with an HTTP endpoint ready for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/). Note that ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow the instructions in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. **Note:** The web service creation can take 6-7 minutes.
###Code
from azureml.core.webservice import AciWebservice, Webservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'sample name': 'AML 101'},
description='This is a great example.')
###Output
_____no_output_____
###Markdown
Note that the `Webservice.deploy_from_model()` function below takes a model object registered under the workspace. It then bakes the model file into the Docker image so it can be looked up using the `Model.get_model_path()` function in `score.py`. If you have a local model file instead of a registered model object, you can also use the `Webservice.deploy()` function, which registers the model and then deploys it.
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="myenv.yml")
%%time
# this will take 5-10 minutes to finish
# you can also use "az container list" command to find the ACI being deployed
service = Webservice.deploy_from_model(name='my-aci-svc',
deployment_config=aciconfig,
models=[model],
image_config=image_config,
workspace=ws)
service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Test web service
###Code
print('web service is hosted in ACI:', service.scoring_uri)
###Output
_____no_output_____
###Markdown
Use the `run` API to call the web service with one row of data to get a prediction.
###Code
import json
# score the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
service.run(input_data = test_samples)
###Output
_____no_output_____
###Markdown
Feed the entire test set and calculate the errors (residual values).
###Code
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})
result = service.run(input_data = test_samples)
residual = result - y_test
###Output
_____no_output_____
###Markdown
You can also send a raw HTTP request to test the web service.
###Code
import requests
import json
# 2 rows of input data, each with 10 made-up numerical features
input_data = "{\"data\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}"
headers = {'Content-Type':'application/json'}
# for AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers = headers)
print(resp.text)
###Output
_____no_output_____
###Markdown
Residual graph Plot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve.
###Code
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)
a0.plot(residual, 'bo', alpha=0.4);
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residue values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)
a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);
a1.set_yticklabels([])
plt.show()
###Output
_____no_output_____
###Markdown
Delete ACI to clean up Deleting ACI is super fast!
###Code
%%time
service.delete()
###Output
_____no_output_____ |
2. Convolutional Neural Networks in TensorFlow/3. Transfer Learning/assignment/C2W3_Assignment.ipynb | ###Markdown
Week 3: Transfer Learning Welcome to this assignment! This week, you are going to use a technique called `Transfer Learning` in which you utilize an already trained network to help you solve a similar problem to the one it was originally trained to solve. Let's get started!
###Code
import os
import zipfile
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import img_to_array, load_img
###Output
_____no_output_____
###Markdown
Dataset For this assignment, you will use the `Horse or Human dataset`, which contains images of horses and humans. Download the `training` and `validation` sets by running the cell below:
###Code
# Get the Horse or Human training dataset
!wget -q -P /content/ https://storage.googleapis.com/tensorflow-1-public/course2/week3/horse-or-human.zip
# Get the Horse or Human validation dataset
!wget -q -P /content/ https://storage.googleapis.com/tensorflow-1-public/course2/week3/validation-horse-or-human.zip
test_local_zip = './horse-or-human.zip'
zip_ref = zipfile.ZipFile(test_local_zip, 'r')
zip_ref.extractall('/tmp/training')
val_local_zip = './validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(val_local_zip, 'r')
zip_ref.extractall('/tmp/validation')
zip_ref.close()
###Output
_____no_output_____
###Markdown
This dataset already has a structure that is compatible with Keras' `flow_from_directory` so you don't need to move the images into subdirectories as you did in the previous assignments. However, it is still a good idea to save the paths of the images so you can use them later on:
###Code
# Define the training and validation base directories
train_dir = '/tmp/training'
validation_dir = '/tmp/validation'
# Directory with training horse pictures
train_horses_dir = os.path.join(train_dir, 'horses')
# Directory with training humans pictures
train_humans_dir = os.path.join(train_dir, 'humans')
# Directory with validation horse pictures
validation_horses_dir = os.path.join(validation_dir, 'horses')
# Directory with validation human pictures
validation_humans_dir = os.path.join(validation_dir, 'humans')
# Check the number of images for each class and set
print(f"There are {len(os.listdir(train_horses_dir))} images of horses for training.\n")
print(f"There are {len(os.listdir(train_humans_dir))} images of humans for training.\n")
print(f"There are {len(os.listdir(validation_horses_dir))} images of horses for validation.\n")
print(f"There are {len(os.listdir(validation_humans_dir))} images of humans for validation.\n")
###Output
There are 500 images of horses for training.
There are 527 images of humans for training.
There are 128 images of horses for validation.
There are 128 images of humans for validation.
###Markdown
Now take a look at a sample image of each one of the classes:
###Code
print("Sample horse image:")
plt.imshow(load_img(f"{os.path.join(train_horses_dir, os.listdir(train_horses_dir)[0])}"))
plt.show()
print("\nSample human image:")
plt.imshow(load_img(f"{os.path.join(train_humans_dir, os.listdir(train_humans_dir)[0])}"))
plt.show()
###Output
Sample horse image:
###Markdown
`matplotlib` makes it easy to see that these images have a resolution of 300x300 and are colored, but you can double check this by using the code below:
###Code
# Load the first example of a horse
sample_image = load_img(f"{os.path.join(train_horses_dir, os.listdir(train_horses_dir)[0])}")
# Convert the image into its numpy array representation
sample_array = img_to_array(sample_image)
print(f"Each image has shape: {sample_array.shape}")
###Output
Each image has shape: (300, 300, 3)
###Markdown
As expected, the sample image has a resolution of 300x300 and the last dimension is used for each one of the RGB channels to represent color. Training and Validation Generators Now that you know the images you are dealing with, it is time for you to code the generators that will feed these images to your network. For this, complete the `train_val_generators` function below: **Important Note:** The images have a resolution of 300x300 but the `flow_from_directory` method you will use allows you to set a target resolution. In this case, **set a `target_size` of (150, 150)**. This will heavily lower the number of trainable parameters in your final network, yielding much quicker training times without compromising the accuracy!
###Code
# GRADED FUNCTION: train_val_generators
def train_val_generators(TRAINING_DIR, VALIDATION_DIR):
### START CODE HERE
# Instantiate the ImageDataGenerator class
# Don't forget to normalize pixel values and set arguments to augment the images
train_datagen = ImageDataGenerator(rescale = 1./255.)
# Pass in the appropriate arguments to the flow_from_directory method
train_generator = train_datagen.flow_from_directory(directory=TRAINING_DIR,
batch_size=32,
class_mode='binary',
target_size=(150, 150))
# Instantiate the ImageDataGenerator class (don't forget to set the rescale argument)
# Remember that validation data should not be augmented
validation_datagen = ImageDataGenerator(rescale = 1./255.)
# Pass in the appropriate arguments to the flow_from_directory method
validation_generator = validation_datagen.flow_from_directory(directory=VALIDATION_DIR,
batch_size=32,
class_mode='binary',
target_size=(150, 150))
### END CODE HERE
return train_generator, validation_generator
# Test your generators
train_generator, validation_generator = train_val_generators(train_dir, validation_dir)
###Output
Found 1027 images belonging to 2 classes.
Found 256 images belonging to 2 classes.
###Markdown
**Expected Output:**```Found 1027 images belonging to 2 classes.Found 256 images belonging to 2 classes.``` Transfer learning - Create the pre-trained model Download the `inception V3` weights into the `/tmp/` directory:
###Code
# Download the inception v3 weights
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \
-O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
###Output
--2022-03-26 20:10:50-- https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
Resolving storage.googleapis.com (storage.googleapis.com)... 142.250.159.128, 74.125.132.128, 74.125.201.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|142.250.159.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 87910968 (84M) [application/x-hdf]
Saving to: ‘/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5’
/tmp/inception_v3_w 100%[===================>] 83.84M 214MB/s in 0.4s
2022-03-26 20:10:50 (214 MB/s) - ‘/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5’ saved [87910968/87910968]
###Markdown
Now load the `InceptionV3` model and save the path to the weights you just downloaded:
###Code
# Import the inception model
from tensorflow.keras.applications.inception_v3 import InceptionV3
# Create an instance of the inception model from the local pre-trained weights
local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
###Output
_____no_output_____
###Markdown
Complete the `create_pre_trained_model` function below. You should specify the correct `input_shape` for the model (remember that you set a new resolution for the images instead of the native 300x300) and make all of the layers non-trainable:
###Code
# GRADED FUNCTION: create_pre_trained_model
def create_pre_trained_model(local_weights_file):
### START CODE HERE
pre_trained_model = InceptionV3(input_shape = (150, 150, 3),
include_top = False,
weights = None)
pre_trained_model.load_weights(local_weights_file)
# Make all the layers in the pre-trained model non-trainable
for layer in pre_trained_model.layers:
layer.trainable = False
### END CODE HERE
return pre_trained_model
###Output
_____no_output_____
###Markdown
Check that everything went well by comparing the last few rows of the model summary to the expected output:
###Code
pre_trained_model = create_pre_trained_model(local_weights_file)
# Print the model summary
pre_trained_model.summary()
###Output
Model: "inception_v3"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 150, 150, 3 0 []
)]
conv2d (Conv2D) (None, 74, 74, 32) 864 ['input_1[0][0]']
batch_normalization (BatchNorm (None, 74, 74, 32) 96 ['conv2d[0][0]']
alization)
activation (Activation) (None, 74, 74, 32) 0 ['batch_normalization[0][0]']
conv2d_1 (Conv2D) (None, 72, 72, 32) 9216 ['activation[0][0]']
batch_normalization_1 (BatchNo (None, 72, 72, 32) 96 ['conv2d_1[0][0]']
rmalization)
activation_1 (Activation) (None, 72, 72, 32) 0 ['batch_normalization_1[0][0]']
conv2d_2 (Conv2D) (None, 72, 72, 64) 18432 ['activation_1[0][0]']
batch_normalization_2 (BatchNo (None, 72, 72, 64) 192 ['conv2d_2[0][0]']
rmalization)
activation_2 (Activation) (None, 72, 72, 64) 0 ['batch_normalization_2[0][0]']
max_pooling2d (MaxPooling2D) (None, 35, 35, 64) 0 ['activation_2[0][0]']
conv2d_3 (Conv2D) (None, 35, 35, 80) 5120 ['max_pooling2d[0][0]']
batch_normalization_3 (BatchNo (None, 35, 35, 80) 240 ['conv2d_3[0][0]']
rmalization)
activation_3 (Activation) (None, 35, 35, 80) 0 ['batch_normalization_3[0][0]']
conv2d_4 (Conv2D) (None, 33, 33, 192) 138240 ['activation_3[0][0]']
batch_normalization_4 (BatchNo (None, 33, 33, 192) 576 ['conv2d_4[0][0]']
rmalization)
activation_4 (Activation) (None, 33, 33, 192) 0 ['batch_normalization_4[0][0]']
max_pooling2d_1 (MaxPooling2D) (None, 16, 16, 192) 0 ['activation_4[0][0]']
conv2d_8 (Conv2D) (None, 16, 16, 64) 12288 ['max_pooling2d_1[0][0]']
batch_normalization_8 (BatchNo (None, 16, 16, 64) 192 ['conv2d_8[0][0]']
rmalization)
activation_8 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_8[0][0]']
conv2d_6 (Conv2D) (None, 16, 16, 48) 9216 ['max_pooling2d_1[0][0]']
conv2d_9 (Conv2D) (None, 16, 16, 96) 55296 ['activation_8[0][0]']
batch_normalization_6 (BatchNo (None, 16, 16, 48) 144 ['conv2d_6[0][0]']
rmalization)
batch_normalization_9 (BatchNo (None, 16, 16, 96) 288 ['conv2d_9[0][0]']
rmalization)
activation_6 (Activation) (None, 16, 16, 48) 0 ['batch_normalization_6[0][0]']
activation_9 (Activation) (None, 16, 16, 96) 0 ['batch_normalization_9[0][0]']
average_pooling2d (AveragePool (None, 16, 16, 192) 0 ['max_pooling2d_1[0][0]']
ing2D)
conv2d_5 (Conv2D) (None, 16, 16, 64) 12288 ['max_pooling2d_1[0][0]']
conv2d_7 (Conv2D) (None, 16, 16, 64) 76800 ['activation_6[0][0]']
conv2d_10 (Conv2D) (None, 16, 16, 96) 82944 ['activation_9[0][0]']
conv2d_11 (Conv2D) (None, 16, 16, 32) 6144 ['average_pooling2d[0][0]']
batch_normalization_5 (BatchNo (None, 16, 16, 64) 192 ['conv2d_5[0][0]']
rmalization)
batch_normalization_7 (BatchNo (None, 16, 16, 64) 192 ['conv2d_7[0][0]']
rmalization)
batch_normalization_10 (BatchN (None, 16, 16, 96) 288 ['conv2d_10[0][0]']
ormalization)
batch_normalization_11 (BatchN (None, 16, 16, 32) 96 ['conv2d_11[0][0]']
ormalization)
activation_5 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_5[0][0]']
activation_7 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_7[0][0]']
activation_10 (Activation) (None, 16, 16, 96) 0 ['batch_normalization_10[0][0]']
activation_11 (Activation) (None, 16, 16, 32) 0 ['batch_normalization_11[0][0]']
mixed0 (Concatenate) (None, 16, 16, 256) 0 ['activation_5[0][0]',
'activation_7[0][0]',
'activation_10[0][0]',
'activation_11[0][0]']
conv2d_15 (Conv2D) (None, 16, 16, 64) 16384 ['mixed0[0][0]']
batch_normalization_15 (BatchN (None, 16, 16, 64) 192 ['conv2d_15[0][0]']
ormalization)
activation_15 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_15[0][0]']
conv2d_13 (Conv2D) (None, 16, 16, 48) 12288 ['mixed0[0][0]']
conv2d_16 (Conv2D) (None, 16, 16, 96) 55296 ['activation_15[0][0]']
batch_normalization_13 (BatchN (None, 16, 16, 48) 144 ['conv2d_13[0][0]']
ormalization)
batch_normalization_16 (BatchN (None, 16, 16, 96) 288 ['conv2d_16[0][0]']
ormalization)
activation_13 (Activation) (None, 16, 16, 48) 0 ['batch_normalization_13[0][0]']
activation_16 (Activation) (None, 16, 16, 96) 0 ['batch_normalization_16[0][0]']
average_pooling2d_1 (AveragePo (None, 16, 16, 256) 0 ['mixed0[0][0]']
oling2D)
conv2d_12 (Conv2D) (None, 16, 16, 64) 16384 ['mixed0[0][0]']
conv2d_14 (Conv2D) (None, 16, 16, 64) 76800 ['activation_13[0][0]']
conv2d_17 (Conv2D) (None, 16, 16, 96) 82944 ['activation_16[0][0]']
conv2d_18 (Conv2D) (None, 16, 16, 64) 16384 ['average_pooling2d_1[0][0]']
batch_normalization_12 (BatchN (None, 16, 16, 64) 192 ['conv2d_12[0][0]']
ormalization)
batch_normalization_14 (BatchN (None, 16, 16, 64) 192 ['conv2d_14[0][0]']
ormalization)
batch_normalization_17 (BatchN (None, 16, 16, 96) 288 ['conv2d_17[0][0]']
ormalization)
batch_normalization_18 (BatchN (None, 16, 16, 64) 192 ['conv2d_18[0][0]']
ormalization)
activation_12 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_12[0][0]']
activation_14 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_14[0][0]']
activation_17 (Activation) (None, 16, 16, 96) 0 ['batch_normalization_17[0][0]']
activation_18 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_18[0][0]']
mixed1 (Concatenate) (None, 16, 16, 288) 0 ['activation_12[0][0]',
'activation_14[0][0]',
'activation_17[0][0]',
'activation_18[0][0]']
conv2d_22 (Conv2D) (None, 16, 16, 64) 18432 ['mixed1[0][0]']
batch_normalization_22 (BatchN (None, 16, 16, 64) 192 ['conv2d_22[0][0]']
ormalization)
activation_22 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_22[0][0]']
conv2d_20 (Conv2D) (None, 16, 16, 48) 13824 ['mixed1[0][0]']
conv2d_23 (Conv2D) (None, 16, 16, 96) 55296 ['activation_22[0][0]']
batch_normalization_20 (BatchN (None, 16, 16, 48) 144 ['conv2d_20[0][0]']
ormalization)
batch_normalization_23 (BatchN (None, 16, 16, 96) 288 ['conv2d_23[0][0]']
ormalization)
activation_20 (Activation) (None, 16, 16, 48) 0 ['batch_normalization_20[0][0]']
activation_23 (Activation) (None, 16, 16, 96) 0 ['batch_normalization_23[0][0]']
average_pooling2d_2 (AveragePo (None, 16, 16, 288) 0 ['mixed1[0][0]']
oling2D)
conv2d_19 (Conv2D) (None, 16, 16, 64) 18432 ['mixed1[0][0]']
conv2d_21 (Conv2D) (None, 16, 16, 64) 76800 ['activation_20[0][0]']
conv2d_24 (Conv2D) (None, 16, 16, 96) 82944 ['activation_23[0][0]']
conv2d_25 (Conv2D) (None, 16, 16, 64) 18432 ['average_pooling2d_2[0][0]']
batch_normalization_19 (BatchN (None, 16, 16, 64) 192 ['conv2d_19[0][0]']
ormalization)
batch_normalization_21 (BatchN (None, 16, 16, 64) 192 ['conv2d_21[0][0]']
ormalization)
batch_normalization_24 (BatchN (None, 16, 16, 96) 288 ['conv2d_24[0][0]']
ormalization)
batch_normalization_25 (BatchN (None, 16, 16, 64) 192 ['conv2d_25[0][0]']
ormalization)
activation_19 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_19[0][0]']
activation_21 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_21[0][0]']
activation_24 (Activation) (None, 16, 16, 96) 0 ['batch_normalization_24[0][0]']
activation_25 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_25[0][0]']
mixed2 (Concatenate) (None, 16, 16, 288) 0 ['activation_19[0][0]',
'activation_21[0][0]',
'activation_24[0][0]',
'activation_25[0][0]']
conv2d_27 (Conv2D) (None, 16, 16, 64) 18432 ['mixed2[0][0]']
batch_normalization_27 (BatchN (None, 16, 16, 64) 192 ['conv2d_27[0][0]']
ormalization)
activation_27 (Activation) (None, 16, 16, 64) 0 ['batch_normalization_27[0][0]']
conv2d_28 (Conv2D) (None, 16, 16, 96) 55296 ['activation_27[0][0]']
batch_normalization_28 (BatchN (None, 16, 16, 96) 288 ['conv2d_28[0][0]']
ormalization)
activation_28 (Activation) (None, 16, 16, 96) 0 ['batch_normalization_28[0][0]']
conv2d_26 (Conv2D) (None, 7, 7, 384) 995328 ['mixed2[0][0]']
conv2d_29 (Conv2D) (None, 7, 7, 96) 82944 ['activation_28[0][0]']
batch_normalization_26 (BatchN (None, 7, 7, 384) 1152 ['conv2d_26[0][0]']
ormalization)
batch_normalization_29 (BatchN (None, 7, 7, 96) 288 ['conv2d_29[0][0]']
ormalization)
activation_26 (Activation) (None, 7, 7, 384) 0 ['batch_normalization_26[0][0]']
activation_29 (Activation) (None, 7, 7, 96) 0 ['batch_normalization_29[0][0]']
max_pooling2d_2 (MaxPooling2D) (None, 7, 7, 288) 0 ['mixed2[0][0]']
mixed3 (Concatenate) (None, 7, 7, 768) 0 ['activation_26[0][0]',
'activation_29[0][0]',
'max_pooling2d_2[0][0]']
conv2d_34 (Conv2D) (None, 7, 7, 128) 98304 ['mixed3[0][0]']
batch_normalization_34 (BatchN (None, 7, 7, 128) 384 ['conv2d_34[0][0]']
ormalization)
activation_34 (Activation) (None, 7, 7, 128) 0 ['batch_normalization_34[0][0]']
conv2d_35 (Conv2D) (None, 7, 7, 128) 114688 ['activation_34[0][0]']
batch_normalization_35 (BatchN (None, 7, 7, 128) 384 ['conv2d_35[0][0]']
ormalization)
activation_35 (Activation) (None, 7, 7, 128) 0 ['batch_normalization_35[0][0]']
conv2d_31 (Conv2D) (None, 7, 7, 128) 98304 ['mixed3[0][0]']
conv2d_36 (Conv2D) (None, 7, 7, 128) 114688 ['activation_35[0][0]']
batch_normalization_31 (BatchN (None, 7, 7, 128) 384 ['conv2d_31[0][0]']
ormalization)
batch_normalization_36 (BatchN (None, 7, 7, 128) 384 ['conv2d_36[0][0]']
ormalization)
activation_31 (Activation) (None, 7, 7, 128) 0 ['batch_normalization_31[0][0]']
activation_36 (Activation) (None, 7, 7, 128) 0 ['batch_normalization_36[0][0]']
conv2d_32 (Conv2D) (None, 7, 7, 128) 114688 ['activation_31[0][0]']
conv2d_37 (Conv2D) (None, 7, 7, 128) 114688 ['activation_36[0][0]']
batch_normalization_32 (BatchN (None, 7, 7, 128) 384 ['conv2d_32[0][0]']
ormalization)
batch_normalization_37 (BatchN (None, 7, 7, 128) 384 ['conv2d_37[0][0]']
ormalization)
activation_32 (Activation) (None, 7, 7, 128) 0 ['batch_normalization_32[0][0]']
activation_37 (Activation) (None, 7, 7, 128) 0 ['batch_normalization_37[0][0]']
average_pooling2d_3 (AveragePo (None, 7, 7, 768) 0 ['mixed3[0][0]']
oling2D)
conv2d_30 (Conv2D) (None, 7, 7, 192) 147456 ['mixed3[0][0]']
conv2d_33 (Conv2D) (None, 7, 7, 192) 172032 ['activation_32[0][0]']
conv2d_38 (Conv2D) (None, 7, 7, 192) 172032 ['activation_37[0][0]']
conv2d_39 (Conv2D) (None, 7, 7, 192) 147456 ['average_pooling2d_3[0][0]']
batch_normalization_30 (BatchN (None, 7, 7, 192) 576 ['conv2d_30[0][0]']
ormalization)
batch_normalization_33 (BatchN (None, 7, 7, 192) 576 ['conv2d_33[0][0]']
ormalization)
batch_normalization_38 (BatchN (None, 7, 7, 192) 576 ['conv2d_38[0][0]']
ormalization)
batch_normalization_39 (BatchN (None, 7, 7, 192) 576 ['conv2d_39[0][0]']
ormalization)
activation_30 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_30[0][0]']
activation_33 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_33[0][0]']
activation_38 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_38[0][0]']
activation_39 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_39[0][0]']
mixed4 (Concatenate) (None, 7, 7, 768) 0 ['activation_30[0][0]',
'activation_33[0][0]',
'activation_38[0][0]',
'activation_39[0][0]']
conv2d_44 (Conv2D) (None, 7, 7, 160) 122880 ['mixed4[0][0]']
batch_normalization_44 (BatchN (None, 7, 7, 160) 480 ['conv2d_44[0][0]']
ormalization)
activation_44 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_44[0][0]']
conv2d_45 (Conv2D) (None, 7, 7, 160) 179200 ['activation_44[0][0]']
batch_normalization_45 (BatchN (None, 7, 7, 160) 480 ['conv2d_45[0][0]']
ormalization)
activation_45 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_45[0][0]']
conv2d_41 (Conv2D) (None, 7, 7, 160) 122880 ['mixed4[0][0]']
conv2d_46 (Conv2D) (None, 7, 7, 160) 179200 ['activation_45[0][0]']
batch_normalization_41 (BatchN (None, 7, 7, 160) 480 ['conv2d_41[0][0]']
ormalization)
batch_normalization_46 (BatchN (None, 7, 7, 160) 480 ['conv2d_46[0][0]']
ormalization)
activation_41 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_41[0][0]']
activation_46 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_46[0][0]']
conv2d_42 (Conv2D) (None, 7, 7, 160) 179200 ['activation_41[0][0]']
conv2d_47 (Conv2D) (None, 7, 7, 160) 179200 ['activation_46[0][0]']
batch_normalization_42 (BatchN (None, 7, 7, 160) 480 ['conv2d_42[0][0]']
ormalization)
batch_normalization_47 (BatchN (None, 7, 7, 160) 480 ['conv2d_47[0][0]']
ormalization)
activation_42 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_42[0][0]']
activation_47 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_47[0][0]']
average_pooling2d_4 (AveragePo (None, 7, 7, 768) 0 ['mixed4[0][0]']
oling2D)
conv2d_40 (Conv2D) (None, 7, 7, 192) 147456 ['mixed4[0][0]']
conv2d_43 (Conv2D) (None, 7, 7, 192) 215040 ['activation_42[0][0]']
conv2d_48 (Conv2D) (None, 7, 7, 192) 215040 ['activation_47[0][0]']
conv2d_49 (Conv2D) (None, 7, 7, 192) 147456 ['average_pooling2d_4[0][0]']
batch_normalization_40 (BatchN (None, 7, 7, 192) 576 ['conv2d_40[0][0]']
ormalization)
batch_normalization_43 (BatchN (None, 7, 7, 192) 576 ['conv2d_43[0][0]']
ormalization)
batch_normalization_48 (BatchN (None, 7, 7, 192) 576 ['conv2d_48[0][0]']
ormalization)
batch_normalization_49 (BatchN (None, 7, 7, 192) 576 ['conv2d_49[0][0]']
ormalization)
activation_40 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_40[0][0]']
activation_43 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_43[0][0]']
activation_48 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_48[0][0]']
activation_49 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_49[0][0]']
mixed5 (Concatenate) (None, 7, 7, 768) 0 ['activation_40[0][0]',
'activation_43[0][0]',
'activation_48[0][0]',
'activation_49[0][0]']
conv2d_54 (Conv2D) (None, 7, 7, 160) 122880 ['mixed5[0][0]']
batch_normalization_54 (BatchN (None, 7, 7, 160) 480 ['conv2d_54[0][0]']
ormalization)
activation_54 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_54[0][0]']
conv2d_55 (Conv2D) (None, 7, 7, 160) 179200 ['activation_54[0][0]']
batch_normalization_55 (BatchN (None, 7, 7, 160) 480 ['conv2d_55[0][0]']
ormalization)
activation_55 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_55[0][0]']
conv2d_51 (Conv2D) (None, 7, 7, 160) 122880 ['mixed5[0][0]']
conv2d_56 (Conv2D) (None, 7, 7, 160) 179200 ['activation_55[0][0]']
batch_normalization_51 (BatchN (None, 7, 7, 160) 480 ['conv2d_51[0][0]']
ormalization)
batch_normalization_56 (BatchN (None, 7, 7, 160) 480 ['conv2d_56[0][0]']
ormalization)
activation_51 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_51[0][0]']
activation_56 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_56[0][0]']
conv2d_52 (Conv2D) (None, 7, 7, 160) 179200 ['activation_51[0][0]']
conv2d_57 (Conv2D) (None, 7, 7, 160) 179200 ['activation_56[0][0]']
batch_normalization_52 (BatchN (None, 7, 7, 160) 480 ['conv2d_52[0][0]']
ormalization)
batch_normalization_57 (BatchN (None, 7, 7, 160) 480 ['conv2d_57[0][0]']
ormalization)
activation_52 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_52[0][0]']
activation_57 (Activation) (None, 7, 7, 160) 0 ['batch_normalization_57[0][0]']
average_pooling2d_5 (AveragePo (None, 7, 7, 768) 0 ['mixed5[0][0]']
oling2D)
conv2d_50 (Conv2D) (None, 7, 7, 192) 147456 ['mixed5[0][0]']
conv2d_53 (Conv2D) (None, 7, 7, 192) 215040 ['activation_52[0][0]']
conv2d_58 (Conv2D) (None, 7, 7, 192) 215040 ['activation_57[0][0]']
conv2d_59 (Conv2D) (None, 7, 7, 192) 147456 ['average_pooling2d_5[0][0]']
batch_normalization_50 (BatchN (None, 7, 7, 192) 576 ['conv2d_50[0][0]']
ormalization)
batch_normalization_53 (BatchN (None, 7, 7, 192) 576 ['conv2d_53[0][0]']
ormalization)
batch_normalization_58 (BatchN (None, 7, 7, 192) 576 ['conv2d_58[0][0]']
ormalization)
batch_normalization_59 (BatchN (None, 7, 7, 192) 576 ['conv2d_59[0][0]']
ormalization)
activation_50 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_50[0][0]']
activation_53 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_53[0][0]']
activation_58 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_58[0][0]']
activation_59 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_59[0][0]']
mixed6 (Concatenate) (None, 7, 7, 768) 0 ['activation_50[0][0]',
'activation_53[0][0]',
'activation_58[0][0]',
'activation_59[0][0]']
conv2d_64 (Conv2D) (None, 7, 7, 192) 147456 ['mixed6[0][0]']
batch_normalization_64 (BatchN (None, 7, 7, 192) 576 ['conv2d_64[0][0]']
ormalization)
activation_64 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_64[0][0]']
conv2d_65 (Conv2D) (None, 7, 7, 192) 258048 ['activation_64[0][0]']
batch_normalization_65 (BatchN (None, 7, 7, 192) 576 ['conv2d_65[0][0]']
ormalization)
activation_65 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_65[0][0]']
conv2d_61 (Conv2D) (None, 7, 7, 192) 147456 ['mixed6[0][0]']
conv2d_66 (Conv2D) (None, 7, 7, 192) 258048 ['activation_65[0][0]']
batch_normalization_61 (BatchN (None, 7, 7, 192) 576 ['conv2d_61[0][0]']
ormalization)
batch_normalization_66 (BatchN (None, 7, 7, 192) 576 ['conv2d_66[0][0]']
ormalization)
activation_61 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_61[0][0]']
activation_66 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_66[0][0]']
conv2d_62 (Conv2D) (None, 7, 7, 192) 258048 ['activation_61[0][0]']
conv2d_67 (Conv2D) (None, 7, 7, 192) 258048 ['activation_66[0][0]']
batch_normalization_62 (BatchN (None, 7, 7, 192) 576 ['conv2d_62[0][0]']
ormalization)
batch_normalization_67 (BatchN (None, 7, 7, 192) 576 ['conv2d_67[0][0]']
ormalization)
activation_62 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_62[0][0]']
activation_67 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_67[0][0]']
average_pooling2d_6 (AveragePo (None, 7, 7, 768) 0 ['mixed6[0][0]']
oling2D)
conv2d_60 (Conv2D) (None, 7, 7, 192) 147456 ['mixed6[0][0]']
conv2d_63 (Conv2D) (None, 7, 7, 192) 258048 ['activation_62[0][0]']
conv2d_68 (Conv2D) (None, 7, 7, 192) 258048 ['activation_67[0][0]']
conv2d_69 (Conv2D) (None, 7, 7, 192) 147456 ['average_pooling2d_6[0][0]']
batch_normalization_60 (BatchN (None, 7, 7, 192) 576 ['conv2d_60[0][0]']
ormalization)
batch_normalization_63 (BatchN (None, 7, 7, 192) 576 ['conv2d_63[0][0]']
ormalization)
batch_normalization_68 (BatchN (None, 7, 7, 192) 576 ['conv2d_68[0][0]']
ormalization)
batch_normalization_69 (BatchN (None, 7, 7, 192) 576 ['conv2d_69[0][0]']
ormalization)
activation_60 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_60[0][0]']
activation_63 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_63[0][0]']
activation_68 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_68[0][0]']
activation_69 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_69[0][0]']
mixed7 (Concatenate) (None, 7, 7, 768) 0 ['activation_60[0][0]',
'activation_63[0][0]',
'activation_68[0][0]',
'activation_69[0][0]']
conv2d_72 (Conv2D) (None, 7, 7, 192) 147456 ['mixed7[0][0]']
batch_normalization_72 (BatchN (None, 7, 7, 192) 576 ['conv2d_72[0][0]']
ormalization)
activation_72 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_72[0][0]']
conv2d_73 (Conv2D) (None, 7, 7, 192) 258048 ['activation_72[0][0]']
batch_normalization_73 (BatchN (None, 7, 7, 192) 576 ['conv2d_73[0][0]']
ormalization)
activation_73 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_73[0][0]']
conv2d_70 (Conv2D) (None, 7, 7, 192) 147456 ['mixed7[0][0]']
conv2d_74 (Conv2D) (None, 7, 7, 192) 258048 ['activation_73[0][0]']
batch_normalization_70 (BatchN (None, 7, 7, 192) 576 ['conv2d_70[0][0]']
ormalization)
batch_normalization_74 (BatchN (None, 7, 7, 192) 576 ['conv2d_74[0][0]']
ormalization)
activation_70 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_70[0][0]']
activation_74 (Activation) (None, 7, 7, 192) 0 ['batch_normalization_74[0][0]']
conv2d_71 (Conv2D) (None, 3, 3, 320) 552960 ['activation_70[0][0]']
conv2d_75 (Conv2D) (None, 3, 3, 192) 331776 ['activation_74[0][0]']
batch_normalization_71 (BatchN (None, 3, 3, 320) 960 ['conv2d_71[0][0]']
ormalization)
batch_normalization_75 (BatchN (None, 3, 3, 192) 576 ['conv2d_75[0][0]']
ormalization)
activation_71 (Activation) (None, 3, 3, 320) 0 ['batch_normalization_71[0][0]']
activation_75 (Activation) (None, 3, 3, 192) 0 ['batch_normalization_75[0][0]']
max_pooling2d_3 (MaxPooling2D) (None, 3, 3, 768) 0 ['mixed7[0][0]']
mixed8 (Concatenate) (None, 3, 3, 1280) 0 ['activation_71[0][0]',
'activation_75[0][0]',
'max_pooling2d_3[0][0]']
conv2d_80 (Conv2D) (None, 3, 3, 448) 573440 ['mixed8[0][0]']
batch_normalization_80 (BatchN (None, 3, 3, 448) 1344 ['conv2d_80[0][0]']
ormalization)
activation_80 (Activation) (None, 3, 3, 448) 0 ['batch_normalization_80[0][0]']
conv2d_77 (Conv2D) (None, 3, 3, 384) 491520 ['mixed8[0][0]']
conv2d_81 (Conv2D) (None, 3, 3, 384) 1548288 ['activation_80[0][0]']
batch_normalization_77 (BatchN (None, 3, 3, 384) 1152 ['conv2d_77[0][0]']
ormalization)
batch_normalization_81 (BatchN (None, 3, 3, 384) 1152 ['conv2d_81[0][0]']
ormalization)
activation_77 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_77[0][0]']
activation_81 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_81[0][0]']
conv2d_78 (Conv2D) (None, 3, 3, 384) 442368 ['activation_77[0][0]']
conv2d_79 (Conv2D) (None, 3, 3, 384) 442368 ['activation_77[0][0]']
conv2d_82 (Conv2D) (None, 3, 3, 384) 442368 ['activation_81[0][0]']
conv2d_83 (Conv2D) (None, 3, 3, 384) 442368 ['activation_81[0][0]']
average_pooling2d_7 (AveragePo (None, 3, 3, 1280) 0 ['mixed8[0][0]']
oling2D)
conv2d_76 (Conv2D) (None, 3, 3, 320) 409600 ['mixed8[0][0]']
batch_normalization_78 (BatchN (None, 3, 3, 384) 1152 ['conv2d_78[0][0]']
ormalization)
batch_normalization_79 (BatchN (None, 3, 3, 384) 1152 ['conv2d_79[0][0]']
ormalization)
batch_normalization_82 (BatchN (None, 3, 3, 384) 1152 ['conv2d_82[0][0]']
ormalization)
batch_normalization_83 (BatchN (None, 3, 3, 384) 1152 ['conv2d_83[0][0]']
ormalization)
conv2d_84 (Conv2D) (None, 3, 3, 192) 245760 ['average_pooling2d_7[0][0]']
batch_normalization_76 (BatchN (None, 3, 3, 320) 960 ['conv2d_76[0][0]']
ormalization)
activation_78 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_78[0][0]']
activation_79 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_79[0][0]']
activation_82 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_82[0][0]']
activation_83 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_83[0][0]']
batch_normalization_84 (BatchN (None, 3, 3, 192) 576 ['conv2d_84[0][0]']
ormalization)
activation_76 (Activation) (None, 3, 3, 320) 0 ['batch_normalization_76[0][0]']
mixed9_0 (Concatenate) (None, 3, 3, 768) 0 ['activation_78[0][0]',
'activation_79[0][0]']
concatenate (Concatenate) (None, 3, 3, 768) 0 ['activation_82[0][0]',
'activation_83[0][0]']
activation_84 (Activation) (None, 3, 3, 192) 0 ['batch_normalization_84[0][0]']
mixed9 (Concatenate) (None, 3, 3, 2048) 0 ['activation_76[0][0]',
'mixed9_0[0][0]',
'concatenate[0][0]',
'activation_84[0][0]']
conv2d_89 (Conv2D) (None, 3, 3, 448) 917504 ['mixed9[0][0]']
batch_normalization_89 (BatchN (None, 3, 3, 448) 1344 ['conv2d_89[0][0]']
ormalization)
activation_89 (Activation) (None, 3, 3, 448) 0 ['batch_normalization_89[0][0]']
conv2d_86 (Conv2D) (None, 3, 3, 384) 786432 ['mixed9[0][0]']
conv2d_90 (Conv2D) (None, 3, 3, 384) 1548288 ['activation_89[0][0]']
batch_normalization_86 (BatchN (None, 3, 3, 384) 1152 ['conv2d_86[0][0]']
ormalization)
batch_normalization_90 (BatchN (None, 3, 3, 384) 1152 ['conv2d_90[0][0]']
ormalization)
activation_86 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_86[0][0]']
activation_90 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_90[0][0]']
conv2d_87 (Conv2D) (None, 3, 3, 384) 442368 ['activation_86[0][0]']
conv2d_88 (Conv2D) (None, 3, 3, 384) 442368 ['activation_86[0][0]']
conv2d_91 (Conv2D) (None, 3, 3, 384) 442368 ['activation_90[0][0]']
conv2d_92 (Conv2D) (None, 3, 3, 384) 442368 ['activation_90[0][0]']
average_pooling2d_8 (AveragePo (None, 3, 3, 2048) 0 ['mixed9[0][0]']
oling2D)
conv2d_85 (Conv2D) (None, 3, 3, 320) 655360 ['mixed9[0][0]']
batch_normalization_87 (BatchN (None, 3, 3, 384) 1152 ['conv2d_87[0][0]']
ormalization)
batch_normalization_88 (BatchN (None, 3, 3, 384) 1152 ['conv2d_88[0][0]']
ormalization)
batch_normalization_91 (BatchN (None, 3, 3, 384) 1152 ['conv2d_91[0][0]']
ormalization)
batch_normalization_92 (BatchN (None, 3, 3, 384) 1152 ['conv2d_92[0][0]']
ormalization)
conv2d_93 (Conv2D) (None, 3, 3, 192) 393216 ['average_pooling2d_8[0][0]']
batch_normalization_85 (BatchN (None, 3, 3, 320) 960 ['conv2d_85[0][0]']
ormalization)
activation_87 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_87[0][0]']
activation_88 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_88[0][0]']
activation_91 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_91[0][0]']
activation_92 (Activation) (None, 3, 3, 384) 0 ['batch_normalization_92[0][0]']
batch_normalization_93 (BatchN (None, 3, 3, 192) 576 ['conv2d_93[0][0]']
ormalization)
activation_85 (Activation) (None, 3, 3, 320) 0 ['batch_normalization_85[0][0]']
mixed9_1 (Concatenate) (None, 3, 3, 768) 0 ['activation_87[0][0]',
'activation_88[0][0]']
concatenate_1 (Concatenate) (None, 3, 3, 768) 0 ['activation_91[0][0]',
'activation_92[0][0]']
activation_93 (Activation) (None, 3, 3, 192) 0 ['batch_normalization_93[0][0]']
mixed10 (Concatenate) (None, 3, 3, 2048) 0 ['activation_85[0][0]',
'mixed9_1[0][0]',
'concatenate_1[0][0]',
'activation_93[0][0]']
==================================================================================================
Total params: 21,802,784
Trainable params: 0
Non-trainable params: 21,802,784
__________________________________________________________________________________________________
###Markdown
**Expected Output:**```batch_normalization_v1_281 (Bat (None, 3, 3, 192) 576 conv2d_281[0][0] __________________________________________________________________________________________________activation_273 (Activation) (None, 3, 3, 320) 0 batch_normalization_v1_273[0][0] __________________________________________________________________________________________________mixed9_1 (Concatenate) (None, 3, 3, 768) 0 activation_275[0][0] activation_276[0][0] __________________________________________________________________________________________________concatenate_5 (Concatenate) (None, 3, 3, 768) 0 activation_279[0][0] activation_280[0][0] __________________________________________________________________________________________________activation_281 (Activation) (None, 3, 3, 192) 0 batch_normalization_v1_281[0][0] __________________________________________________________________________________________________mixed10 (Concatenate) (None, 3, 3, 2048) 0 activation_273[0][0] mixed9_1[0][0] concatenate_5[0][0] activation_281[0][0] ==================================================================================================Total params: 21,802,784Trainable params: 0Non-trainable params: 21,802,784``` To check that all the layers in the model were set to be non-trainable, you can also run the cell below:
###Code
total_params = pre_trained_model.count_params()
num_trainable_params = sum([w.shape.num_elements() for w in pre_trained_model.trainable_weights])
print(f"There are {total_params:,} total parameters in this model.")
print(f"There are {num_trainable_params:,} trainable parameters in this model.")
###Output
There are 21,802,784 total parameters in this model.
There are 0 trainable parameters in this model.
###Markdown
**Expected Output:**```There are 21,802,784 total parameters in this model.There are 0 trainable parameters in this model.``` Creating callbacks for later You have already worked with callbacks in the first course of this specialization, so the callback that stops training once an accuracy of 99.9% is reached is provided for you:
###Code
# Define a Callback class that stops training once accuracy reaches 99.9%
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('accuracy')>0.999):
print("\nReached 99.9% accuracy so cancelling training!")
self.model.stop_training = True
###Output
_____no_output_____
###Markdown
Pipelining the pre-trained model with your own Now that the pre-trained model is ready, you need to "glue" it to your own model to solve the task at hand. For this you will need the last output of the pre-trained model, since this will be the input for your own. Complete the `output_of_last_layer` function below. **Note:** For grading purposes use the `mixed7` layer as the last layer of the pre-trained model. However, after submitting feel free to come back here and play around with this.
###Code
# GRADED FUNCTION: output_of_last_layer
def output_of_last_layer(pre_trained_model):
### START CODE HERE
last_desired_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_desired_layer.output_shape)
last_output = last_desired_layer.output
print('last layer output: ', last_output)
### END CODE HERE
return last_output
###Output
_____no_output_____
###Markdown
Check that everything works as expected:
###Code
last_output = output_of_last_layer(pre_trained_model)
###Output
last layer output shape: (None, 7, 7, 768)
last layer output: KerasTensor(type_spec=TensorSpec(shape=(None, 7, 7, 768), dtype=tf.float32, name=None), name='mixed7/concat:0', description="created by layer 'mixed7'")
###Markdown
**Expected Output (if `mixed7` layer was used):**```last layer output shape: (None, 7, 7, 768)last layer output: KerasTensor(type_spec=TensorSpec(shape=(None, 7, 7, 768), dtype=tf.float32, name=None), name='mixed7/concat:0', description="created by layer 'mixed7'")``` Now you will create the final model by adding some additional layers on top of the pre-trained model. Complete the `create_final_model` function below. You will need to use Tensorflow's [Functional API](https://www.tensorflow.org/guide/keras/functional) for this since the pretrained model has been created using it. Let's double check this first:
###Code
# Print the type of the pre-trained model
print(f"The pretrained model has type: {type(pre_trained_model)}")
###Output
The pretrained model has type: <class 'keras.engine.functional.Functional'>
###Markdown
To create the final model, you will use Keras' Model class by defining the appropriate inputs and outputs as described in the first way to instantiate a Model in the [docs](https://www.tensorflow.org/api_docs/python/tf/keras/Model). Note that you can get the input from any existing model by using its `input` attribute, and by using the Functional API you can use the last layer directly as output when creating the final model.
###Code
# GRADED FUNCTION: create_final_model
def create_final_model(pre_trained_model, last_output):
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
### START CODE HERE
# Add a fully connected layer with 1024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)
# Create the complete model by using the Model class
model = Model(inputs=pre_trained_model.input, outputs=x)
# Compile the model
model.compile(optimizer = RMSprop(learning_rate=0.0001),
loss = 'binary_crossentropy',
metrics = ['accuracy'])
### END CODE HERE
return model
# Save your model in a variable
model = create_final_model(pre_trained_model, last_output)
# Inspect parameters
total_params = model.count_params()
num_trainable_params = sum([w.shape.num_elements() for w in model.trainable_weights])
print(f"There are {total_params:,} total parameters in this model.")
print(f"There are {num_trainable_params:,} trainable parameters in this model.")
###Output
There are 47,512,481 total parameters in this model.
There are 38,537,217 trainable parameters in this model.
###Markdown
**Expected Output:**```There are 47,512,481 total parameters in this model.There are 38,537,217 trainable parameters in this model.``` Wow, that is a lot of parameters! After submitting your assignment later, try re-running this notebook using the original resolution of 300x300; you will be surprised to see how many more parameters there are in that case. Now train the model:
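As an optional aside before training (the graded training cell follows right after), here is a rough sketch of one way to compare parameter counts at the native 300x300 resolution without re-running the whole notebook. It simply reuses `InceptionV3`, `local_weights_file`, and `create_final_model` from the cells above; everything in it is illustrative and not part of the graded assignment:
```python
# Illustrative only: rebuild the pipeline at 300x300 to compare parameter counts.
pre_trained_300 = InceptionV3(input_shape=(300, 300, 3), include_top=False, weights=None)
pre_trained_300.load_weights(local_weights_file)
for layer in pre_trained_300.layers:
    layer.trainable = False

model_300 = create_final_model(pre_trained_300, pre_trained_300.get_layer('mixed7').output)
print(f"At 300x300 there are {model_300.count_params():,} total parameters.")
```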
###Code
# Run this and see how many epochs it should take before the callback
# fires, and stops training at 99.9% accuracy
# (It should take a few epochs)
callbacks = myCallback()
history = model.fit(train_generator,
validation_data = validation_generator,
epochs = 100,
verbose = 2,
callbacks=callbacks)
###Output
Epoch 1/100
33/33 - 23s - loss: 0.0304 - accuracy: 0.9815 - val_loss: 0.0351 - val_accuracy: 0.9766 - 23s/epoch - 692ms/step
Epoch 2/100
Reached 99.9% accuracy so cancelling training!
33/33 - 8s - loss: 5.0943e-05 - accuracy: 1.0000 - val_loss: 0.0309 - val_accuracy: 0.9805 - 8s/epoch - 256ms/step
###Markdown
The training should have stopped after less than 10 epochs and it should have reached an accuracy over 99.9% (firing the callback). This happened so quickly because of the pre-trained model you used, which already contained information to classify humans from horses. Really cool! Now take a quick look at the training and validation accuracies for each epoch of training:
###Code
# Plot the training and validation accuracies for each epoch
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
###Output
_____no_output_____ |
notebook/Tutorial-BSSN_constraints.ipynb | ###Markdown
[BSSN](http://www2.yukawa.kyoto-u.ac.jp/~yuichiro.sekiguchi/3+1.pdf) Hamiltonian and momentum constraint equations, in ***curvilinear*** coordinates, using a covariant reference metric approach: C code generation Authors: Ian Ruchlin & Zach Etienne Formatting improvements courtesy Brandon Clark This module constructs the BSSN Hamiltonian and momentum constraint equations as symbolic (SymPy) expressions, in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632). This module implements a generic curvilinear coordinate reference metric approach matching that of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658), which is an extension of the spherical coordinate reference metric approach of [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632), which builds upon the covariant "Lagrangian" BSSN formalism of [Brown (2009)](https://arxiv.org/abs/0902.3652). *See also citations within each article.***Module Status:** Validated **Validation Notes:** All expressions generated in this module have been validated against a trusted code where applicable (the original NRPy+/SENR code, which itself was validated against [Baumgarte's code](https://arxiv.org/abs/1211.6632)). NRPy+ Source Code for this module: [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py)[comment]: (Introduction: TODO) Table of Contents$$\label{toc}$$This notebook is organized as follows1. [Step 1](initializenrpy): Initialize needed Python/NRPy+ modules1. [Step 2](hamiltonianconstraint): Construct the Hamiltonian constraint $\mathcal{H}$.1. [Step 3](momentumconstraint): Construct the momentum constraint $\mathcal{M}^i$.1. [Step 4](code_validation): Code Validation against `BSSN.BSSN_constraints` NRPy+ module1. [Step 5](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$We start by loading the needed modules. Notably, this module depends on several quantities defined in the [BSSN/BSSN_quantities.py](../edit/BSSN/BSSN_quantities.py) Python code, documented in the NRPy+ [BSSN quantities](Tutorial-BSSN_quantities.ipynb). In [Step 2](hamiltonianconstraint) we call functions within [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) to define quantities needed in this module.
###Code
# Step 1: Initialize needed Python/NRPy+ modules
import sympy as sp
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
import BSSN.BSSN_quantities as Bq
# Step 1.a: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
###Output
_____no_output_____
###Markdown
Step 2: $\mathcal{H}$, the Hamiltonian constraint \[Back to [top](toc)\]$$\label{hamiltonianconstraint}$$Next we define the Hamiltonian constraint. Eq. 13 of [Baumgarte *et al.*](https://arxiv.org/pdf/1211.6632.pdf) yields:$$\mathcal{H} = {\underbrace {\textstyle \frac{2}{3} K^2}_{\rm Term\ 1}} - {\underbrace {\textstyle \bar{A}_{ij} \bar{A}^{ij}}_{\rm Term\ 2}} + {\underbrace {\textstyle e^{-4\phi} \left(\bar{R} - 8 \bar{D}^i \phi \bar{D}_i \phi - 8 \bar{D}^2 \phi\right)}_{\rm Term\ 3}}$$
###Code
# Step 2: The Hamiltonian constraint.
# First declare all needed variables
Bq.declare_BSSN_gridfunctions_if_not_declared_already() # Sets trK
Bq.BSSN_basic_tensors() # Sets AbarDD
Bq.gammabar__inverse_and_derivs() # Sets gammabarUU
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD() # Sets AbarUU and AbarDD_dD
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU() # Sets RbarDD
Bq.phi_and_derivs() # Sets phi_dBarD & phi_dBarDD
# Term 1: 2/3 K^2
H = sp.Rational(2,3)*Bq.trK**2
# Term 2: -A_{ij} A^{ij}
for i in range(DIM):
for j in range(DIM):
H += -Bq.AbarDD[i][j]*Bq.AbarUU[i][j]
# Term 3a: trace(Rbar)
Rbartrace = sp.sympify(0)
for i in range(DIM):
for j in range(DIM):
Rbartrace += Bq.gammabarUU[i][j]*Bq.RbarDD[i][j]
# Term 3b: -8 \bar{\gamma}^{ij} \bar{D}_i \phi \bar{D}_j \phi = -8*phi_dBar_times_phi_dBar
# Term 3c: -8 \bar{\gamma}^{ij} \bar{D}_i \bar{D}_j \phi = -8*phi_dBarDD_contraction
phi_dBar_times_phi_dBar = sp.sympify(0) # Term 3b
phi_dBarDD_contraction = sp.sympify(0) # Term 3c
for i in range(DIM):
for j in range(DIM):
phi_dBar_times_phi_dBar += Bq.gammabarUU[i][j]*Bq.phi_dBarD[i]*Bq.phi_dBarD[j]
phi_dBarDD_contraction += Bq.gammabarUU[i][j]*Bq.phi_dBarDD[i][j]
# Add Term 3:
H += Bq.exp_m4phi*(Rbartrace - 8*(phi_dBar_times_phi_dBar + phi_dBarDD_contraction))
###Output
_____no_output_____
###Markdown
Step 3: $\mathcal{M}^i$, the momentum constraint \[Back to [top](#toc)\]$$\label{momentumconstraint}$$***Courtesy Ian Ruchlin***The following definition of the momentum constraint is a simplification of Eq. 47 of [Ruchlin, Etienne, & Baumgarte (2018)](https://arxiv.org/pdf/1712.07658.pdf), which itself was a corrected version of the momentum constraint presented in Eq. 14 of [Baumgarte *et al.*](https://arxiv.org/pdf/1211.6632.pdf).Start with the physical momentum constraint$$\mathcal{M}^{i} \equiv D_{j} \left ( K^{i j} - \gamma^{i j} K \right ) = 0 \; .$$Expanding and using metric compatibility with the physical covariant derivative $D_{i}$ yields$$\mathcal{M}^{i} = D_{j} K^{i j} - \gamma^{i j} \partial_{j} K \; .$$The physical extrinsic curvature $K_{i j}$ is related to the trace-free extrinsic curvature $A_{i j}$ by$$K_{i j} = A_{i j} + \frac{1}{3} \gamma_{i j} K \; .$$Thus,$$\mathcal{M}^{i} = D_{j} A^{i j} - \frac{2}{3} \gamma^{i j} \partial_{j} K \; .$$The physical metric $\gamma_{i j}$ is related to the conformal metric $\bar{\gamma}_{i j}$ by the conformal rescaling$$\gamma_{i j} = e^{4 \phi} \bar{\gamma}_{i j} \; ,$$and similarly for the trace-free extrinsic curvature$$A_{i j} = e^{4 \phi} \bar{A}_{i j} \; .$$It can be shown (Eq. (3.34) in Baumgarte & Shapiro (2010) with $\alpha = -4$ and $\psi = e^{\phi}$) that the physical and conformal covariant derivatives obey$$D_{j} A^{i j} = e^{-10 \phi} \bar{D}_{j} \left (e^{6 \phi} \bar{A}^{i j} \right ) \; .$$Then, the constraint becomes$$\mathcal{M}^i = e^{-4\phi} \left({\underbrace {\textstyle \bar{D}_j \bar{A}^{ij}}_{\rm Term\ 1}} + {\underbrace {\textstyle 6 \bar{A}^{ij}\partial_j \phi}_{\rm Term\ 2}} - {\underbrace {\textstyle \frac{2}{3} \bar{\gamma}^{ij}\partial_j K}_{\rm Term\ 3}}\right) \; .$$Let's first implement Terms 2 and 3:
###Code
# Step 3: M^i, the momentum constraint
MU = ixp.zerorank1()
# Term 2: 6 A^{ij} \partial_j \phi:
for i in range(DIM):
for j in range(DIM):
MU[i] += 6*Bq.AbarUU[i][j]*Bq.phi_dD[j]
# Term 3: -2/3 \bar{\gamma}^{ij} K_{,j}
trK_dD = ixp.declarerank1("trK_dD") # Not defined in BSSN_RHSs; only trK_dupD is defined there.
for i in range(DIM):
for j in range(DIM):
MU[i] += -sp.Rational(2,3)*Bq.gammabarUU[i][j]*trK_dD[j]
###Output
_____no_output_____
###Markdown
Now, we turn our attention to Term 1. The covariant divergence involves upper indices in $\bar{A}^{i j}$, but it would be easier for us to finite difference the rescaled $\bar{A}_{i j}$. A simple application of the inverse conformal metric yields$$\bar{D}_{j} \bar{A}^{i j} = \bar{\gamma}^{i k} \bar{\gamma}^{j l} \bar{D}_{j} \bar{A}_{k l} \; .$$As usual, the covariant derivative is related to the ordinary derivative using the conformal Christoffel symbols$$\bar{D}_{k} \bar{A}_{i j} = \partial_{k} \bar{A}_{i j} - \bar{\Gamma}^{l}_{k i} \bar{A}_{l j} - \bar{\Gamma}^{l}_{k j} \bar{A}_{i l} \; .$$It is the ordinary derivative above that is approximated by finite difference. The BSSN formulation used here does not rely on spatial derivatives $\partial_{k} \bar{A}_{i j}$ in any of the right-hand-sides (except for the advection term, which uses the upwinded derivative), and so we must declare a new ordinary, centered stencil derivative field of rank 3.
###Code
# First define aDD_dD:
aDD_dD = ixp.declarerank3("aDD_dD","sym01")
# Then evaluate the conformal covariant derivative \bar{D}_j \bar{A}_{lm}
AbarDD_dBarD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dBarD[i][j][k] = Bq.AbarDD_dD[i][j][k]
for l in range(DIM):
AbarDD_dBarD[i][j][k] += -Bq.GammabarUDD[l][k][i]*Bq.AbarDD[l][j]
AbarDD_dBarD[i][j][k] += -Bq.GammabarUDD[l][k][j]*Bq.AbarDD[i][l]
# Term 1: Contract twice with the metric to make \bar{D}_{j} \bar{A}^{ij}
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
MU[i] += Bq.gammabarUU[i][k]*Bq.gammabarUU[j][l]*AbarDD_dBarD[k][l][j]
# Finally, we multiply by e^{-4 phi} and rescale the momentum constraint:
for i in range(DIM):
MU[i] *= Bq.exp_m4phi / rfm.ReU[i]
###Output
_____no_output_____
###Markdown
Step 4: Code Validation against `BSSN.BSSN_constraints` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$Here, as a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN equations between1. this tutorial and 2. the NRPy+ [BSSN.BSSN_constraints](../edit/BSSN/BSSN_constraints.py) module.By default, we analyze these expressions in Spherical coordinates, though other coordinate systems may be chosen.
###Code
# Step 4: Code Validation against BSSN.BSSN_constraints NRPy+ module
# We already have SymPy expressions for BSSN constraints
# in terms of other SymPy variables. Even if we reset the
# list of NRPy+ gridfunctions, these *SymPy* expressions for
# BSSN constraint variables *will remain unaffected*.
#
# Here, we will use the above-defined BSSN constraint expressions
# to validate against the same expressions in the
# BSSN/BSSN_constraints.py file, to ensure consistency between
# this tutorial and the module itself.
#
# Reset the list of gridfunctions, as registering a gridfunction
# twice (in the bssncon.BSSN_constraints() call below) will spawn an error.
gri.glb_gridfcs_list = []
# Call the BSSN_constraints() function from within the
# BSSN/BSSN_constraints.py module,
# which should do exactly the same as in the steps above.
import BSSN.BSSN_constraints as bssncon
bssncon.BSSN_constraints()
print("Consistency check between BSSN_constraints tutorial and NRPy+ module: ALL SHOULD BE ZERO.")
print("H - bssncon.H = " + str(H - bssncon.H))
for i in range(DIM):
print("MU["+str(i)+"] - bssncon.MU["+str(i)+"] = " + str(MU[i] - bssncon.MU[i]))
###Output
Consistency check between BSSN_constraints tutorial and NRPy+ module: ALL SHOULD BE ZERO.
H - bssncon.H = 0
MU[0] - bssncon.MU[0] = 0
MU[1] - bssncon.MU[1] = 0
MU[2] - bssncon.MU[2] = 0
###Markdown
Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-BSSN_constraints.pdf](Tutorial-BSSN_constraints.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-BSSN_constraints.ipynb
!pdflatex -interaction=batchmode Tutorial-BSSN_constraints.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_constraints.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_constraints.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-BSSN_constraints.ipynb to latex
[NbConvertApp] Writing 47428 bytes to Tutorial-BSSN_constraints.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
|
pyspark-ml-crashcourse/notebooks/01 - PySpark DataFrame Introduction Skeleton.ipynb | ###Markdown
1 Creating a DataFrameFirst, let's create some DataFrame from Python objects. While this is probably not the most common thing to do, it is easy and helpful in some situations where you already have some Python objects.
###Code
# YOUR CODE HERE
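# One possible solution (a sketch, not the official course answer): build a
# DataFrame from a plain Python list of tuples. The variable name `df`, the
# sample records and the column names are illustrative assumptions.
df = spark.createDataFrame(
    [("Alice", 23, 156.0), ("Bob", 31, 178.0)],
    ["name", "age", "height"]
)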
###Output
_____no_output_____
###Markdown
1.1 Inspect SchemaThe `spark` object has different methods for creating a so-called Spark DataFrame object. This object is similar to a table: it contains rows of records, which all conform to a common schema with named columns and specific types. On the surface it heavily borrows concepts from Pandas DataFrames or R DataFrames, although the syntax and many operations are very different.As the first step, we want to see the contents of the DataFrame. This can be easily done by using the show method.
###Code
# YOUR CODE HERE
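# For example (assuming the DataFrame `df` created in the previous cell):
df.show()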
###Output
_____no_output_____
###Markdown
2 Reading DataOf course manually creating DataFrames from a couple of records is not the real use case. Instead we want to read data from files. Spark supports various file formats; we will use JSON in the following example.The entry point for creating Spark objects is an object called `spark`, which is provided in the notebook and ready to use. We will read a file containing some information on a couple of persons, which will serve as the basis for the next examples.
###Code
# YOUR CODE HERE
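# A sketch of one possible solution; the file location is an assumption and
# depends on where the course data lives.
persons = spark.read.json("data/persons.json")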
persons.collect()
###Output
_____no_output_____
###Markdown
2.1 Inspecting a DataFrameSpark supports various methods for inspecting both the contents and the schema of a DataFrame
###Code
# YOUR CODE HERE
# YOUR CODE HERE
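# For example: print a few rows and the inferred schema
persons.show()
persons.printSchema()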
###Output
_____no_output_____
###Markdown
Pandas InteroperabilitySpark also supports interoperation with Python Pandas, the standard library for modelling tabular data.
###Code
# YOUR CODE HERE
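# For example: convert the (small!) Spark DataFrame into a Pandas DataFrame
persons.toPandas()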
###Output
_____no_output_____
###Markdown
2.2 Loading CSV DataOf course Spark also supports reading CSV data. CSV files may optionally contain a header containing the column names.
###Code
# YOUR CODE HERE
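# A sketch of one possible solution; the path is an assumption. The header
# option tells Spark to take the column names from the first line, and
# inferSchema lets it guess the column types.
df_csv = spark.read \
    .option("header", True) \
    .option("inferSchema", True) \
    .csv("data/persons.csv")
df_csv.show()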
###Output
_____no_output_____
###Markdown
3 Simple Transformations 3.1 ProjectionsThe simplest thing to do is to create a new DataFrame with a subset of the available columns
###Code
from pyspark.sql.functions import *
# YOUR CODE HERE
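# For example: keep only the name and age columns
result = persons.select(persons.name, persons.age)
result.toPandas()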
###Output
_____no_output_____
###Markdown
3.2 Addressing ColumnsSpark supports multiple different ways for addressing a column. We just saw one way, but also the following methods are supported for specifying a column:* `df.column_name`* `df['column_name']`* `col('column_name')`All these methods return a Column object, which is an abstract representative of the data in the column. As we will see soon, transformations can be applied to a Column in order to derive new values. Beware of Lowercase and UppercaseWhile PySpark itself is case insensitive concerning column names, Python itself is case sensitive. Since the first method for addressing columns by treating them as fields of a Python object *is* Python syntax, this is also case sensitive!
###Code
from pyspark.sql.functions import *
# YOUR CODE HERE
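# For example, all three ways of addressing columns in a single select:
result = persons.select(persons.name, persons['age'], col('height'))
result.toPandas()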
###Output
_____no_output_____
###Markdown
3.3 Transformations The `select` method actually accepts any column object. A column object conceptually represents a column in a DataFrame. The column may either refer directly to an existing column of the input DataFrame, or it may represent the result of a calculation or transformation of one or multiple columns of the input DataFrame. For example if we simply want to transform the name into upper case, we can do so by using a function `upper` provided by PySpark.
###Code
# YOUR CODE HERE
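# For example: convert the name column to upper case
result = persons.select(upper(persons.name))
result.toPandas()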
###Output
_____no_output_____
###Markdown
Defining new Column NamesThe resulting DataFrame again has a schema, but the column names do not look very nice. But by using the `alias` method of a `Column` object, you can immediately rename the newly created column, just like you are already used to in SQL with `SELECT complex_operation(...) AS nice_name FROM ...`. Technically specifying a new name for the resulting column is not required (as we already saw above); if the name is not specified, PySpark will generate a name from the expression. But since this generated name tends to be rather long and contains the logic instead of the intention, it is highly recommended to always explicitly specify the name of the resulting column using `alias`.
###Code
# Result should be "Alice is 23 years old"
result = persons.select(
concat(persons.name, lit(" is "), persons.age, lit(" years old")).alias("description")
)
result.toPandas()
###Output
_____no_output_____
###Markdown
You can also perform simple mathematical calculations like addition, multiplication etc.
###Code
result = persons.select((persons.age * 2).alias("age2"))
result.toPandas()
###Output
_____no_output_____
###Markdown
Common FunctionsYou can find the full list of available functions at [PySpark SQL Module](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.functions). Commonly used functions for example are as follows:* [`concat(*cols)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.concat) - Concatenates multiple input columns together into a single column.* [`substring(col,start,len)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.substring) - Substring starts at pos and is of length len when str is String type, or returns the slice of byte array that starts at pos in byte and is of length len when str is Binary type.* [`instr(col,substr)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.instr) - Locate the position of the first occurrence of substr column in the given string. Returns null if either of the arguments are null.* [`locate(col,substr, pos)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.locate) - Locate the position of the first occurrence of substr in a string column, after position pos.* [`length(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.length) - Computes the character length of string data or number of bytes of binary data. * [`upper(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.upper) - Converts a string column to upper case.* [`lower(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.lower) - Converts a string column to lower case.* [`coalesce(*cols)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.coalesce) - Returns the first column that is not null.* [`isnull(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.isnull) - An expression that returns true iff the column is null.* [`isnan(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.isnan) - An expression that returns true iff the column is NaN.* [`hash(cols*)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.hash) - Calculates the hash code of given columns.Spark also supports conditional expressions, like the SQL `CASE WHEN` construct* [`when(condition, value)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.when) - Evaluates a list of conditions and returns one of multiple possible result expressions.There are also some special functions often required* [`col(str)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.col) - Returns a Column based on the given column name.* [`lit(val)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.lit) - Creates a Column of literal value.* [`expr(str)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.expr) - Parses the expression string into the column that it represents User Defined FunctionsUnfortunately you cannot directly use normal Python functions for transforming DataFrame columns. Although PySpark already provides many useful functions, these might not always be sufficient. But fortunately you can *convert* a standard Python function into a PySpark function, thereby defining a so-called *user defined function* (UDF). Details will be explained later in the training. 
3.4 Adding ColumnsA special variant of a `select` statement is the `withColumn` method. While the `select` statement requires all resulting columns to be defined as arguments, the `withColumn` method keeps all existing columns and adds a new one. This operation is quite useful since in many cases new columns are derived from the existing ones, while the old ones should still be contained in the result.Let us have a look at a simple example, which only adds the salutation as a new column:
###Code
# YOUR CODE HERE
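# One possible solution (a sketch): derive a salutation column from the name.
# The exact salutation logic is an illustrative assumption.
result = persons.withColumn("salutation", concat(lit("Hello "), persons.name))
result.toPandas()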
###Output
_____no_output_____
###Markdown
As you can see from the example above, `withColumn` always takes two arguments: The first one is the name of the new column (and it has to be a string), and the second argument is the expression containing the logic for calculating the actual contents. 3.5 Dropping a ColumnPySpark also supports the opposite operation which simply removes some columns from a dataframe. This is useful if you need to remove some sensitive data before saving it to disk:
###Code
# YOUR CODE HERE
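# For example: remove the name column before saving
result = persons.drop("name")
result.toPandas()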
###Output
_____no_output_____
###Markdown
4 Filtering*Filtering* denotes the process of keeping only rows which meet certain filter criteria. 4.1 Simple `WHERE` clausesPySpark supports two different approaches. The first approach specifies the filtering expression as a PySpark expression using columns:
###Code
result = # YOUR CODE HERE
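# For example: result = persons.where(persons.age > 22)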
result.toPandas()
###Output
_____no_output_____
###Markdown
You can also specify multiple conditions and combine them via logical operators (`&` and `|`).
###Code
result = persons.where((persons.age > 22) & (persons.height > 160))
result.toPandas()
###Output
_____no_output_____
###Markdown
The second approach simply uses a string containing an SQL expression:
###Code
result = # YOUR CODE HERE
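# For example: result = persons.where("age > 22")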
result.toPandas()
###Output
_____no_output_____
###Markdown
4.2 Limit OperationsWhen working with large datasets, it may be helpful to limit the number of records (like an SQL `LIMIT` operation).
###Code
result = # YOUR CODE HERE
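# For example: result = persons.limit(2)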
result.toPandas()
###Output
_____no_output_____
###Markdown
5 Simple AggregationsPySpark supports simple global aggregations, like `COUNT`, `MAX`, `MIN` etc...
###Code
# YOUR CODE HERE
result = # YOUR CODE HERE
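# For example: result = persons.select(count('*').alias("count"), min(persons.age), max(persons.age))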
result.toPandas()
###Output
_____no_output_____
###Markdown
6 Grouping & AggregatingAn important class of operation is grouping and aggregation, which is equivalent to an SQL `SELECT aggregation GROUP BY grouping` statement. In PySpark, grouping and aggregation is always performed by first creating groups using `groupBy`, immediately followed by aggregation expressions inside an `agg` method. (Actually there are also some predefined aggregations which can be used instead of `agg`, but they do not offer the flexibility which is required most of the time.)Note that in the `agg` method you only need to specify the aggregation expression; the grouping columns are added automatically by PySpark to the resulting DataFrame.
###Code
# YOUR CODE HERE
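# One possible solution (a sketch; the grouping column and the aggregation
# are illustrative assumptions):
result = persons.groupBy(persons.age).agg(avg(persons.height).alias("avg_height"))
result.toPandas()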
###Output
_____no_output_____
###Markdown
Aggregation FunctionsPySpark supports many aggregation functions; they can be found in the documentation at [PySpark Function Documentation](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.functions). Aggregation functions are marked as such in the documentation; unfortunately there is no simple overview. Among common aggregation functions, there are for example:* count* sum* avg* corr* first* last 7 Sorting DataFramesYou can sort the entries (= rows) of a DataFrame by an arbitrary column or expression.
###Code
result = # YOUR CODE HERE
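# For example: result = persons.orderBy(persons.age)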
result.toPandas()
###Output
_____no_output_____
###Markdown
If nothing else is specified, PySpark will sort the records in increasing order of the sort columns. If you require descending order, this can be specified by manipulating the sort column with the `desc()` method as follows:
###Code
result = # YOUR CODE HERE
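# For example: result = persons.orderBy(persons.age.desc())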
result.toPandas()
###Output
_____no_output_____
###Markdown
User Defined FunctionsSometimes the built-in functions do not suffice or you want to call an existing function of a Python library. Using User Defined Functions (UDF) it is possible to wrap an existing function into a Spark DataFrame function.
###Code
import html
from pyspark.sql.types import *
html_encode = # YOUR CODE HERE
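# For example: html_encode = udf(html.escape, StringType())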
df = spark.createDataFrame([
("Alice & Bob",),
("Thelma & Louise",)
], ["name"])
result = # YOUR CODE HERE
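# For example: result = df.select(html_encode(df.name).alias("html_name"))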
result.toPandas()
###Output
_____no_output_____
GEE/jupyer_notebooks/.ipynb_checkpoints/combopreprocess-checkpoint.ipynb | ###Markdown
Cloud masking with Sentinel 2 Cloud and cloud shadow masking of Sentinel 2 images in Python. Refactored from javascript taken from this thread: [Sentinel 2 cloud masking](https://groups.google.com/forum/#!searchin/google-earth-engine-developers/cloud$20masking%7Csort:relevance/google-earth-engine-developers/i63DS-Dg8Sg/Kc0knF9BBgAJ)
###Code
from IPython.display import display, Image
import math
import ee
import os
import sys
from Py6S import *
import datetime
sys.path.append(os.path.join(os.path.dirname(os.getcwd()),'bin'))
from atmospheric import Atmospheric
#from cloudmasker import *
ee.Initialize()
def rescale(img, thresholds):
"""
Linear stretch of image between two threshold values.
"""
return img.subtract(thresholds[0]).divide(thresholds[1] - thresholds[0])
def ESAcloudMask(img):
"""
European Space Agency (ESA) clouds from 'QA60', i.e. Quality Assessment band at 60m
parsed by Nick Clinton
"""
qa = img.select('QA60')
# bits 10 and 11 are clouds and cirrus
cloudBitMask = int(2**10)
cirrusBitMask = int(2**11)
# both flags set to zero indicates clear conditions.
clear = qa.bitwiseAnd(cloudBitMask).eq(0).And(\
qa.bitwiseAnd(cirrusBitMask).eq(0))
# clouds is not clear
cloud = clear.Not().rename(['ESA_clouds'])
# return the masked and scaled data.
return img.addBands(cloud)
def shadowMask(img,cloudMaskType):
"""
Finds cloud shadows in images
Originally by Gennadii Donchyts, adapted by Ian Housman
"""
def potentialShadow(cloudHeight):
"""
Finds potential shadow areas from array of cloud heights
returns an image stack (i.e. list of images)
"""
cloudHeight = ee.Number(cloudHeight)
# shadow vector length
shadowVector = zenith.tan().multiply(cloudHeight)
# x and y components of shadow vector length
x = azimuth.cos().multiply(shadowVector).divide(nominalScale).round()
y = azimuth.sin().multiply(shadowVector).divide(nominalScale).round()
# affine translation of clouds
cloudShift = cloudMask.changeProj(cloudMask.projection(), cloudMask.projection().translate(x, y)) # could incorporate shadow stretch?
return cloudShift
# select a cloud mask
cloudMask = img.select(cloudMaskType)
# make sure it is binary (i.e. apply threshold to cloud score)
cloudScoreThreshold = 0.5
cloudMask = cloudMask.gt(cloudScoreThreshold)
# solar geometry (radians)
azimuth = ee.Number(img.get('solar_azimuth')).multiply(math.pi).divide(180.0).add(ee.Number(0.5).multiply(math.pi))
zenith = ee.Number(0.5).multiply(math.pi ).subtract(ee.Number(img.get('solar_zenith')).multiply(math.pi).divide(180.0))
# find potential shadow areas based on cloud and solar geometry
nominalScale = cloudMask.projection().nominalScale()
cloudHeights = ee.List.sequence(500,4000,500)
potentialShadowStack = cloudHeights.map(potentialShadow)
potentialShadow = ee.ImageCollection.fromImages(potentialShadowStack).max()
# shadows are not clouds
potentialShadow = potentialShadow.And(cloudMask.Not())
# (modified) dark pixel detection
darkPixels = toa.normalizedDifference(['green', 'swir2']).gt(0.25)
# shadows are dark
shadows = potentialShadow.And(darkPixels).rename(['shadows'])
# might be scope for one last check here. Dark surfaces (e.g. water, basalt, etc.) cause shadow commission errors.
# perhaps using a NDWI (e.g. green and nir)
return img.addBands(shadows)
def quicklook(bandNames, mn, mx, region, gamma=False, title=False):
"""
Displays images in notebook
"""
if title:
print('\n',title)
if not gamma:
gamma = 1
visual = Image(url=toa.select(bandNames).getThumbUrl({
'region':region,
'min':mn,
'max':mx,
'gamma':gamma,
'title':title
}))
display(visual)
###Output
_____no_output_____
###Markdown
time and placeDefine the time and place that you are looking for.
###Code
# region of interest
geom = ee.Geometry.Point(-155.0844, 19.7189)
# start and end of time series
startDate = ee.Date('1980-01-01')
stopDate = ee.Date('2020-01-01')
###Output
_____no_output_____
###Markdown
an imageThe following code will grab the image collection between those dates, or the first in the time-series.
###Code
# image collection
S2col = ee.ImageCollection('COPERNICUS/S2')\
.filterBounds(geom)\
.filterDate(startDate,stopDate)
# single image
S2 = ee.Image(S2col.first())# The first Sentinel 2 image
# top of atmosphere reflectance
toa = S2.divide(10000)
###Output
_____no_output_____
###Markdown
Detail some extra information
###Code
# Metadata
info = S2.getInfo()['properties']
scene_date = datetime.datetime.utcfromtimestamp(info['system:time_start']/1000)# i.e. Python uses seconds, EE uses milliseconds
solar_z = info['MEAN_SOLAR_ZENITH_ANGLE']
# Atmospheric constituents
# Build an ee.Date from the scene timestamp (the Atmospheric helpers expect an ee.Date)
date = ee.Date(info['system:time_start'])
h2o = Atmospheric.water(geom,date).getInfo()
o3 = Atmospheric.ozone(geom,date).getInfo()
aot = Atmospheric.aerosol(geom,date).getInfo()
# Target Altitude
SRTM = ee.Image('CGIAR/SRTM90_V4')# Shuttle Radar Topography mission covers *most* of the Earth
alt = SRTM.reduceRegion(reducer = ee.Reducer.mean(),geometry = geom.centroid()).get('elevation').getInfo()
km = alt/1000 # i.e. Py6S uses units of kilometers
###Output
_____no_output_____
###Markdown
Create the 6S object
###Code
# Instantiate
s = SixS()
# Atmospheric constituents
s.atmos_profile = AtmosProfile.UserWaterAndOzone(h2o,o3)
s.aero_profile = AeroProfile.Continental
s.aot550 = aot
# Earth-Sun-satellite geometry
s.geometry = Geometry.User()
s.geometry.view_z = 0 # always NADIR (I think..)
s.geometry.solar_z = solar_z # solar zenith angle
s.geometry.month = scene_date.month # month and day used for Earth-Sun distance
s.geometry.day = scene_date.day # month and day used for Earth-Sun distance
s.altitudes.set_sensor_satellite_level()
s.altitudes.set_target_custom_altitude(km)
def shadowMask(img,cloudMaskType):
"""
Finds cloud shadows in images
Originally by Gennadii Donchyts, adapted by Ian Housman
"""
def potentialShadow(cloudHeight):
"""
Finds potential shadow areas from array of cloud heights
returns an image stack (i.e. list of images)
"""
cloudHeight = ee.Number(cloudHeight)
# shadow vector length
shadowVector = zenith.tan().multiply(cloudHeight)
# x and y components of shadow vector length
x = azimuth.cos().multiply(shadowVector).divide(nominalScale).round()
y = azimuth.sin().multiply(shadowVector).divide(nominalScale).round()
# affine translation of clouds
cloudShift = cloudMask.changeProj(cloudMask.projection(), cloudMask.projection().translate(x, y)) # could incorporate shadow stretch?
return cloudShift
# select a cloud mask
cloudMask = img.select(cloudMaskType)
# make sure it is binary (i.e. apply threshold to cloud score)
cloudScoreThreshold = 0.5
cloudMask = cloudMask.gt(cloudScoreThreshold)
# solar geometry (radians)
azimuth = ee.Number(img.get('solar_azimuth')).multiply(math.pi).divide(180.0).add(ee.Number(0.5).multiply(math.pi))
zenith = ee.Number(0.5).multiply(math.pi ).subtract(ee.Number(img.get('solar_zenith')).multiply(math.pi).divide(180.0))
# find potential shadow areas based on cloud and solar geometry
nominalScale = cloudMask.projection().nominalScale()
cloudHeights = ee.List.sequence(500,4000,500)
potentialShadowStack = cloudHeights.map(potentialShadow)
potentialShadow = ee.ImageCollection.fromImages(potentialShadowStack).max()
# shadows are not clouds
potentialShadow = potentialShadow.And(cloudMask.Not())
# (modified) dark pixel detection
darkPixels = toa.normalizedDifference(['green', 'swir2']).gt(0.25)
# shadows are dark
shadows = potentialShadow.And(darkPixels).rename(['shadows'])
# might be scope for one last check here. Dark surfaces (e.g. water, basalt, etc.) cause shadow commission errors.
# perhaps using a NDWI (e.g. green and nir)
return img.addBands(shadows)
# top of atmosphere reflectance: select, rename and rescale the bands of the
# Sentinel-2 image S2 loaded above
toa = S2.select(['B1','B2','B3','B4','B6','B8A','B9','B10', 'B11','B12'],\
              ['aerosol', 'blue', 'green', 'red', 'red2','red4','h2o', 'cirrus','swir1', 'swir2'])\
              .divide(10000).addBands(S2.select('QA60'))\
              .set('solar_azimuth',S2.get('MEAN_SOLAR_AZIMUTH_ANGLE'))\
              .set('solar_zenith',S2.get('MEAN_SOLAR_ZENITH_ANGLE'))
# clouds
# sentinelCloudScore() is provided by the cloudmasker module
# (see the commented-out `from cloudmasker import *` near the top of this notebook)
toa = sentinelCloudScore(toa)
toa = ESAcloudMask(toa)
# cloud shadow
toa = shadowMask(toa,'cloudScore')
# display region
region = geom.buffer(10000).bounds().getInfo()['coordinates']
# quicklooks
quicklook(['red','green','blue'], 0, 0.25, region, gamma=1.5, title='RGB')
quicklook('cloudScore', 0, 1, region, title='Cloud Score')
quicklook('ESA_clouds', 0, 1, region, title = 'ESA Clouds (QA60)')
quicklook('shadows', 0, 1, region, title = 'Shadow mask')
###Output
_____no_output_____ |
scratch_notebooks/ytBaseModelPrototype.ipynb | ###Markdown
a ytBaseModel pydantic class experimentThis notebook subclasses pydantic's `BaseModel` class to create an abstract `ytBaseModel` class that includes some machinery for executing the corresponding yt methods. The `ytBaseModel` class:* uses `inspect.getfullargspec` within `ytBaseModel._run()` to retrieve the expected argument order of the yt method and then calls the yt method using the values in the `ytBaseModel` attributes.* checks if any of the args being passed to the yt call are themselves `ytBaseModel` instances, in which case `ytBaseModel._run()` gets called for that argument.* uses a protected dictionary attribute, `_arg_mapping`, to map any argument names we have changed between yt's internal calls and the pydantic class: `_arg_mapping['yt_name'] -> 'schema_name'`.So here's the base class:
###Code
from pydantic import BaseModel
from inspect import getfullargspec
class ytBaseModel(BaseModel):
_arg_mapping: dict = {} # mapping from internal yt name to schema name
def _run(self):
# this method actually executes the yt code
# first make sure yt is imported and then get our function handle. This assumes
# that our class name exists in yt's top level api.
import yt
func = getattr(yt, type(self).__name__)
print(f"pulled func {func}")
# now we get the arguments for the function:
# func_spec.args, which lists the named arguments and keyword arguments.
# ignoring vargs and kw-only args for now...
# see https://docs.python.org/3/library/inspect.html#inspect.getfullargspec
func_spec = getfullargspec(func)
# the list that we'll use to eventually call our function
the_args = []
# the argument position number at which we have default values (a little hacky, should
# be a better way to do this, and not sure how to scale it to include *args and **kwargs)
n_args = len(func_spec.args) # number of arguments
if func_spec.defaults is None:
# no default args, make sure we never get there...
named_kw_start_at = n_args + 1
else:
# the position at which named keyword args start
named_kw_start_at = n_args - len(func_spec.defaults)
print(f"keywords start at {named_kw_start_at}")
# loop over the call signature arguments and pull out values from our pydantic class .
# this is recursive! will call _run() if a given argument value is also a ytBaseModel.
for arg_i, arg in enumerate(func_spec.args):
# check if we've remapped the yt internal argument name for the schema
if arg in self._arg_mapping:
arg = self._arg_mapping[arg]
# get the value for this argument. If it's not there, attempt to set default values
# for arguments needed for yt but not exposed in our pydantic class
try:
arg_value = getattr(self, arg)
except AttributeError:
if arg_i >= named_kw_start_at:
# we are in the named keyword arguments, grab the default
# the func_spec.defaults tuple 0 index is the first named
# argument, so need to offset the arg_i counter
default_index = arg_i - named_kw_start_at
arg_value = func_spec.defaults[default_index]
else:
raise AttributeError
# check if this argument is itself a ytBaseModel for which we need to run
# this should make this a fully recursive function?
# if hasattr(arg_value,'_run'):
if isinstance(arg_value, ytBaseModel):
print(f"{arg_value} is a ytBaseModel, calling {arg_value}._run() now...")
arg_value = arg_value._run()
the_args.append(arg_value)
print(the_args)
return func(*the_args)
###Output
_____no_output_____
###Markdown
Now we'll create two new classes for `load` and `SlicePlot`:
###Code
class load(ytBaseModel):
filename: str
_arg_mapping: dict = {"fn": "filename"}
class SlicePlot(ytBaseModel):
ds: load = None
normal: str = 'x'
field: tuple = ('all', 'Density')
_arg_mapping: dict = {"fields": "field"}
###Output
_____no_output_____
###Markdown
now let's instantiate some classes:
###Code
ds = load(filename='snapshot_033')
slc = SlicePlot(ds=ds, dim='x',field=("PartType0","Density"))
###Output
_____no_output_____
###Markdown
so these objects are normal pydantic classes:
###Code
ds.schema()
slc.schema()
###Output
_____no_output_____
###Markdown
but now we can use .run() to execute!
###Code
slc._run()
###Output
/home/chavlin/src/yt/yt/utilities/logger.py:4: VisibleDeprecationWarning: The configuration file /home/chavlin/.config/yt/ytrc is deprecated in favor of /home/chavlin/.config/yt/yt.toml. Currently, both are present. Please manually remove the deprecated one to silence this warning.
Deprecated since v4.0.0 . This feature will be removed in v4.1.0
from yt.config import ytcfg
yt : [INFO ] 2021-03-05 12:49:08,648 Parameters: current_time = 4.343952725460923e+17 s
yt : [INFO ] 2021-03-05 12:49:08,648 Parameters: domain_dimensions = [1 1 1]
yt : [INFO ] 2021-03-05 12:49:08,648 Parameters: domain_left_edge = [0. 0. 0.]
yt : [INFO ] 2021-03-05 12:49:08,649 Parameters: domain_right_edge = [25. 25. 25.]
yt : [INFO ] 2021-03-05 12:49:08,649 Parameters: cosmological_simulation = 1
yt : [INFO ] 2021-03-05 12:49:08,649 Parameters: current_redshift = -4.811891664902035e-05
yt : [INFO ] 2021-03-05 12:49:08,650 Parameters: omega_lambda = 0.762
yt : [INFO ] 2021-03-05 12:49:08,651 Parameters: omega_matter = 0.238
yt : [INFO ] 2021-03-05 12:49:08,651 Parameters: omega_radiation = 0.0
yt : [INFO ] 2021-03-05 12:49:08,651 Parameters: hubble_constant = 0.73
yt : [INFO ] 2021-03-05 12:49:08,721 Allocating for 4.194e+06 particles
Loading particle index: 92%|█████████▏| 11/12 [00:00<00:00, 184.73it/s]
###Markdown
a ytBaseModel pydantic class experimentThis notebook subclasses pydantic's `BaseModel` class to create an abstract `ytBaseModel` class that includes some machinery for executing the corresponding yt methods. The `ytBaseModel` class:* uses `inspect.getfullargspec` within `ytBaseModel._run()` to retrieve the expected argument order of the yt method and then calls the yt method using the values in the `ytBaseModel` attributes.* checks if any of the args being passed to the yt call are themselves `ytBaseModel` instances, in which case `ytBaseModel._run()` gets called for that argument.* uses a protected dictionary attribute, `_arg_mapping`, to map any argument names we have changed between yt's internal calls and the pydantic class: `_arg_mapping['yt_name'] -> 'schema_name'`.So here's the base class:
###Code
from pydantic import BaseModel
from inspect import getfullargspec
class ytBaseModel(BaseModel):
_arg_mapping: dict = {} # mapping from internal yt name to schema name
def _run(self):
# this method actually executes the yt code
# first make sure yt is imported and then get our function handle. This assumes
# that our class name exists in yt's top level api.
import yt
func = getattr(yt, type(self).__name__)
print(f"pulled func {func}")
# now we get the arguments for the function:
# func_spec.args, which lists the named arguments and keyword arguments.
# ignoring vargs and kw-only args for now...
# see https://docs.python.org/3/library/inspect.html#inspect.getfullargspec
func_spec = getfullargspec(func)
# the list that we'll use to eventually call our function
the_args = []
# the argument position number at which we have default values (a little hacky, should
# be a better way to do this, and not sure how to scale it to include *args and **kwargs)
n_args = len(func_spec.args) # number of arguments
if func_spec.defaults is None:
# no default args, make sure we never get there...
named_kw_start_at = n_args + 1
else:
# the position at which named keyword args start
named_kw_start_at = n_args - len(func_spec.defaults)
print(f"keywords start at {named_kw_start_at}")
# loop over the call signature arguments and pull out values from our pydantic class .
# this is recursive! will call _run() if a given argument value is also a ytBaseModel.
for arg_i, arg in enumerate(func_spec.args):
# check if we've remapped the yt internal argument name for the schema
if arg in self._arg_mapping:
arg = self._arg_mapping[arg]
# get the value for this argument. If it's not there, attempt to set default values
# for arguments needed for yt but not exposed in our pydantic class
try:
arg_value = getattr(self, arg)
except AttributeError:
if arg_i >= named_kw_start_at:
# we are in the named keyword arguments, grab the default
# the func_spec.defaults tuple 0 index is the first named
# argument, so need to offset the arg_i counter
default_index = arg_i - named_kw_start_at
arg_value = func_spec.defaults[default_index]
else:
raise AttributeError
# check if this argument is itself a ytBaseModel for which we need to run
# this should make this a fully recursive function?
# if hasattr(arg_value,'_run'):
if isinstance(arg_value, ytBaseModel):
print(f"{arg_value} is a ytBaseModel, calling {arg_value}._run() now...")
arg_value = arg_value._run()
the_args.append(arg_value)
print(the_args)
return func(*the_args)
###Output
_____no_output_____
###Markdown
Now we'll create two new classes for `load` and `SlicePlot`:
###Code
class load(ytBaseModel):
filename: str
_arg_mapping: dict = {"fn": "filename"}
class SlicePlot(ytBaseModel):
ds: load = None
normal: str = 'x'
field: tuple = ('all', 'Density')
_arg_mapping: dict = {"fields": "field"}
###Output
_____no_output_____
###Markdown
now let's instantiate some classes:
###Code
ds = load(filename="IsolatedGalaxy/galaxy0030/galaxy0030")
slc = SlicePlot(ds=ds, dim='x',field=("PartType0","Density"))
###Output
_____no_output_____
###Markdown
so these objects are normal pydantic classes:
###Code
ds.schema()
slc.schema()
###Output
_____no_output_____
###Markdown
but now we can use .run() to execute!
###Code
slc._run()
from pydantic import BaseModel
from inspect import getfullargspec
from typing import Optional
class ytBaseModel(BaseModel):
_arg_mapping: dict = {} # mapping from internal yt name to schema name
_yt_operation: Optional[str]
def _run(self):
# this method actually executes the yt code
# first make sure yt is imported and then get our function handle. This assumes
# that our class name exists in yt's top level api.
import yt
print(self._yt_operation)
funcname = getattr(self, "_yt_operation", type(self).__name__ )
func = getattr(yt, funcname)
print(f"pulled func {func}")
# now we get the arguments for the function:
# func_spec.args, which lists the named arguments and keyword arguments.
# ignoring vargs and kw-only args for now...
# see https://docs.python.org/3/library/inspect.html#inspect.getfullargspec
func_spec = getfullargspec(func)
# the list that we'll use to eventually call our function
the_args = []
# the argument position number at which we have default values (a little hacky, should
# be a better way to do this, and not sure how to scale it to include *args and **kwargs)
n_args = len(func_spec.args) # number of arguments
if func_spec.defaults is None:
# no default args, make sure we never get there...
named_kw_start_at = n_args + 1
else:
# the position at which named keyword args start
named_kw_start_at = n_args - len(func_spec.defaults)
print(f"keywords start at {named_kw_start_at}")
# loop over the call signature arguments and pull out values from our pydantic class .
# this is recursive! will call _run() if a given argument value is also a ytBaseModel.
for arg_i, arg in enumerate(func_spec.args):
# check if we've remapped the yt internal argument name for the schema
if arg in self._arg_mapping:
arg = self._arg_mapping[arg]
# get the value for this argument. If it's not there, attempt to set default values
# for arguments needed for yt but not exposed in our pydantic class
print(arg)
try:
arg_value = getattr(self, arg)
except AttributeError:
if arg_i >= named_kw_start_at:
# we are in the named keyword arguments, grab the default
# the func_spec.defaults tuple 0 index is the first named
# argument, so need to offset the arg_i counter
default_index = arg_i - named_kw_start_at
arg_value = func_spec.defaults[default_index]
else:
raise AttributeError
# check if this argument is itself a ytBaseModel for which we need to run
# this should make this a fully recursive function?
# if hasattr(arg_value,'_run'):
if isinstance(arg_value, ytBaseModel) or isinstance(arg_value, ytParameter):
print(f"{arg_value} is a {type(arg_value)}, calling {arg_value}._run() now...")
arg_value = arg_value._run()
the_args.append(arg_value)
print(the_args)
return func(*the_args)
class ytParameter(BaseModel):
_skip_these = ['comments']
def _run(self):
p = [getattr(self,key) for key in self.schema()['properties'].keys() if key not in self._skip_these]
if len(p) > 1:
raise ValueError("whoops. ytParameter instances can only have single values")
return p[0]
class Dataset(ytBaseModel):
"""
The dataset model to load and that will be drawn from for other classes. Filename is the only required field.
"""
filename: str
name: str = "Data for Science"
comments: Optional[str]
grammar: str = "registration"
_yt_operation: str = "load"
_arg_mapping: dict = {'fn' : 'filename'}
class ytModel(ytBaseModel):
'''
An example for a yt analysis schema using Pydantic
'''
Load: Dataset
class Config:
title = 'yt example'
underscore_attrs_are_private = True
def _run(self):
# for the top level model, we override this. Nested objects will still be recursive!
att = getattr(self, "Load")
return att._run()
validated_json = {'Load': {"filename": "IsolatedGalaxy/galaxy0030/galaxy0030"}}
yt_mod = ytModel(Load = validated_json["Load"])
yt_mod
ds = yt_mod._run()
###Output
_____no_output_____ |
Chapter02/2.07 Basic Simulations.ipynb | ###Markdown
Basic Simulations in OpenAI's Gym OpenAI Gym is a toolkit for building, evaluating and comparing RL algorithms. It is compatible with algorithms written in any framework, like TensorFlow, Theano, Keras, etc. It is simple and easy to comprehend. It makes no assumptions about the structure of our agent and provides an interface to all RL tasks.Now, we will see how to simulate environments in Gym. CartPole Environment First, let us import OpenAI's Gym library
###Code
import gym
###Output
_____no_output_____
###Markdown
We use the make function for simulating the environment
###Code
env = gym.make('CartPole-v0')
###Output
_____no_output_____
###Markdown
Then, we initialize the environment using the reset method
###Code
env.reset()
###Output
_____no_output_____
###Markdown
Now, we can loop for some time steps and render the environment at each step
###Code
for _ in range(1000):
env.render()
env.step(env.action_space.sample())
###Output
_____no_output_____
###Markdown
Different Types of Environments OpenAI Gym provides a lot of simulation environments for training, evaluating and building our agents. We can check the available environments by either checking their website or simply typing the following command, which will list the available environments.
###Code
from gym import envs
print(envs.registry.all())
###Output
_____no_output_____
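###Markdown
Since the registry contains several hundred EnvSpec entries, it is often more convenient to count them or filter them by name than to read the full printout. A small sketch using the same registry:
###Code
all_specs = list(envs.registry.all())
print(len(all_specs))  # total number of registered environments
print([spec.id for spec in all_specs if 'CartPole' in spec.id])  # e.g. ['CartPole-v0', 'CartPole-v1']
###Output
_____no_output_____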
###Markdown
CarRacing Environment Since Gym provides many interesting environments, let us simulate a car racing environment as shown below.
###Code
import gym
env = gym.make('CarRacing-v0')
env.reset()
for _ in range(1000):
env.render()
env.step(env.action_space.sample())
###Output
_____no_output_____
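###Markdown
When we are finished rendering an environment, it is good practice to close the rendering window with the close method.
###Code
env.close()
###Output
_____no_output_____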
###Markdown
CarRacing Environment Since Gym provides many interesting environments, let us simulate the car racing environment as shown below.
###Code
import gym

# Create the CarRacing environment and reset it to get the initial observation
env = gym.make('CarRacing-v0')
env.reset()
for _ in range(1000):
    env.render()                          # draw the current frame
    env.step(env.action_space.sample())   # take a random action
env.close()                               # release the rendering window
###Output
/home/sara/miniconda3/envs/hands-on-book/lib/python3.8/site-packages/gym/logger.py:30: UserWarning: [33mWARN: Box bound precision lowered by casting to float32[0m
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
Modulo1/5. Flujo de Control.ipynb | ###Markdown
FLUJO DE CONTROL El flujo de control ayudará a nuestro programa en la toma de decisiones 1. Flujo de control (if - else) El uso de condiciones nos permite tener un mayor control sobre el flujo del programa.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. Sentencia elif (sino si) Se encadena a un if u otro elif para comprobar múltiples condiciones, siempre que las anteriores no se ejecuten:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8, 9 o 10')
###Output
el valor de x es 9
###Markdown
EJERCICIOS 1.Crear un programa que permita decidir a una persona cruzar la calle o no según:- Si semáforo esta en verde cruzar la calle- Si semáforo esta en rojo o amarillo no cruzarLa persona debe poder ingresar el estado del semáforo por teclado
###Code
semaforo = input('El semaforo tiene color: ')
#semaforo.lower() # a minusculas
semaforo.upper() # a mayusculas
semaforo = semaforo.lower()
if semaforo == 'verde':
print('cruzar la calle')
elif semaforo == 'rojo'or semaforo == 'amarillo':
print('no cruzar')
else:
print('no entiendo')
###Output
cruzar la calle
###Markdown
2.Escribir un programa que pregunte al usuario su edad y muestre por pantalla si es mayor de edad o no.
###Code
# 1. preguntando la edad de una persona
edad = int(input('Ingrese su edad: '))
if edad >=18:
print('La persona es mayor de edad')
else:
print('La persona es menor de edad')
###Output
La persona es menor de edad
###Markdown
3.Escribir un programa que almacene la cadena de caracteres contraseña en una variable, pregunte al usuario por la contraseña e imprima por pantalla si la contraseña introducida por el usuario coincide con la guardada en la variable sin tener en cuenta mayúsculas y minúsculas.
###Code
text=str(input('Ingrese contraseña a guardar: '))
v=str(input('Ingrese contraseña: '))
# Comparar en minúsculas para ignorar mayúsculas y minúsculas
if text.lower() == v.lower():
    print('Contraseña correcta')
else:
    print('Contraseña incorrecta')
###Output
Ingrese contraseña a guardar: j
Ingrese contraseña: l
###Markdown
4. Escribir un programa que pida al usuario un número entero y muestre por pantalla si es par o impar.
###Code
numero = int(input('Ingrese un numero entero: '))
# numero % 2 -> resto de un numero
if numero % 2 == 0:
print(f'El numero ingresado {numero} es par')
else:
print('el numero ingresado {} NO es par'.format(numero))
'el numero ingresado {} NO es par'.format(numero)
f'El numero ingresado {numero} es par'
###Output
_____no_output_____
###Markdown
5.Los tramos impositivos para la declaración de la renta en un determinado país son los siguientes: Renta % de Impuesto Menos de 10000€ 5% Entre 10000€ y 20000€ 15% Entre 20000€ y 35000€ 20% Entre 35000€ y 60000€ 30% Más de 60000€ 45% Realizar un programa que pueda decir el % de impuestos que una persona deba pagar según su sueldo
###Code
sueldo=float(input('Ingrese su sueldo: '))
if sueldo < 10000:
s1=sueldo*(5/100)
print('Usted pagará el 5% de su sueldo que es: ', s1)
elif 10000<=sueldo and sueldo<20000:
s2=sueldo*(15/100)
print('usted pagara el 15% de su sueldo que es: ', s2)
elif 20000<=sueldo and sueldo<35000:
s3=sueldo*(20/100)
print('Uste pagara el 20% de su sueldo que es: ', s3)
elif 35000<=sueldo and sueldo<60000:
s4=sueldo*(30/100)
print('Uste pagara el 30% de su sueldo que es: ', s4)
elif 60000<=sueldo:
s5=sueldo*(45/100)
print('Uste pagara el 45% de su sueldo que es: ', s5)
###Output
Ingrese su sueldo: 35003
###Markdown
6. Realiza un programa que lea dos números por teclado y permita elegir entre 3 opciones en un menú:- Mostrar una suma de los dos números- Mostrar una resta de los dos números (el primero menos el segundo)- Mostrar una multiplicación de los dos números- En caso de introducir una opción inválida, el programa informará de que no es correcta.
###Code
#Diversas operaciones con numeros
x=int(input('Ingrese el 1er numero: '))
y=int(input('Ingrese el 2do numero: '))
print("""A) Si desea sumar
B) Si desea multiplicar\nC) Si desea restar""")
a=str(input('ELIJA UNA OPCION: '))
a=a.lower()
if a=='a':
print('La SUMA es',x+y)
elif a=='b':
print('La MULTIPLICACION es',x*y)
elif a=='c':
print('La RESTA es',x-y)
else:
print('Opción incorrecto')
###Output
Ingrese el 1er numero: 3
Ingrese el 2do numero: 8
###Markdown
7.La pizzería Bella Napoli ofrece pizzas vegetarianas y no vegetarianas a sus clientes. Los ingredientes para cada tipo de pizza aparecen a continuación.- Ingredientes vegetarianos: Pimiento y tofu.- Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.Escribir un programa que pregunte al usuario si quiere una pizza vegetariana o no, y en función de su respuesta le muestre un menú con los ingredientes disponibles para que elija. Solo se puede eligir un ingrediente además de la mozzarella y el tomate que están en todas la pizzas. Al final se debe mostrar por pantalla si la pizza elegida es vegetariana o no y todos los ingredientes que lleva.
###Code
#Elija la pizza que desea comer
a=print('1)VEGETARIANA')
b=print('2)NO VEGETARIANA')
c=int(input('Elija el tipo de PIZZA que desea comer: '))
if c==1:
print('Que INGREDIENTE desea agregar: ')
print('a)Pimiento')
    print('b)Tofu')
x=str(input('Elija la opcion: '))
d=x.lower()
if d=='a':
print("""Usted eligio lo siguiente:
PIZZA VEGETARIANA con ingredientes: mozarella, tomate y pimiento.""")
elif d=='b':
print("""Usted eligio lo siguiente:
PIZZA VEGETARIANA con ingredientes: mozarella, tomate y tafu.""")
elif c==2:
print('Que INGREDIENTE desea agregar: ')
print('a)Peperoni')
print('b)Jamón')
print('c)Salmón')
    y=str(input('Elija la opcion: '))
    k=y.upper()
    if k=='A':
        print("""Usted eligió lo siguiente:
        PIZZA NO VEGETARIANA con ingrediente: mozzarella, tomate y peperoni.""")
    elif k=='B':
        print("""Usted eligió lo siguiente:
        PIZZA NO VEGETARIANA con ingrediente: mozzarella, tomate y jamón.""")
    elif k=='C':
        print("""Usted eligió lo siguiente:
        PIZZA NO VEGETARIANA con ingrediente: mozzarella, tomate y salmón.""")
###Output
1)VEGETARIANA
2)NO VEGETARIANA
###Markdown
FLUJO DE CONTROL El flujo de control ayudará a nuestro programa en la toma de decisiones 1. Flujo de control (if - else) El uso de condiciones nos permite tener un mayor control sobre el flujo del programa.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x = 9.2
if x==8 :
print('el valor de x es 8')
else:
#print('el valor de x es distinto de 8')
print(f'el valor de x es distinto de {8}')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. Sentencia elif (sino si) Se encadena a un if u otro elif para comprobar múltiples condiciones, siempre que las anteriores no se ejecuten:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
    print('el valor de x es distinto de 8, 9 y 10')
###Output
el valor de x es 9
###Markdown
EJERCICIOS 1.Crear un programa que permita decidir a una persona cruzar la calle o no según:- Si semáforo esta en verde cruzar la calle- Si semáforo esta en rojo o amarillo no cruzarLa persona debe poder ingresar el estado del semáforo por teclado
###Code
a = input ("SEMAFORO:")
if a.lower() == "verde":
print("cruza la calle")
elif a == "rojo":
print("no cruza la calle")
else :
print("no se encontraron datos")
###Output
cruza la calle
###Markdown
2.Escribir un programa que pregunte al usuario su edad y muestre por pantalla si es mayor de edad o no.
###Code
y=float(input("Cual es tu edad?"))
if y>=18:
print("mayor edad")
else:
print("menor de edad")
###Output
Cual es tu edad? 10
###Markdown
3.Escribir un programa que almacene la cadena de caracteres contraseña en una variable, pregunte al usuario por la contraseña e imprima por pantalla si la contraseña introducida por el usuario coincide con la guardada en la variable sin tener en cuenta mayúsculas y minúsculas. 4. Escribir un programa que pida al usuario un número entero y muestre por pantalla si es par o impar.
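Exercise 3 above has no solution in this copy; the cell after it answers exercise 4. A minimal sketch for exercise 3 (the stored password and prompt texts are illustrative):
###Code
# Sketch for exercise 3: compare the stored password with the user's input, ignoring case
clave_guardada = "contraseña"
clave_usuario = input("Introduce la contraseña: ")
if clave_guardada.lower() == clave_usuario.lower():
    print("La contraseña coincide")
else:
    print("La contraseña no coincide")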
###Code
a=float(input("Ingrese el número aquí: "))
if a%2==0:
print("El número que ha ingresado es PAR")
else:
print("El número que ha ingresado es IMPAR")
###Output
Ingrese el número aquí: 2
###Markdown
5.Los tramos impositivos para la declaración de la renta en un determinado país son los siguientes: Renta % de Impuesto Menos de 10000€ 5% Entre 10000€ y 20000€ 15% Entre 20000€ y 35000€ 20% Entre 35000€ y 60000€ 30% Más de 60000€ 45% Realizar un programa que pueda decir el % de impuestos que una persona deba pagar según su sueldo
###Code
a = int(input (""INGRESE SU SUELDO:"))
if a<=10000:
b="5%"
print("Su impuesto será de",b)
elif 20000>=a>10000:
b="15%"
print("Su impuesto será de",b)
elif 35000>=a>20000:
b="20%"
print("Su impuesto será de",b)
elif 60000>=a>35000:
b="30%"
print("Su impuesto será de",b)
else:
b="45%"
print("Su impuesto será de",b)
###Output
_____no_output_____
###Markdown
6. Realiza un programa que lea dos números por teclado y permita elegir entre 3 opciones en un menú:- Mostrar una suma de los dos números- Mostrar una resta de los dos números (el primero menos el segundo)- Mostrar una multiplicación de los dos números- En caso de introducir una opción inválida, el programa informará de que no es correcta.
###Code
a = int(input("Ingrese el número 1: "))
b = int(input("Ingrese el número 2: "))
print("1. Sumar")
print("2. Restar")
print("3. Multiplicar")
vl_opcion = int(input("Ingrese el tipo de operación a realizar: "))
vl_resultado=0
if vl_opcion==1:
vl_resultado=a+b
elif vl_opcion==2:
vl_resultado=a-b
elif vl_opcion==3:
    vl_resultado=a*b
else:
print("Opción inválida")
print("El valor de la operación es: ", vl_resultado)
###Output
_____no_output_____
###Markdown
7.La pizzería Bella Napoli ofrece pizzas vegetarianas y no vegetarianas a sus clientes. Los ingredientes para cada tipo de pizza aparecen a continuación.- Ingredientes vegetarianos: Pimiento y tofu.- Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.Escribir un programa que pregunte al usuario si quiere una pizza vegetariana o no, y en función de su respuesta le muestre un menú con los ingredientes disponibles para que elija. Solo se puede eligir un ingrediente además de la mozzarella y el tomate que están en todas la pizzas. Al final se debe mostrar por pantalla si la pizza elegida es vegetariana o no y todos los ingredientes que lleva.
###Code
a = input ("¿QUE TIPO DE PIZZA DESEAS? VEGETARIANA O NO VEGETARIANA:")
if a == "VEGETARIANA":
input("CUAL DE ESTAS DOS INGREDIENTES DESEAS ELEGIR, PIMIENTO O TOFU:")
A=1
print(A)
###Output
_____no_output_____
###Markdown
FLUJO DE CONTROL El flujo de control ayudará a nuestro programa en la toma de decisiones 1. Flujo de control (if - else) El uso de condiciones nos permite tener un mayor control sobre el flujo del programa.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. Sentencia elif (sino si) Se encadena a un if u otro elif para comprobar múltiples condiciones, siempre que las anteriores no se ejecuten:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
    print('el valor de x es distinto de 8, 9 o 10')
###Output
el valor de x es 9
###Markdown
1.Crear un programa que permita decidir a una persona cruzar la calle o no según:- Si semáforo esta en verde cruzar la calle- Si semáforo esta en rojo o amarillo no cruzarLa persona debe poder ingresar el estado del semáforo por teclado
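The next cell simulates the traffic light (and whether the street is free) with random choices instead of reading the state from the keyboard, as the statement asks. A minimal keyboard-input sketch (prompt text is illustrative):
###Code
# Sketch: read the traffic-light state typed by the user
estado = input("Estado del semaforo (verde/amarillo/rojo): ").lower()
if estado == "verde":
    print("Cruzar la calle")
elif estado in ("rojo", "amarillo"):
    print("No cruzar")
else:
    print("Estado no reconocido")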
###Code
import random
colorSemaforo = ["rojo","amarillo","verde"]
vehiculo = [True,False]
color = random.choice(colorSemaforo)
libre = random.choice(vehiculo)
if color == "verde" and libre:
print("El peaton cruza")
else:
print("El peaton se queda quieto")
###Output
El peaton se queda quieto
###Markdown
2.Escribir un programa que pregunte al usuario su edad y muestre por pantalla si es mayor de edad o no.
###Code
edad = int(input('Ingrese su edad:'))
if edad >= 18:
print('la persona es mayor de edad')
else:
print('la persona es menor de edad')
###Output
la persona es mayor de edad
###Markdown
3.Escribir un programa que almacene la cadena de caracteres contraseña en una variable, pregunte al usuario por la contraseña e imprima por pantalla si la contraseña introducida por el usuario coincide con la guardada en la variable sin tener en cuenta mayúsculas y minúsculas.
###Code
key = "contraseña"
password = input("Introduce la contraseña: ")
if key == password.lower():
print("La contaseña coincide")
else:
print("La contraseña no coincide")
###Output
Introduce la contraseña: PARIS
La contraseña no coincide
###Markdown
4. Escribir un programa que pida al usuario un número entero y muestre por pantalla si es par o impar.
###Code
n = int(input("Introduce un número entero: "))
if n % 2 == 0:
print("El número " + str(n) + " es par")
else:
print("El número " + str(n) + " es impar")
###Output
Introduce un número entero: 10
El número 10 es par
###Markdown
5.Los tramos impositivos para la declaración de la renta en un determinado país son los siguientes: Renta % de Impuesto Menos de 10000€ 5% Entre 10000€ y 20000€ 15% Entre 20000€ y 35000€ 20% Entre 35000€ y 60000€ 30% Más de 60000€ 45% Realizar un programa que pueda decir el % de impuestos que una persona deba pagar según su sueldo
###Code
income = float(input("¿Cuál es tu renta anual? "))
if income < 10000:
tax = 5
elif income < 20000:
tax = 15
elif income < 35000:
tax = 20
elif income < 60000:
tax = 30
else:
tax = 45
print("Tu tipo impositivo es " + str(tax) + "%")
###Output
¿Cuál es tu renta anual? 5000
Tu tipo impositivo es 5%
###Markdown
6. Realiza un programa que lea dos números por teclado y permita elegir entre 3 opciones en un menú:- Mostrar una suma de los dos números- Mostrar una resta de los dos números (el primero menos el segundo)- Mostrar una multiplicación de los dos números- En caso de introducir una opción inválida, el programa informará de que no es correcta.
###Code
n1 = float(input("Introduce un número: ") )
n2 = float(input("Introduce otro número: ") )
opcion = 0
print("""
¿Qué quieres hacer?
1) Sumar los dos números
2) Restar los dos números
3) Multiplicar los dos números
""")
opcion = int(input("Introduce un número: ") )
if opcion == 1:
print("La suma de",n1,"+",n2,"es",n1+n2)
elif opcion == 2:
print("La resta de",n1,"-",n2,"es",n1-n2)
elif opcion == 3:
print("El producto de",n1,"*",n2,"es",n1*n2)
else:
print("Opción incorrecta")
###Output
Introduce un número: 1
La suma de 12.0 + 9.0 es 21.0
###Markdown
7.La pizzería Bella Napoli ofrece pizzas vegetarianas y no vegetarianas a sus clientes. Los ingredientes para cada tipo de pizza aparecen a continuación.- Ingredientes vegetarianos: Pimiento y tofu.- Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.Escribir un programa que pregunte al usuario si quiere una pizza vegetariana o no, y en función de su respuesta le muestre un menú con los ingredientes disponibles para que elija. Solo se puede eligir un ingrediente además de la mozzarella y el tomate que están en todas la pizzas. Al final se debe mostrar por pantalla si la pizza elegida es vegetariana o no y todos los ingredientes que lleva.
###Code
# Presentación del menú con los tipos de pizza
print("Bienvenido a la pizzeria Bella Napoli.\nTipos de pizza\n\t1- Vegetariana\n\t2- No vegetariana\n")
tipo = input("Introduce el número correspondiente al tipo de pizza que quieres:")
# Decisión sobre el tipo de pizza
if tipo == "1":
print("Ingredientes de pizzas vegetarianas\n\t 1- Pimiento\n\t2- Tofu\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza vegetariana con mozzarella, tomate y ", end="")
if ingrediente == "1":
print("pimiento")
else:
print("tofu")
else:
print("Ingredientes de pizzas no vegetarianas\n\t1- Peperoni\n\t2- Jamón\n\t3- Salmón\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza no vegetarina con mozarrella, tomate y ", end="")
if ingrediente == "1":
print("peperoni")
elif ingrediente == "2":
print("jamón")
else:
print("salmón")
###Output
Bienvenido a la pizzeria Bella Napoli.
Tipos de pizza
1- Vegetariana
2- No vegetariana
Introduce el número correspondiente al tipo de pizza que quieres:1
Ingredientes de pizzas vegetarianas
1- Pimiento
2- Tofu
Introduce el ingrediente que deseas: 2
Pizza vegetariana con mozzarella, tomate y tofu
###Markdown
FLUJO DE CONTROL El flujo de control ayudará a nuestro programa en la toma de decisiones 1. Flujo de control (if - else) El uso de condiciones nos permite tener un mayor control sobre el flujo del programa.
###Code
# Uso de la sentencia if (si)
if True:
print('Hola')
if True:
print('hola') # 4 veces espacio
# Sentencia else (si no)
x = 85
if x == 8 :
print('el valor de x es 8') # 4 espacios
else:
print('el valor de x = {} es distinto de 8'.format(x))
#print('el valor de x es {}'.format(x))
####### identificar si un numero es par
numero=int(input("ingrese el numero"))
if numero%2==0:
print("es par")
else:
print(" es impar ")
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. Sentencia elif (sino si) Se encadena a un if u otro elif para comprobar múltiples condiciones, siempre que las anteriores no se ejecuten:
###Code
# Sentencia else (si no)
x=10
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8,9 y 10')
###Output
el valor de x es 10
###Markdown
EJERCICIOS 1.Crear un programa que permita decidir a una persona cruzar la calle o no según:- Si semáforo esta en verde cruzar la calle- Si semáforo esta en rojo o amarillo no cruzarLa persona debe poder ingresar el estado del semáforo por teclado
###Code
colorSemaforo="ROJO"
if colorSemaforo=="ROJO":
print("no cruzar")
elif colorSemaforo=="AMARILLO":
print("no cruzar")
else:
print("cruzar")
###Output
_____no_output_____
###Markdown
2.Escribir un programa que pregunte al usuario su edad y muestre por pantalla si es mayor de edad o no.
###Code
edad=int(input("ingrese tu edad"))
if edad>=18:
print("es mayor de edad")
else:
print("es menor de edad")
###Output
_____no_output_____
###Markdown
3.Escribir un programa que almacene la cadena de caracteres contraseña en una variable, pregunte al usuario por la contraseña e imprima por pantalla si la contraseña introducida por el usuario coincide con la guardada en la variable sin tener en cuenta mayúsculas y minúsculas.
###Code
passwordGuardada="Password"
password=input("ingrese un contraseña")
password.upper() ##esta funcion convierte la cadena mayusucla
if passwordGuardada.upper()==password.upper():
print("contraseña coincide")
else:
print("contraseña no coincide")
###Output
_____no_output_____
###Markdown
4. Escribir un programa que pida al usuario un número entero y muestre por pantalla si es par o impar.
###Code
numero=int(input("ingrese el numero entero"))
if numero%2==0:
print("es par")
else:
print(" es impar ")
tasa=5 #valor inicial
renta=int(input("ingrese el valor de la rena"))
if renta<10000:
impuesto=renta*tasa/100
elif renta>=10000 and renta<20000:
tasa=15
impuesto=renta*tasa/100
elif renta>=20000 and renta<35000:
tasa=20
impuesto=renta*tasa/100
elif renta>=35000 and renta<60000:
    tasa=30
impuesto=renta*tasa/100
else:
tasa=45
impuesto=renta*tasa/100
print("el impuesto es ",impuesto)
###Output
_____no_output_____
###Markdown
5.Los tramos impositivos para la declaración de la renta en un determinado país son los siguientes: Renta % de Impuesto Menos de 10000€ 5% Entre 10000€ y 20000€ 15% Entre 20000€ y 35000€ 20% Entre 35000€ y 60000€ 30% Más de 60000€ 45% Realizar un programa que pueda decir el % de impuestos que una persona deba pagar según su sueldo
###Code
print("""
1 Mostrar una suma de los dos números
2 Mostrar una resta de los dos números (el primero menos el segundo)
3 Mostrar una multiplicación de los dos números
4 En caso de introducir una opción inválida, el programa informará de que no es correcta""")
numeroOne=int(input("ingrese un numero "))
numeroTwo=int(input("ingrese un numero "))
opcion=int(input("elija su opcion "))
resultado=0;
if opcion==1:
resultado=numeroOne+numeroTwo
elif opcion==2:
resultado=numeroOne-numeroTwo
elif opcion==3:
resultado=numeroOne*numeroTwo
else:
print("ingrese opcion valida")
print(resultado)
###Output
_____no_output_____
###Markdown
6. Realiza un programa que lea dos números por teclado y permita elegir entre 3 opciones en un menú:- Mostrar una suma de los dos números- Mostrar una resta de los dos números (el primero menos el segundo)- Mostrar una multiplicación de los dos números- En caso de introducir una opción inválida, el programa informará de que no es correcta.
###Code
print("""
1 Mostrar una suma de los dos números
2 Mostrar una resta de los dos números (el primero menos el segundo)
3 Mostrar una multiplicación de los dos números
4 En caso de introducir una opción inválida, el programa informará de que no es correcta""")
numeroOne=int(input("ingrese un numero "))
numeroTwo=int(input("ingrese un numero "))
opcion=int(input("elija su opcion "))
resultado=0;
if opcion==1:
resultado=numeroOne+numeroTwo
elif opcion==2:
resultado=numeroOne-numeroTwo
elif opcion==3:
resultado=numeroOne*numeroTwo
else:
print("ingrese opcion valida")
print(resultado)
###Output
_____no_output_____
###Markdown
7.La pizzería Bella Napoli ofrece pizzas vegetarianas y no vegetarianas a sus clientes. Los ingredientes para cada tipo de pizza aparecen a continuación.- Ingredientes vegetarianos: Pimiento y tofu.- Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.Escribir un programa que pregunte al usuario si quiere una pizza vegetariana o no, y en función de su respuesta le muestre un menú con los ingredientes disponibles para que elija. Solo se puede eligir un ingrediente además de la mozzarella y el tomate que están en todas la pizzas. Al final se debe mostrar por pantalla si la pizza elegida es vegetariana o no y todos los ingredientes que lleva.
###Code
print('''Pizza Bella Napoli
-Ingredientes vegetarianos: Pimiento y tofu.
-Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.
''')
tipo=input("Ingrese V si desea pizza vegetariana o N si no vegetariana")
Mozzarella=input("Desea Mozzarella Si o No (S/N)")
if tipo.upper()=="V":
if Mozzarella.upper()=="S":
print("Pimiento ,tofu y Mozzarella")
else:
print(" Pimiento y tofu")
else:
if Mozzarella.upper()=="S":
print("Peperoni, Jamón ,Salmón y Mozzarella")
else:
print("Peperoni, Jamón y Salmón")
###Output
_____no_output_____
###Markdown
8.Escribí un programa que solicite al usuario una letra y, si es una vocal, muestre el mensaje “Es vocal”. Verificar si el usuario ingresó un string de más de un carácter y, en ese caso, informarle que no se puede procesar el dato.
###Code
letra=input("ingrese una letra")
notVocal=True
if letra.upper()=="A":
notVocal=False
print("Es una vocal")
if letra.upper()=="E":
notVocal=False
print("Es una vocal")
if letra.upper()=="O":
notVocal=False
print("Es una vocal")
if letra.upper()=="I":
notVocal=False
print("Es una vocal")
if letra.upper()=="U":
notVocal=False
print("Es una vocal")
if notVocal:
print("No es vocal")
###Output
_____no_output_____
###Markdown
FLUJO DE CONTROL El flujo de control ayudará a nuestro programa en la toma de decisiones 1. Flujo de control (if - else) El uso de condiciones nos permite tener un mayor control sobre el flujo del programa.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. Sentencia elif (sino si) Se encadena a un if u otro elif para comprobar múltiples condiciones, siempre que las anteriores no se ejecuten:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8, 9 o 10')
###Output
el valor de x es 9
###Markdown
EJERCICIOS 1.Crear un programa que permita decidir a una persona cruzar la calle o no según:- Si semáforo esta en verde cruzar la calle- Si semáforo esta en rojo o amarillo no cruzarLa persona debe poder ingresar el estado del semáforo por teclado
###Code
semaforo = input('El semaforo tiene color: ')
#semaforo.lower() # a minusculas
semaforo.upper() # a mayusculas
semaforo = semaforo.lower()
if semaforo == 'verde':
print('cruzar la calle')
elif semaforo == 'rojo'or semaforo == 'amarillo':
print('no cruzar')
else:
print('no entiendo')
###Output
cruzar la calle
###Markdown
2.Escribir un programa que pregunte al usuario su edad y muestre por pantalla si es mayor de edad o no.
###Code
# 1. preguntando la edad de una persona
edad = int(input('Ingrese su edad: '))
if edad >=18:
print('La persona es mayor de edad')
else:
print('La persona es menor de edad')
###Output
La persona es menor de edad
###Markdown
3.Escribir un programa que almacene la cadena de caracteres contraseña en una variable, pregunte al usuario por la contraseña e imprima por pantalla si la contraseña introducida por el usuario coincide con la guardada en la variable sin tener en cuenta mayúsculas y minúsculas.
###Code
contraseña = input("registre su contraseña: ")
contraseña2 = input("ingrese su contraseña: ")
contraseña2.lower()
###Output
_____no_output_____
###Markdown
4. Escribir un programa que pida al usuario un número entero y muestre por pantalla si es par o impar.
###Code
numero = int(input('Ingrese un numero entero: '))
# numero % 2 -> resto de un numero
if numero % 2 == 0:
print(f'El numero ingresado {numero} es par')
else:
print('el numero ingresado {} NO es par'.format(numero))
'el numero ingresado {} NO es par'.format(numero)
f'El numero ingresado {numero} es par'
###Output
_____no_output_____
###Markdown
5.Los tramos impositivos para la declaración de la renta en un determinado país son los siguientes: Renta % de Impuesto Menos de 10000€ 5% Entre 10000€ y 20000€ 15% Entre 20000€ y 35000€ 20% Entre 35000€ y 60000€ 30% Más de 60000€ 45% Realizar un programa que pueda decir el % de impuestos que una persona deba pagar según su sueldo
###Code
sueldo = int(input("ingrese su sueldo: "))
if sueldo < 10000 :
print(f'el impuesto a pagar asciende a {sueldo*0.05}')
elif sueldo >= 10000 and sueldo < 20000 :
print(f'el impuesto a pagar asciende a {sueldo*0.15}')
elif sueldo >= 20000 and sueldo < 35000 :
print(f'el impuesto a pagar asciende a {sueldo*0.20}')
elif sueldo >= 35000 and sueldo < 60000 :
print(f'el impuesto a pagar asciende a {sueldo*0.30}')
else :
print(f'el impuesto a pagar asciende a {sueldo*0.45}')
###Output
el impuesto a pagar asciende a 5200.0
###Markdown
6. Realiza un programa que lea dos números por teclado y permita elegir entre 3 opciones en un menú:- Mostrar una suma de los dos números- Mostrar una resta de los dos números (el primero menos el segundo)- Mostrar una multiplicación de los dos números- En caso de introducir una opción inválida, el programa informará de que no es correcta.
###Code
numero1 = int(input("ingrese primer número: "))
numero2 = int(input("ingrese segundo número: "))
print(numero1+numero2)
print(numero1-numero2)
print(numero1*numero2)
###Output
8
0
16
###Markdown
7.La pizzería Bella Napoli ofrece pizzas vegetarianas y no vegetarianas a sus clientes. Los ingredientes para cada tipo de pizza aparecen a continuación.- Ingredientes vegetarianos: Pimiento y tofu.- Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.Escribir un programa que pregunte al usuario si quiere una pizza vegetariana o no, y en función de su respuesta le muestre un menú con los ingredientes disponibles para que elija. Solo se puede eligir un ingrediente además de la mozzarella y el tomate que están en todas la pizzas. Al final se debe mostrar por pantalla si la pizza elegida es vegetariana o no y todos los ingredientes que lleva.
###Code
opcion = input("Desea un pizza vegetariana?")
if opcion == 'si':
ingredientes = input ("Escoja uno entre pimiento o tofu: ")
else :
ingredientes = input ("Escoja uno entre Peperoni, Jamón o Salmón: ")
print(f'la pizza elegida {opcion} es vegetariana, y tiene los siguientes ingredientes: mozarella, tomate y {ingredientes}')
###Output
Desea un pizza vegetariana? si
Escoja uno entre pimiento o tofu: tofu
###Markdown
FLUJO DE CONTROL El flujo de control ayudará a nuestro programa en la toma de decisiones 1. Flujo de control (if - else) El uso de condiciones nos permite tener un mayor control sobre el flujo del programa.
###Code
# Uso de la sentencia if (si)
if True:
print('Hola')
if True:
print('hola') # 4 veces espacio
# Sentencia else (si no)
x = 85
if x == 8 :
print('el valor de x es 8') # 4 espacios
else:
print('el valor de x = {} es distinto de 8'.format(x))
#print('el valor de x es {}'.format(x))
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. Sentencia elif (sino si) Se encadena a un if u otro elif para comprobar múltiples condiciones, siempre que las anteriores no se ejecuten:
###Code
# Sentencia else (si no)
x=10
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8,9 y 10')
###Output
el valor de x es 10
###Markdown
EJERCICIOS 1.Crear un programa que permita decidir a una persona cruzar la calle o no según:- Si semáforo esta en verde cruzar la calle- Si semáforo esta en rojo o amarillo no cruzarLa persona debe poder ingresar el estado del semáforo por teclado
###Code
color_semaforo=input("Diga el valor del color del semaforo: ")
color_semaforo=color_semaforo.upper()
if color_semaforo=="VERDE":
print("puede cruzar la calle")
elif color_semaforo in ["ROJO", "AMARRILLO"]:
print("no puede cruzar")
else:
print("ingrese un color correcto")
###Output
Diga el valor del color del semaforo: Rojo
###Markdown
2.Escribir un programa que pregunte al usuario su edad y muestre por pantalla si es mayor de edad o no.
###Code
edad=int(input("ingrese su edad :"))
type(edad)
if edad>=18:
print("es mayor de edad")
else:
print("es menor de edad")
###Output
_____no_output_____
###Markdown
3.Escribir un programa que almacene la cadena de caracteres contraseña en una variable, pregunte al usuario por la contraseña e imprima por pantalla si la contraseña introducida por el usuario coincide con la guardada en la variable sin tener en cuenta mayúsculas y minúsculas. 4. Escribir un programa que pida al usuario un número entero y muestre por pantalla si es par o impar.
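Exercise 3 is not solved in this copy (the cell below answers exercise 4). A minimal sketch, where the stored value is illustrative:
###Code
# Sketch for exercise 3: case-insensitive password comparison
guardada = "Secreto123"
ingresada = input("Ingrese la contraseña: ")
if guardada.lower() == ingresada.lower():
    print("La contraseña coincide")
else:
    print("La contraseña no coincide")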
###Code
numero=input("introduce el numero")
numero=int(numero)
if numero%2==0:
print("es par")
else:
print("es impar")
###Output
_____no_output_____
###Markdown
5.Los tramos impositivos para la declaración de la renta en un determinado país son los siguientes: Renta % de Impuesto Menos de 10000€ 5% Entre 10000€ y 20000€ 15% Entre 20000€ y 35000€ 20% Entre 35000€ y 60000€ 30% Más de 60000€ 45% Realizar un programa que pueda decir el % de impuestos que una persona deba pagar según su sueldo
###Code
sueldo=input("ingrese su sueldo")
sueldo=int(sueldo)
impuesto=0.0
if sueldo<10000:
impuesto=0.05
print("su impuesto es 5%")
elif sueldo>=10000 and 20000>=sueldo:
impuesto=0.15
print("su impuesto es 15%")
elif sueldo>=20000 and 35000>sueldo:
print("su impuesto es 20%")
impuesto=0.20
elif sueldo>=35000 and 60000>sueldo:
impuesto=0.30
print("su impuesto es 30%")
else:
impuesto=0.45
print("su impuesto es 45%")
impuesto_a_pagar=sueldo*impuesto
print(impuesto_a_pagar)
###Output
_____no_output_____
###Markdown
6. Realiza un programa que lea dos números por teclado y permita elegir entre 3 opciones en un menú:- Mostrar una suma de los dos números- Mostrar una resta de los dos números (el primero menos el segundo)- Mostrar una multiplicación de los dos números- En caso de introducir una opción inválida, el programa informará de que no es correcta.
###Code
print("""
1 Mostrar una suma de los dos números
2 Mostrar una resta de los dos números (el primero menos el segundo)
3 Mostrar una multiplicación de los dos números
4 En caso de introducir una opción inválida, el programa informará de que no es correcta""")
numeroOne=int(input("ingrese un numero "))
numeroTwo=int(input("ingrese un numero "))
opcion=int(input("elija su opcion "))
resultado=0;
if opcion==1:
resultado=numeroOne+numeroTwo
elif opcion==2:
resultado=numeroOne-numeroTwo
elif opcion==3:
resultado=numeroOne*numeroTwo
else:
print("ingrese opcion valida")
print(resultado)
###Output
_____no_output_____
###Markdown
7.La pizzería Bella Napoli ofrece pizzas vegetarianas y no vegetarianas a sus clientes. Los ingredientes para cada tipo de pizza aparecen a continuación.- Ingredientes vegetarianos: Pimiento y tofu.- Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.Escribir un programa que pregunte al usuario si quiere una pizza vegetariana o no, y en función de su respuesta le muestre un menú con los ingredientes disponibles para que elija. Solo se puede eligir un ingrediente además de la mozzarella y el tomate que están en todas la pizzas. Al final se debe mostrar por pantalla si la pizza elegida es vegetariana o no y todos los ingredientes que lleva.
###Code
print('''Pizza Bella Napoli
-Ingredientes vegetarianos: Pimiento y tofu.
-Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.
''')
tipo=input("Ingrese V si desea pizza vegetariana o N si no vegetariana")
Mozzarella=input("Desea Mozzarella Si o No (S/N)")
if tipo.upper()=="V":
if Mozzarella.upper()=="S":
print("Pimiento ,tofu y Mozzarella")
else:
print(" Pimiento y tofu")
else:
if Mozzarella.upper()=="S":
print("Peperoni, Jamón ,Salmón y Mozzarella")
else:
print("Peperoni, Jamón y Salmón")
###Output
_____no_output_____
###Markdown
8.Escribí un programa que solicite al usuario una letra y, si es una vocal, muestre el mensaje “Es vocal”. Verificar si el usuario ingresó un string de más de un carácter y, en ese caso, informarle que no se puede procesar el dato.
###Code
letra=input("ingrese una letra")
notVocal=True
if letra.upper()=="A":
notVocal=False
print("Es una vocal")
if letra.upper()=="E":
notVocal=False
print("Es una vocal")
if letra.upper()=="O":
notVocal=False
print("Es una vocal")
if letra.upper()=="I":
notVocal=False
print("Es una vocal")
if letra.upper()=="U":
notVocal=False
print("Es una vocal")
if notVocal:
print("No es vocal")
###Output
ingrese una letrad
No es vocal
###Markdown
FLUJO DE CONTROL El flujo de control ayudará a nuestro programa en la toma de decisiones 1. Flujo de control (if - else) El uso de condiciones nos permite tener un mayor control sobre el flujo del programa.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. Sentencia elif (sino si) Se encadena a un if u otro elif para comprobar múltiples condiciones, siempre que las anteriores no se ejecuten:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8, 9 o 10')
###Output
el valor de x es 9
###Markdown
EJERCICIOS 1.Crear un programa que permita decidir a una persona cruzar la calle o no según:- Si semáforo esta en verde cruzar la calle- Si semáforo esta en rojo o amarillo no cruzarLa persona debe poder ingresar el estado del semáforo por teclado
###Code
semaforo = input('El semaforo tiene color: ')
#semaforo.lower() # a minusculas
semaforo.upper() # a mayusculas
semaforo = semaforo.lower()
if semaforo == 'verde':
print('cruzar la calle')
elif semaforo == 'rojo'or semaforo == 'amarillo':
print('no cruzar')
else:
print('no entiendo')
###Output
cruzar la calle
###Markdown
2.Escribir un programa que pregunte al usuario su edad y muestre por pantalla si es mayor de edad o no.
###Code
# 1. preguntando la edad de una persona
edad = int(input('Ingrese su edad: '))
if edad >=18:
print('La persona es mayor de edad')
else:
print('La persona es menor de edad')
###Output
La persona es menor de edad
###Markdown
3.Escribir un programa que almacene la cadena de caracteres contraseña en una variable, pregunte al usuario por la contraseña e imprima por pantalla si la contraseña introducida por el usuario coincide con la guardada en la variable sin tener en cuenta mayúsculas y minúsculas.
###Code
key = "diegojoel"
password = input("Introduce la contraseña: ")
if key == password.lower():
print("La contaseña coincide")
else:
print("La contraseña no coincide")
###Output
Introduce la contraseña: diegojoel
###Markdown
4. Escribir un programa que pida al usuario un número entero y muestre por pantalla si es par o impar.
###Code
numero = int(input('Ingrese un numero entero: '))
# numero % 2 -> resto de un numero
if numero % 2 == 0:
print(f'El numero ingresado {numero} es par')
else:
print('el numero ingresado {} NO es par'.format(numero))
'el numero ingresado {} NO es par'.format(numero)
f'El numero ingresado {numero} es par'
###Output
_____no_output_____
###Markdown
5.Los tramos impositivos para la declaración de la renta en un determinado país son los siguientes: Renta % de Impuesto Menos de 10000€ 5% Entre 10000€ y 20000€ 15% Entre 20000€ y 35000€ 20% Entre 35000€ y 60000€ 30% Más de 60000€ 45% Realizar un programa que pueda decir el % de impuestos que una persona deba pagar según su sueldo
###Code
income = float(input("¿Cuál es tu renta anual? "))
if income < 10000:
tax = 5
elif income < 20000:
tax = 15
elif income < 35000:
tax = 20
elif income < 60000:
tax = 30
else:
tax = 45
print("Tu tipo impositivo es " + str(tax) + "%")
###Output
¿Cuál es tu renta anual? 2000
###Markdown
6. Realiza un programa que lea dos números por teclado y permita elegir entre 3 opciones en un menú:- Mostrar una suma de los dos números- Mostrar una resta de los dos números (el primero menos el segundo)- Mostrar una multiplicación de los dos números- En caso de introducir una opción inválida, el programa informará de que no es correcta.
###Code
n1 = float(input("primer número: ") )
n2 = float(input("segundo número: ") )
opcion = 0
print("""
¿Qué quieres hacer?
1) Sumar los dos números
2) Restar los dos números
3) Multiplicar los dos números
""")
opcion = int(input("Introduce un número: ") )
if opcion == 1:
print("La suma de",n1,"+",n2,"es",n1+n2)
elif opcion == 2:
print("La resta de",n1,"-",n2,"es",n1-n2)
elif opcion == 3:
print("El producto de",n1,"*",n2,"es",n1*n2)
else:
print("Opción incorrecta")
###Output
Introduce un número: 5
Introduce otro número: 3
###Markdown
7.La pizzería Bella Napoli ofrece pizzas vegetarianas y no vegetarianas a sus clientes. Los ingredientes para cada tipo de pizza aparecen a continuación.- Ingredientes vegetarianos: Pimiento y tofu.- Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.Escribir un programa que pregunte al usuario si quiere una pizza vegetariana o no, y en función de su respuesta le muestre un menú con los ingredientes disponibles para que elija. Solo se puede eligir un ingrediente además de la mozzarella y el tomate que están en todas la pizzas. Al final se debe mostrar por pantalla si la pizza elegida es vegetariana o no y todos los ingredientes que lleva.
###Code
print("Bienvenido a la pizzeria Bella Napoli.\nTipos de pizza\n\t1- Vegetariana\n\t2- No vegetariana\n")
tipo = input("Introduce el número correspondiente al tipo de pizza que quieres:")
# Decisión sobre el tipo de pizza
if tipo == "1":
print("Ingredientes de pizzas vegetarianas\n\t 1- Pimiento\n\t2- Tofu\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza vegetariana con mozzarella, tomate y ", end="")
if ingrediente == "1":
print("pimiento")
else:
print("tofu")
else:
print("Ingredientes de pizzas no vegetarianas\n\t1- Peperoni\n\t2- Jamón\n\t3- Salmón\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza no vegetarina con mozarrella, tomate y ", end="")
if ingrediente == "1":
print("peperoni")
elif ingrediente == "2":
print("jamón")
else:
print("salmón")
###Output
_____no_output_____
###Markdown
FLUJO DE CONTROL El flujo de control ayudará a nuestro programa en la toma de decisiones 1. Flujo de control (if - else) El uso de condiciones nos permite tener un mayor control sobre el flujo del programa.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. Sentencia elif (sino si) Se encadena a un if u otro elif para comprobar múltiples condiciones, siempre que las anteriores no se ejecuten:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8, 9 o 10')
###Output
el valor de x es 9
###Markdown
EJERCICIOS 1.Crear un programa que permita decidir a una persona cruzar la calle o no según:- Si semáforo esta en verde cruzar la calle- Si semáforo esta en rojo o amarillo no cruzarLa persona debe poder ingresar el estado del semáforo por teclado
###Code
semaforo = input('El semaforo tiene color: ')
#semaforo.lower() # a minusculas
semaforo.upper() # a mayusculas
semaforo = semaforo.lower()
if semaforo == 'verde':
print('cruzar la calle')
elif semaforo == 'rojo'or semaforo == 'amarillo':
print('no cruzar')
else:
print('no entiendo')
###Output
no cruzar
###Markdown
2.Escribir un programa que pregunte al usuario su edad y muestre por pantalla si es mayor de edad o no.
###Code
# 1. preguntando la edad de una persona
edad = int(input('Ingrese su edad: '))
if edad >=18:
print('La persona es mayor de edad')
else:
print('La persona es menor de edad')
###Output
La persona es menor de edad
###Markdown
3.Escribir un programa que almacene la cadena de caracteres contraseña en una variable, pregunte al usuario por la contraseña e imprima por pantalla si la contraseña introducida por el usuario coincide con la guardada en la variable sin tener en cuenta mayúsculas y minúsculas.
###Code
c='contraseña'
cu=input('Introduir contraseña:')
if cu.lower()==c:
print("contraseña válida")
else:
print("contraseña inválida")
###Output
Introduir contraseña: vale
###Markdown
4. Escribir un programa que pida al usuario un número entero y muestre por pantalla si es par o impar.
###Code
numero = int(input('Ingrese un numero entero: '))
# numero % 2 -> resto de un numero
if numero % 2 == 0:
print(f'El numero ingresado {numero} es par')
else:
print('el numero ingresado {} NO es par'.format(numero))
'el numero ingresado {} NO es par'.format(numero)
f'El numero ingresado {numero} es par'
###Output
_____no_output_____
###Markdown
5.Los tramos impositivos para la declaración de la renta en un determinado país son los siguientes: Renta % de Impuesto Menos de 10000€ 5% Entre 10000€ y 20000€ 15% Entre 20000€ y 35000€ 20% Entre 35000€ y 60000€ 30% Más de 60000€ 45% Realizar un programa que pueda decir el % de impuestos que una persona deba pagar según su sueldo
###Code
sueldo=float(input("Ingresar sueldo:"))
if sueldo<10000:
print("Impuesto es 5%")
elif 10000<=sueldo<20000:
print("Impuesto es 15%")
elif 20000<=sueldo<35000:
print("Impuesto es 20%")
elif 35000<=sueldo<60000:
print("Impuesto es 30%")
else:
print("Impuesto es de 45%")
###Output
Ingresar sueldo: 930
###Markdown
6. Realiza un programa que lea dos números por teclado y permita elegir entre 3 opciones en un menú:- Mostrar una suma de los dos números- Mostrar una resta de los dos números (el primero menos el segundo)- Mostrar una multiplicación de los dos números- En caso de introducir una opción inválida, el programa informará de que no es correcta.
###Code
a=float(input("Ingresar primer numero"))
b=float(input("Ingresar segundo numero"))
op=int(input(print("""
Escribe 1: Sumar los números
Escribe 2: Restar los números
Escribe 3: Multiplicar los números
""")))
if op==1:
print("La suma es: ",a+b)
elif op==2:
print("La resta es: ",a-b)
elif op==3:
print("La multiplicación es: ",a*b)
else:
print("Opción inválida")
###Output
Ingresar primer numero 30
Ingresar segundo numero 4
###Markdown
7.La pizzería Bella Napoli ofrece pizzas vegetarianas y no vegetarianas a sus clientes. Los ingredientes para cada tipo de pizza aparecen a continuación.- Ingredientes vegetarianos: Pimiento y tofu.- Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.Escribir un programa que pregunte al usuario si quiere una pizza vegetariana o no, y en función de su respuesta le muestre un menú con los ingredientes disponibles para que elija. Solo se puede eligir un ingrediente además de la mozzarella y el tomate que están en todas la pizzas. Al final se debe mostrar por pantalla si la pizza elegida es vegetariana o no y todos los ingredientes que lleva.
###Code
op=str(input("¿Quiere pizza vegetariana?"))
if op.lower()=='si':
ing=input("1. Pimiento, 2. Tofu")
    if ing=='1':
print('''Pizza vegetariana.
Ingredientes:Pimiento,mozzarella,tomate''')
else:
print('''Pizza vegetariana.
Ingredientes:Tofu,mozzarella,tomate''')
else:
ing2=input("""Escoger sólo un ingrediente: 1. Peperoni, 2. Jamón, 3. Salmón""")
    if ing2=='1':
print('''Pizza NO vegetariana.
Ingredientes:Peperoni,mozzarella,tomate''')
    elif ing2=='2':
print('''Pizza NO vegetariana.
Ingredientes:Jamón,mozzarella,tomate''')
else:
print('''Pizza NO vegetariana.
Ingredientes:Salmón,mozzarella,tomate''')
###Output
¿Quiere pizza vegetariana? NO
Escoger sólo un ingrediente: 1. Peperoni, 2. Jamón, 3. Salmón 2
###Markdown
FLUJO DE CONTROL El flujo de control ayudará a nuestro programa en la toma de decisiones 1. Flujo de control (if - else) El uso de condiciones nos permite tener un mayor control sobre el flujo del programa.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. Sentencia elif (sino si) Se encadena a un if u otro elif para comprobar múltiples condiciones, siempre que las anteriores no se ejecuten:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8, 9 o 10')
###Output
el valor de x es 9
###Markdown
EJERCICIOS 1.Crear un programa que permita decidir a una persona cruzar la calle o no según:- Si semáforo esta en verde cruzar la calle- Si semáforo esta en rojo o amarillo no cruzarLa persona debe poder ingresar el estado del semáforo por teclado
###Code
semaforo = input('El semaforo tiene color: ')
#semaforo.lower() # a minusculas
semaforo.upper() # a mayusculas
semaforo = semaforo.lower()
if semaforo == 'verde':
print('cruzar la calle')
elif semaforo == 'rojo'or semaforo == 'amarillo':
print('no cruzar')
else:
print('no entiendo')
###Output
cruzar la calle
###Markdown
2.Escribir un programa que pregunte al usuario su edad y muestre por pantalla si es mayor de edad o no.
###Code
# 1. preguntando la edad de una persona
edad = int(input('Ingrese su edad: '))
if edad >=18:
print('La persona es mayor de edad')
else:
print('La persona es menor de edad')
###Output
La persona es menor de edad
###Markdown
3.Escribir un programa que almacene la cadena de caracteres contraseña en una variable, pregunte al usuario por la contraseña e imprima por pantalla si la contraseña introducida por el usuario coincide con la guardada en la variable sin tener en cuenta mayúsculas y minúsculas.
###Code
a="contraseña"
password=input("Introducir contraseña")
if a== password.lower():
print("La contraseña coincide")
else:
print("La contraseña no coincide")
###Output
Introducir contraseña CoNtRaSeÑa
###Markdown
4. Escribir un programa que pida al usuario un número entero y muestre por pantalla si es par o impar.
###Code
numero = int(input('Ingrese un numero entero: '))
# numero % 2 -> resto de un numero
if numero % 2 == 0:
print(f'El numero ingresado {numero} es par')
else:
print('el numero ingresado {} NO es par'.format(numero))
'el numero ingresado {} NO es par'.format(numero)
f'El numero ingresado {numero} es par'
###Output
_____no_output_____
###Markdown
5.Los tramos impositivos para la declaración de la renta en un determinado país son los siguientes: Renta % de Impuesto Menos de 10000€ 5% Entre 10000€ y 20000€ 15% Entre 20000€ y 35000€ 20% Entre 35000€ y 60000€ 30% Más de 60000€ 45% Realizar un programa que pueda decir el % de impuestos que una persona deba pagar según su sueldo
###Code
ingreso=float(input("Ingrese su Renta"))
if ingreso<10000:
porcentaje=5
elif ingreso<20000:
porcentaje=15
elif ingreso<35000:
porcentaje=20
elif ingreso<60000:
porcentaje=30
else:
porcentaje=45
print("Tu porcentaje de impuesto es",porcentaje,"%")
###Output
Ingrese su Renta 78614
###Markdown
6. Realiza un programa que lea dos números por teclado y permita elegir entre 3 opciones en un menú:- Mostrar una suma de los dos números- Mostrar una resta de los dos números (el primero menos el segundo)- Mostrar una multiplicación de los dos números- En caso de introducir una opción inválida, el programa informará de que no es correcta.
###Code
a=int(input("Ingrese primer número"))
b=int(input("Ingrese segundo número"))
print("¿Qué tarea desea realizar?\n\t1.Suma de los números\n\t2.Resta de los números\n\t3.Multiplicar los numeros")
opcion=int(input("Introduzca la opción:"))
if opcion==1:
print("La suma de los números es: ",(a+b))
elif opcion==2:
print("La resta de los números es: ",(a-b))
elif opcion==3:
print("La multiplicación de los números es: ",(a*b))
else:
print("La opción no es correcta")
###Output
Ingrese primer número 6
Ingrese segundo número 8
###Markdown
7.La pizzería Bella Napoli ofrece pizzas vegetarianas y no vegetarianas a sus clientes. Los ingredientes para cada tipo de pizza aparecen a continuación.- Ingredientes vegetarianos: Pimiento y tofu.- Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.Escribir un programa que pregunte al usuario si quiere una pizza vegetariana o no, y en función de su respuesta le muestre un menú con los ingredientes disponibles para que elija. Solo se puede eligir un ingrediente además de la mozzarella y el tomate que están en todas la pizzas. Al final se debe mostrar por pantalla si la pizza elegida es vegetariana o no y todos los ingredientes que lleva.
###Code
print("Bienvenido a la pizzeria Bella Napoli.\nTipos de pizza\n\t1-Vegetariana\n\t2-No Vegetariana")
tipo=input("Introduzca el número correspondiente al tipo de pizza que desee:")
if tipo=="1":
print("Los ingredientes de la pizza son:\n\t1-Pimiento\n\t2-Tofu")
ingrediente=input("Ingrese el número correspondiente al ingrediente que desea:")
if ingrediente=="1":
ingrediente="pimiento"
else:
ingrediente="tofu"
print("Pizza Vegetariana con mozzarella, tomate y ",ingrediente)
else :
print("Los ingredientes de la pizza son:\n\t1-Peperoni\n\t2-Jamón\n\t3-Salmón")
ingrediente=input("Ingrese el número correspondiente al ingrediente que desea:")
if ingrediente=="1":
ingrediente="peperoni"
elif ingrediente=="2":
ingrediente="jamón"
else:
ingrediente="salmón"
print("Pizza No Vegetariana con mozzarella, tomate y ",ingrediente)
a=input("Ingrese primer número")
b=input("Ingrese segundo número")
print("""¿Qué tarea desea realizar?
1.Suma de los números
2.Resta de los números
3.Multiplicar los numeros""")
###Output
Ingrese primer número 2
Ingrese segundo número 3
###Markdown
FLUJO DE CONTROL El flujo de control ayudará a nuestro programa en la toma de decisiones 1. Flujo de control (if - else) El uso de condiciones nos permite tener un mayor control sobre el flujo del programa.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. The elif statement (else if) It is chained to an if or to another elif to check multiple conditions, as long as the previous ones have not been executed:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8')
###Output
el valor de x es 9
###Markdown
EXERCISES 1. Create a program that lets a person decide whether or not to cross the street: - If the traffic light is green, cross the street - If the traffic light is red or yellow, do not cross. The person must be able to enter the state of the traffic light from the keyboard.
###Code
# Read the traffic-light colour from the keyboard and decide whether to cross
semaforo = input("Ingrese el estado del semáforo: ").lower()
if semaforo == "verde":
    print("cruzar calle")
elif semaforo == "rojo" or semaforo == "amarillo":
    print("no cruzar")
###Output
_____no_output_____
###Markdown
2. Write a program that asks the user for their age and prints on screen whether or not they are of legal age.
###Code
# Ask the age and report whether the user is of legal age (18 or older)
age = int(input("¿Cuántos años tienes? "))
if age >= 18:
    print("Eres mayor de edad")
else:
    print("Eres menor de edad")
###Output
_____no_output_____
###Markdown
3. Write a program that stores the password string in a variable, asks the user for the password, and prints on screen whether the password entered by the user matches the one stored in the variable, ignoring upper and lower case.
###Code
clave = "contraseña"  # stored password to compare against
password = input("Introduce la contraseña: ")
if clave.lower() == password.lower():  # ignore upper/lower case
    print("La contraseña coincide")
else:
    print("La contraseña no coincide")
###Output
_____no_output_____
###Markdown
4. Write a program that asks the user for an integer and prints on screen whether it is even or odd.
###Code
n = int(input("Introduce un número entero positivo mayor que 2: "))
# A number is even when the remainder of the division by 2 is 0
if n % 2 == 0:
    print(str(n) + " es par")
else:
    print(str(n) + " es impar")
###Output
Introduce un número entero positivo mayor que 2: 10
###Markdown
5. The income tax brackets for the tax return in a certain country are as follows: Income / Tax rate: less than 10000€, 5%; between 10000€ and 20000€, 15%; between 20000€ and 35000€, 20%; between 35000€ and 60000€, 30%; more than 60000€, 45%. Write a program that reports the tax percentage a person must pay according to their salary. 6. Write a program that reads two numbers from the keyboard and lets the user choose among 3 options in a menu: - Show the sum of the two numbers - Show the difference of the two numbers (the first minus the second) - Show the product of the two numbers - If an invalid option is entered, the program reports that it is not valid.
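###Markdown
A minimal sketch of a possible solution to exercise 5 (assumed code, not part of the original notebook; the next cell is the exercise 6 solution):
###Code
renta = float(input("Ingrese su renta anual: "))
# Walk the brackets from lowest to highest
if renta < 10000:
    impuesto = 5
elif renta < 20000:
    impuesto = 15
elif renta < 35000:
    impuesto = 20
elif renta < 60000:
    impuesto = 30
else:
    impuesto = 45
print("Tu porcentaje de impuesto es", impuesto, "%")
###Output
_____no_output_____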
###Code
n1 = float(input("Introduce un número: ") )
n2 = float(input("Introduce otro número: ") )
opcion = 0
print("""
¿Qué quieres hacer?
1) Sumar los dos números
2) Restar los dos números
3) Multiplicar los dos números
""")
opcion = int(input("Introduce un número: ") )
if opcion == 1:
print("La suma de",n1,"+",n2,"es",n1+n2)
elif opcion == 2:
print("La resta de",n1,"-",n2,"es",n1-n2)
elif opcion == 3:
print("El producto de",n1,"*",n2,"es",n1*n2)
else:
print("Opción incorrecta")
###Output
_____no_output_____
###Markdown
7. The Bella Napoli pizzeria offers vegetarian and non-vegetarian pizzas to its customers. The ingredients for each type of pizza are listed below. - Vegetarian ingredients: pepper and tofu. - Non-vegetarian ingredients: pepperoni, ham and salmon. Write a program that asks the user whether they want a vegetarian pizza or not and, depending on the answer, shows a menu with the available ingredients to choose from. Only one ingredient may be chosen in addition to the mozzarella and tomato that come on every pizza. Finally, the program must show on screen whether the chosen pizza is vegetarian or not and all the ingredients it contains.
###Code
# Presentación del menú con los tipos de pizza
print("Bienvenido a la pizzeria Bella Napoli.\nTipos de pizza\n\t1- Vegetariana\n\t2- No vegetariana\n")
tipo = input("Introduce el número correspondiente al tipo de pizza que quieres:")
# Decisión sobre el tipo de pizza
if tipo == "1":
print("Ingredientes de pizzas vegetarianas\n\t 1- Pimiento\n\t2- Tofu\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza vegetariana con mozzarella, tomate y ", end="")
if ingrediente == "1":
print("pimiento")
else:
print("tofu")
else:
print("Ingredientes de pizzas no vegetarianas\n\t1- Peperoni\n\t2- Jamón\n\t3- Salmón\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza no vegetarina con mozarrella, tomate y ", end="")
if ingrediente == "1":
print("peperoni")
elif ingrediente == "2":
print("jamón")
else:
print("salmón")
###Output
_____no_output_____
###Markdown
CONTROL FLOW Control flow will help our program make decisions 1. Control flow (if - else) Using conditions gives us greater control over the flow of the program.
###Code
# Uso de la sentencia if (si)
if True:
print('Hola')
if True:
print('hola') # 4 veces espacio
# Sentencia else (si no)
x = 85
if x == 8 :
print('el valor de x es 8') # 4 espacios
else:
print('el valor de x = {} es distinto de 8'.format(x))
#print('el valor de x es {}'.format(x))
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. The elif statement (else if) It is chained to an if or to another elif to check multiple conditions, as long as the previous ones have not been executed:
###Code
# Sentencia else (si no)
x=10
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8,9 y 10')
###Output
el valor de x es 10
###Markdown
EXERCISES 1. Create a program that lets a person decide whether or not to cross the street: - If the traffic light is green, cross the street - If the traffic light is red or yellow, do not cross. The person must be able to enter the state of the traffic light from the keyboard.
###Code
semaforo = input('Ingrese el color de la luz del semáforo: ')
semaforo = semaforo.lower()
if semaforo == 'verde':
print('Puede cruzar')
elif semaforo == 'amarillo' or semaforo == 'rojo':
print ('No puede cruzar')
else:
print('Valide ingreso de datos!')
# 'RojO'.upper()
'RojO'.lower()
###Output
_____no_output_____
###Markdown
2. Write a program that asks the user for their age and prints on screen whether or not they are of legal age.
###Code
edad=int(input('Ingrese su edad: '))
if(edad<18):
print('Eres menor de edad')
else:
print('Eres mayor de edad')
###Output
Eres menor de edad
###Markdown
CONTROL FLOW Control flow will help our program make decisions 1. Control flow (if - else) Using conditions gives us greater control over the flow of the program.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. The elif statement (else if) It is chained to an if or to another elif to check multiple conditions, as long as the previous ones have not been executed:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8')
###Output
el valor de x es 9
###Markdown
EXERCISES 1. Create a program that lets a person decide whether or not to cross the street: - If the traffic light is green, cross the street - If the traffic light is red or yellow, do not cross. The person must be able to enter the state of the traffic light from the keyboard.
###Code
semaforo = input('El semaforo tiene color: ')
#semaforo.lower() # a minusculas
semaforo.upper() # a mayusculas
if semaforo == 'rojo':
print('no cruzar la calle')
elif semaforo == 'rojo'or semaforo == 'amarillo':
print('no cruzar')
else:
print('entiendo')
###Output
no cruzar la calle
###Markdown
4. Write a program that asks the user for an integer and prints on screen whether it is even or odd.
###Code
numero = int(input('Ingrese un numero entero: '))
# numero % 2 -> resto de un numero
if numero % 2 == 0:
print(f'El numero ingresado {numero} es par')
else:
print('el numero ingresado {} NO es par'.format(numero))
'el numero ingresado {} NO es par'.format(numero)
f'El numero ingresado {numero} es par'
###Output
_____no_output_____
###Markdown
5. The income tax brackets for the tax return in a certain country are as follows: Income / Tax rate: less than 10000€, 5%; between 10000€ and 20000€, 15%; between 20000€ and 35000€, 20%; between 35000€ and 60000€, 30%; more than 60000€, 45%. Write a program that reports the tax percentage a person must pay according to their salary.
###Code
income = float(input("¿Cuál es tu renta anual? "))
if income < 10000:
tax = 5
elif income < 20000:
tax = 15
elif income < 35000:
tax = 20
elif income < 60000:
tax = 30
else:
tax = 45
print("Tu tipo impositivo es " + str(tax) + "%")
###Output
¿Cuál es tu renta anual? 7
###Markdown
6. Write a program that reads two numbers from the keyboard and lets the user choose among 3 options in a menu: - Show the sum of the two numbers - Show the difference of the two numbers (the first minus the second) - Show the product of the two numbers - If an invalid option is entered, the program reports that it is not valid.
###Code
n1 = float(input("primer número: ") )
n2 = float(input("segundo número: ") )
opcion = 0
print("""
¿Qué quieres hacer?
1) Sumar los dos números
2) Restar los dos números
3) Multiplicar los dos números
""")
opcion = int(input("Introduce un número: ") )
if opcion == 1:
print("La suma de",n1,"+",n2,"es",n1+n2)
elif opcion == 2:
print("La resta de",n1,"-",n2,"es",n1-n2)
elif opcion == 3:
print("El producto de",n1,"*",n2,"es",n1*n2)
else:
print("Opción incorrecta")
n1 = float(input("primer número: ") )
n2 = float(input("segundo número: ") )
opcion = 0
print("""
¿Qué quieres hacer?
1) Sumar los dos números
2) Restar los dos números
3) Multiplicar los dos números
""")
opcion = int(input("Introduce un número: ") )
if opcion == 1:
print("La suma de",n1,"+",n2,"es",n1+n2)
elif opcion == 2:
print("La resta de",n1,"-",n2,"es",n1-n2)
elif opcion == 3:
print("El producto de",n1,"*",n2,"es",n1*n2)
else:
print("Opción incorrecta")
###Output
primer número: 5
segundo número: 8
###Markdown
7. The Bella Napoli pizzeria offers vegetarian and non-vegetarian pizzas to its customers. The ingredients for each type of pizza are listed below. - Vegetarian ingredients: pepper and tofu. - Non-vegetarian ingredients: pepperoni, ham and salmon. Write a program that asks the user whether they want a vegetarian pizza or not and, depending on the answer, shows a menu with the available ingredients to choose from. Only one ingredient may be chosen in addition to the mozzarella and tomato that come on every pizza. Finally, the program must show on screen whether the chosen pizza is vegetarian or not and all the ingredients it contains.
###Code
print("Bienvenido a la pizzeria Bella Napoli.\nTipos de pizza\n\t1- Vegetariana\n\t2- No vegetariana\n")
tipo = input("Introduce el número correspondiente al tipo de pizza que quieres:")
# Decisión sobre el tipo de pizza
if tipo == "1":
print("Ingredientes de pizzas vegetarianas\n\t 1- Pimiento\n\t2- Tofu\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza vegetariana con mozzarella, tomate y ", end="")
if ingrediente == "1":
print("pimiento")
else:
print("tofu")
else:
print("Ingredientes de pizzas no vegetarianas\n\t1- Peperoni\n\t2- Jamón\n\t3- Salmón\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza no vegetarina con mozarrella, tomate y ", end="")
if ingrediente == "1":
print("peperoni")
elif ingrediente == "2":
print("jamón")
else:
print("salmón")
###Output
Bienvenido a la pizzeria Bella Napoli.
Tipos de pizza
1- Vegetariana
2- No vegetariana
###Markdown
CONTROL FLOW Control flow will help our program make decisions 1. Control flow (if - else) Using conditions gives us greater control over the flow of the program.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. The elif statement (else if) It is chained to an if or to another elif to check multiple conditions, as long as the previous ones have not been executed:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8, 9 o 10')
###Output
el valor de x es 9
###Markdown
EXERCISES 1. Create a program that lets a person decide whether or not to cross the street: - If the traffic light is green, cross the street - If the traffic light is red or yellow, do not cross. The person must be able to enter the state of the traffic light from the keyboard.
###Code
semaforo = input('El semaforo tiene color: ')
#semaforo.lower() # a minusculas
semaforo.upper() # a mayusculas
semaforo = semaforo.lower()
if semaforo == 'verde':
print('cruzar la calle')
elif semaforo == 'rojo'or semaforo == 'amarillo':
print('no cruzar')
else:
print('no entiendo')
###Output
cruzar la calle
###Markdown
2. Write a program that asks the user for their age and prints on screen whether or not they are of legal age.
###Code
# 1. preguntando la edad de una persona
edad = int(input('Ingrese su edad: '))
if edad >=18:
print('La persona es mayor de edad')
else:
print('La persona es menor de edad')
###Output
La persona es menor de edad
###Markdown
3. Write a program that stores the password string in a variable, asks the user for the password, and prints on screen whether the password entered by the user matches the one stored in the variable, ignoring upper and lower case.
###Code
key = "matysger"
password = input("Introduce la contraseña: ")
if key == password.lower():
print("La contaseña coincide")
else:
print("La contraseña no coincide")
###Output
_____no_output_____
###Markdown
4. Write a program that asks the user for an integer and prints on screen whether it is even or odd.
###Code
numero = int(input('Ingrese un numero entero: '))
# numero % 2 -> resto de un numero
if numero % 2 == 0:
print(f'El numero ingresado {numero} es par')
else:
print('el numero ingresado {} NO es par'.format(numero))
'el numero ingresado {} NO es par'.format(numero)
f'El numero ingresado {numero} es par'
###Output
_____no_output_____
###Markdown
5. The income tax brackets for the tax return in a certain country are as follows: Income / Tax rate: less than 10000€, 5%; between 10000€ and 20000€, 15%; between 20000€ and 35000€, 20%; between 35000€ and 60000€, 30%; more than 60000€, 45%. Write a program that reports the tax percentage a person must pay according to their salary.
###Code
income = float(input("¿Cuál es tu renta anual? "))
if income < 10000:
tax = 5
elif income < 20000:
tax = 15
elif income < 35000:
tax = 20
elif income < 60000:
tax = 30
else:
tax = 45
print("Tu tipo impositivo es " + str(tax) + "%")
###Output
¿Cuál es tu renta anual? 10000
###Markdown
6. Write a program that reads two numbers from the keyboard and lets the user choose among 3 options in a menu: - Show the sum of the two numbers - Show the difference of the two numbers (the first minus the second) - Show the product of the two numbers - If an invalid option is entered, the program reports that it is not valid.
###Code
n1 = float(input("Introduce un número: ") )
n2 = float(input("Introduce otro número: ") )
opcion = 0
print("""
¿Qué quieres hacer?
1) Sumar los dos números
2) Restar los dos números
3) Multiplicar los dos números
""")
opcion = int(input("Introduce un número: ") )
if opcion == 1:
print("La suma de",n1,"+",n2,"es",n1+n2)
elif opcion == 2:
print("La resta de",n1,"-",n2,"es",n1-n2)
elif opcion == 3:
print("El producto de",n1,"*",n2,"es",n1*n2)
else:
print("Opción incorrecta")
###Output
Introduce un número: 1
Introduce otro número: 2
###Markdown
7. The Bella Napoli pizzeria offers vegetarian and non-vegetarian pizzas to its customers. The ingredients for each type of pizza are listed below. - Vegetarian ingredients: pepper and tofu. - Non-vegetarian ingredients: pepperoni, ham and salmon. Write a program that asks the user whether they want a vegetarian pizza or not and, depending on the answer, shows a menu with the available ingredients to choose from. Only one ingredient may be chosen in addition to the mozzarella and tomato that come on every pizza. Finally, the program must show on screen whether the chosen pizza is vegetarian or not and all the ingredients it contains.
###Code
# Presentación del menú con los tipos de pizza
print("Bienvenido a la pizzeria Bella Napoli.\nTipos de pizza\n\t1- Vegetariana\n\t2- No vegetariana\n")
tipo = input("Introduce el número correspondiente al tipo de pizza que quieres:")
# Decisión sobre el tipo de pizza
if tipo == "1":
print("Ingredientes de pizzas vegetarianas\n\t 1- Pimiento\n\t2- Tofu\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza vegetariana con mozzarella, tomate y ", end="")
if ingrediente == "1":
print("pimiento")
else:
print("tofu")
else:
print("Ingredientes de pizzas no vegetarianas\n\t1- Peperoni\n\t2- Jamón\n\t3- Salmón\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza no vegetarina con mozarrella, tomate y ", end="")
if ingrediente == "1":
print("peperoni")
elif ingrediente == "2":
print("jamón")
else:
print("salmón")
###Output
Bienvenido a la pizzeria Bella Napoli.
Tipos de pizza
1- Vegetariana
2- No vegetariana
###Markdown
CONTROL FLOW Control flow will help our program make decisions 1. Control flow (if - else) Using conditions gives us greater control over the flow of the program.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. The elif statement (else if) It is chained to an if or to another elif to check multiple conditions, as long as the previous ones have not been executed:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8')
###Output
el valor de x es 9
###Markdown
EXERCISES 1. Create a program that lets a person decide whether or not to cross the street: - If the traffic light is green, cross the street - If the traffic light is red or yellow, do not cross. The person must be able to enter the state of the traffic light from the keyboard.
###Code
semaforo=input('El semaforo tiene color: ')
if semaforo=='verde':
print('cruzar la calle')
elif semaforo == 'rojo'or semaforo == 'amarillo':
print('no cruzar')
###Output
El semaforo tiene color: rojo
###Markdown
2. Write a program that asks the user for their age and prints on screen whether or not they are of legal age.
###Code
edad=int(input('¿Cuál es tu edad?'))
if edad>=18:
print('Usted es mayor de edad')
else:
if edad > 0:
print('Usted es menor de edad')
else:
print('Edad no válida')
###Output
¿Cuál es tu edad? 24
###Markdown
3. Write a program that stores the password string in a variable, asks the user for the password, and prints on screen whether the password entered by the user matches the one stored in the variable, ignoring upper and lower case.
###Code
contraseña='@12345ABC'
password=input('Ingresa tu contraseña:')
if contraseña.upper()==password.upper():
print('La contraseña coincide')
else:
print('La contraseña no coincide')
###Output
Ingresa tu contraseña: @12345aBc
###Markdown
4. Write a program that asks the user for an integer and prints on screen whether it is even or odd.
###Code
numero=int(input('Ingrese un número:'))
if numero%2==0:
print('El número es par')
else:
print('El número es impar')
###Output
Ingrese un número: 4
###Markdown
5. The income tax brackets for the tax return in a certain country are as follows: Income / Tax rate: less than 10000€, 5%; between 10000€ and 20000€, 15%; between 20000€ and 35000€, 20%; more than 60000€, 30%; between 35000€ and 60000€, 45%. Write a program that reports the tax percentage a person must pay according to their salary.
###Code
sueldo = float(input('Ingresa tu sueldo en euros:'))
if sueldo < 10000:
print('Tus impuestos son del 5%')
else:
if sueldo < 20000:
print('Tus impuestos son del 15%')
else:
if sueldo < 35000:
print('Tus impuestos son del 20%')
else:
if sueldo <= 60000:
print('Tus impuestos son del 45%')
else:
print('Tus impuestos son del 30%')
###Output
Ingresa tu sueldo en euros: 60001
###Markdown
6. Write a program that reads two numbers from the keyboard and lets the user choose among 3 options in a menu: - Show the sum of the two numbers - Show the difference of the two numbers (the first minus the second) - Show the product of the two numbers - If an invalid option is entered, the program reports that it is not valid.
###Code
numero1 = int(input('Ingresa el primer número:'))
numero2 = int(input('Ingresa el primer número:'))
opcion = input("""Selecciona una opcion:
S: Suma
R: Resta
M: Multiplicación
""")
if opcion.upper() == 'S':
print('Suma: {}'.format(numero1+numero2))
else:
if opcion.upper() == 'R':
print('Resta: {}'.format(numero1-numero2))
else:
if opcion.upper() == 'M':
print('Multiplicación: {}'.format(numero1*numero2))
else:
print('Opción no es correcta')
###Output
Ingresa el primer número: 9
Ingresa el primer número: 3
Selecciona una opcion:
S: Suma
R: Resta
M: Multiplicación
m
###Markdown
7. The Bella Napoli pizzeria offers vegetarian and non-vegetarian pizzas to its customers. The ingredients for each type of pizza are listed below. - Vegetarian ingredients: pepper and tofu. - Non-vegetarian ingredients: pepperoni, ham and salmon. Write a program that asks the user whether they want a vegetarian pizza or not and, depending on the answer, shows a menu with the available ingredients to choose from. Only one ingredient may be chosen in addition to the mozzarella and tomato that come on every pizza. Finally, the program must show on screen whether the chosen pizza is vegetarian or not and all the ingredients it contains.
###Code
tipo = input("""¿Desea una pizza vegetariana?
S: Sí
N: No
""")
if tipo.upper() == 'S':
ingrediente = input("""Elige un ingrediente:
A. Pimiento
B. Tofu
""")
print("""La pizza es vegetariana y sus ingredientes son:
- Mozzarella
- Tomate""")
else:
if tipo.upper() == 'N':
ingrediente = input("""Elige un ingrediente:
C. Peperoni
D. Jamón
E. Salmón
""")
print("""La pizza no es vegetariana y sus ingredientes son:
- Mozzarella
- Tomate""")
else:
print('Opción no valida')
if ingrediente.upper() == 'A':
print('- Pimiento')
else:
if ingrediente.upper() == 'B':
print('- Tofu')
else:
if ingrediente.upper() == 'C':
print('- Peperoni')
else:
if ingrediente.upper() == 'D':
print('- Jamón')
else:
if ingrediente.upper() == 'E':
print('- Salmón')
###Output
¿Desea una pizza vegetariana?
S: Sí
N: No
n
Elige un ingrediente:
C. Peperoni
D. Jamón
E. Salmón
d
###Markdown
CONTROL FLOW Control flow will help our program make decisions 1. Control flow (if - else) Using conditions gives us greater control over the flow of the program.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. The elif statement (else if) It is chained to an if or to another elif to check multiple conditions, as long as the previous ones have not been executed:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8, 9 o 10')
###Output
el valor de x es 9
###Markdown
EXERCISES 1. Create a program that lets a person decide whether or not to cross the street: - If the traffic light is green, cross the street - If the traffic light is red or yellow, do not cross. The person must be able to enter the state of the traffic light from the keyboard.
###Code
semaforo = input('El semaforo tiene color: ')
#semaforo.lower() # a minusculas
semaforo.upper() # a mayusculas
semaforo = semaforo.lower()
if semaforo == 'verde':
print('cruzar la calle')
elif semaforo == 'rojo'or semaforo == 'amarillo':
print('no cruzar')
else:
print('no entiendo')
###Output
cruzar la calle
###Markdown
2. Write a program that asks the user for their age and prints on screen whether or not they are of legal age.
###Code
# 1. preguntando la edad de una persona
edad = int(input('Ingrese su edad: '))
if edad >=18:
print('La persona es mayor de edad')
else:
print('La persona es menor de edad')
###Output
La persona es menor de edad
###Markdown
3. Write a program that stores the password string in a variable, asks the user for the password, and prints on screen whether the password entered by the user matches the one stored in the variable, ignoring upper and lower case.
###Code
key = "contraseña"
password = input("Introduce la contraseña: ")
if key == password.lower():
print("La contaseña coincide")
else:
print("La contraseña no coincide")
###Output
Introduce la contraseña: contraseña
###Markdown
4. Write a program that asks the user for an integer and prints on screen whether it is even or odd.
###Code
numero = int(input('Ingrese un numero entero: '))
# numero % 2 -> resto de un numero
if numero % 2 == 0:
print(f'El numero ingresado {numero} es par')
else:
print('el numero ingresado {} NO es par'.format(numero))
'el numero ingresado {} NO es par'.format(numero)
f'El numero ingresado {numero} es par'
###Output
_____no_output_____
###Markdown
5. The income tax brackets for the tax return in a certain country are as follows: Income / Tax rate: less than 10000€, 5%; between 10000€ and 20000€, 15%; between 20000€ and 35000€, 20%; between 35000€ and 60000€, 30%; more than 60000€, 45%. Write a program that reports the tax percentage a person must pay according to their salary.
###Code
income = float(input("¿Cuál es tu renta anual? "))
if income < 10000:
tax = 5
elif income < 20000:
tax = 15
elif income < 35000:
tax = 20
elif income < 60000:
tax = 30
else:
tax = 45
print("Tu tipo impositivo es " + str(tax) + "%")
###Output
¿Cuál es tu renta anual? 10000
###Markdown
6. Write a program that reads two numbers from the keyboard and lets the user choose among 3 options in a menu: - Show the sum of the two numbers - Show the difference of the two numbers (the first minus the second) - Show the product of the two numbers - If an invalid option is entered, the program reports that it is not valid.
###Code
n1 = float(input("Introduce un número: ") )
n2 = float(input("Introduce otro número: ") )
opcion = 0
print("""
¿Qué quieres hacer?
1) Sumar los dos números
2) Restar los dos números
3) Multiplicar los dos números
""")
opcion = int(input("Introduce un número: ") )
if opcion == 1:
print("La suma de",n1,"+",n2,"es",n1+n2)
elif opcion == 2:
print("La resta de",n1,"-",n2,"es",n1-n2)
elif opcion == 3:
print("El producto de",n1,"*",n2,"es",n1*n2)
else:
print("Opción incorrecta")
###Output
Introduce un número: 5
Introduce otro número: 4
###Markdown
7. The Bella Napoli pizzeria offers vegetarian and non-vegetarian pizzas to its customers. The ingredients for each type of pizza are listed below. - Vegetarian ingredients: pepper and tofu. - Non-vegetarian ingredients: pepperoni, ham and salmon. Write a program that asks the user whether they want a vegetarian pizza or not and, depending on the answer, shows a menu with the available ingredients to choose from. Only one ingredient may be chosen in addition to the mozzarella and tomato that come on every pizza. Finally, the program must show on screen whether the chosen pizza is vegetarian or not and all the ingredients it contains.
###Code
# Presentación del menú con los tipos de pizza
print("Bienvenido a la pizzeria Bella Napoli.\nTipos de pizza\n\t1- Vegetariana\n\t2- No vegetariana\n")
tipo = input("Introduce el número correspondiente al tipo de pizza que quieres:")
# Decisión sobre el tipo de pizza
if tipo == "1":
print("Ingredientes de pizzas vegetarianas\n\t 1- Pimiento\n\t2- Tofu\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza vegetariana con mozzarella, tomate y ", end="")
if ingrediente == "1":
print("pimiento")
else:
print("tofu")
else:
print("Ingredientes de pizzas no vegetarianas\n\t1- Peperoni\n\t2- Jamón\n\t3- Salmón\n")
ingrediente = input("Introduce el ingrediente que deseas: ")
print("Pizza no vegetarina con mozarrella, tomate y ", end="")
if ingrediente == "1":
print("peperoni")
elif ingrediente == "2":
print("jamón")
else:
print("salmón")
###Output
Bienvenido a la pizzeria Bella Napoli.
Tipos de pizza
1- Vegetariana
2- No vegetariana
###Markdown
CONTROL FLOW Control flow will help our program make decisions 1. Control flow (if - else) Using conditions gives us greater control over the flow of the program.
###Code
# Uso de la sentencia if (si)
if True:
print('hola')
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
else:
print('el valor de x es distinto de 8')
# uso de If anidado
a = 5
b = 10
if a == 5:
print("a vale",a)
if b == 10:
print('y b vale',b)
###Output
a vale 5
y b vale 10
###Markdown
2. The elif statement (else if) It is chained to an if or to another elif to check multiple conditions, as long as the previous ones have not been executed:
###Code
# Sentencia else (si no)
x=9
if x==8 :
print('el valor de x es 8')
elif x==9:
print('el valor de x es 9')
elif x==10:
print('el valor de x es 10')
else:
print('el valor de x es distinto de 8, 9 o 10')
###Output
el valor de x es 9
###Markdown
EXERCISES 1. Create a program that lets a person decide whether or not to cross the street: - If the traffic light is green, cross the street - If the traffic light is red or yellow, do not cross. The person must be able to enter the state of the traffic light from the keyboard.
###Code
semaforo = input('El semaforo tiene color: ')
#semaforo.lower() # a minusculas
semaforo.upper() # a mayusculas
semaforo = semaforo.lower()
if semaforo == 'verde':
print('cruzar la calle')
elif semaforo == 'rojo'or semaforo == 'amarillo':
print('no cruzar')
else:
print('no entiendo')
###Output
cruzar la calle
###Markdown
2. Write a program that asks the user for their age and prints on screen whether or not they are of legal age.
###Code
# 1. preguntando la edad de una persona
edad = int(input('Ingrese su edad: '))
if edad >=18:
print('La persona es mayor de edad')
else:
print('La persona es menor de edad')
###Output
La persona es menor de edad
###Markdown
3. Write a program that stores the password string in a variable, asks the user for the password, and prints on screen whether the password entered by the user matches the one stored in the variable, ignoring upper and lower case. 4. Write a program that asks the user for an integer and prints on screen whether it is even or odd.
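###Markdown
A minimal sketch of a possible solution to exercise 3 (assumed code, not part of the original notebook; the stored password value is hypothetical, and the next cell addresses exercise 4):
###Code
clave = "contraseña"  # stored password (hypothetical value)
intento = input("Introduce la contraseña: ")
if clave.lower() == intento.lower():  # case-insensitive comparison
    print("La contraseña coincide")
else:
    print("La contraseña no coincide")
###Output
_____no_output_____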
###Code
numero = int(input('Ingrese un numero entero: '))
# numero % 2 -> resto de un numero
if numero % 2 == 0:
print(f'El numero ingresado {numero} es par')
else:
print('el numero ingresado {} NO es par'.format(numero))
'el numero ingresado {} NO es par'.format(numero)
f'El numero ingresado {numero} es par'
###Output
_____no_output_____ |
notebooks/review_results/identify_a_sample_to_review.ipynb | ###Markdown
Review model results - Step 1 - Identify a sample to review Setup This notebook assumes: Terra is running custom Docker image ghcr.io/broadinstitute/ml4h/ml4h_terra:20211101_143643. ml4h is running custom Docker image gcr.io/broad-ml4cvd/deeplearning:tf2-latest-gpu.
###Code
# TODO(deflaux): remove this cell after gcr.io/broad-ml4cvd/deeplearning:tf2-latest-gpu has this preinstalled.
from ml4h.runtime_data_defines import determine_runtime
from ml4h.runtime_data_defines import Runtime
if Runtime.ML4H_VM == determine_runtime():
!pip3 install --user --upgrade pandas_gbq pyarrow
# Be sure to restart the kernel if pip installs anything.
from ml4h.visualization_tools.facets import FacetsOverview, FacetsDive # Interactive data exploration of tabular data.
import numpy as np
import os
import pandas as pd
import re
%load_ext google.cloud.bigquery
if 'GOOGLE_PROJECT' in os.environ:
BILLING_PROJECT_ID = os.environ['GOOGLE_PROJECT']
else:
BILLING_PROJECT_ID = 'broad-ml4cvd'
###Output
_____no_output_____
###Markdown
Identify a sample to review If you want to change the SQL below, you can view the available tables: phenotype descriptions, phenotype values, and available ML results.
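###Markdown
A quick way to list what is in those datasets from the notebook itself (a sketch, not part of the original workflow; it assumes the billing project has BigQuery read access to the `uk-biobank-sek-data` project and uses the standard INFORMATION_SCHEMA views):
###Code
# List tables in the two datasets referenced by the query below
tables = pd.read_gbq(
    """
    SELECT table_schema, table_name
    FROM `uk-biobank-sek-data.ml_results.INFORMATION_SCHEMA.TABLES`
    UNION ALL
    SELECT table_schema, table_name
    FROM `uk-biobank-sek-data.raw_phenotypes.INFORMATION_SCHEMA.TABLES`
    """,
    project_id=BILLING_PROJECT_ID)
tables
###Output
_____no_output_____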
###Code
#---[ EDIT AND RUN THIS CELL TO READ FROM A LOCAL FILE ]---
MODEL_RESULTS_FILE = None
if MODEL_RESULTS_FILE:
sample_info = pd.read_csv(MODEL_RESULTS_FILE)
else:
sample_info = pd.read_gbq("""
---[ EDIT THIS QUERY IF YOU LIKE ]---
SELECT
sample_id,
CASE u31_0_0
WHEN 0 THEN 'Female'
WHEN 1 THEN 'Male'
ELSE 'Unknown' END AS sex_at_birth,
u21003_0_0 AS age_at_assessment,
u21001_0_0 AS bmi,
CASE u1249_0_0
WHEN 1 THEN 'Smoked on most or all days'
WHEN 2 THEN 'Smoked occasionally'
WHEN 3 THEN 'Just tried once or twice'
WHEN 4 THEN 'I have never smoked'
WHEN -3 THEN 'Prefer not to answer' END AS past_tobacco_smoking,
ecg.* EXCEPT(sample_id)
FROM
`uk-biobank-sek-data.raw_phenotypes.ukb9222_no_empty_strings_20181128`
INNER JOIN
`uk-biobank-sek-data.ml_results.inference_ecg_rest_age_sex_autoencode_lvmass` AS ecg
ON
eid = sample_id""", project_id=BILLING_PROJECT_ID)
sample_info.shape
# Compute the deltas between actual values and predicted value columns.
actual_regexp = re.compile('^(\w+)_actual$')
for actual_col in sample_info.columns:
if actual_col.endswith('_actual'):
prediction_col = actual_regexp.sub(r'\1_prediction', actual_col)
if prediction_col in sample_info.columns:
delta_col = actual_regexp.sub(r'\1_delta', actual_col)
print('Adding ' + delta_col)
sample_info[delta_col] = (sample_info[actual_col].astype('float')
- sample_info[prediction_col].astype('float'))
sample_info.shape
###Output
_____no_output_____
###Markdown
Facets OverviewUse this visualization to get an overview of the type and distribution of sample information available.For detailed instructions, see [Facets Overview](https://pair-code.github.io/facets/).
###Code
FacetsOverview(sample_info)
###Output
_____no_output_____
###Markdown
Facets Dive Use this visualization to get an overview of the distributions of values for *groups* of samples. For detailed instructions, see [Facets Dive](https://pair-code.github.io/facets/).**NOTE**:* It might take a few seconds for the visualization to appear.* If the table of contents pane is in the way of the column selector drop down, click on the button to turn the table of contents off.* Try: * Binning | X-Axis: `sex_at_birth` * Binning | Y-Axis: `bmi`, use the 'count' drop down to increase/decrease the number of categorical bins * Label By: `sample_id` * Color By: `age_at_assessment` * Scatter | X-Axis: `LVM_prediction_sentinel_actual` * Scatter | Y-Axis: `LVM_prediction_sentinel_prediction` Zoom in, click on the sample(s) of interest and you'll see a pane on the right-hand side with all the data for the sample **including the sample_id**, which you should use for the next step.
###Code
FacetsDive(sample_info)
###Output
_____no_output_____
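###Markdown
A minimal non-interactive alternative to the Dive scatter suggestion above (a sketch; it assumes the `LVM_prediction_sentinel_actual` and `LVM_prediction_sentinel_prediction` columns are present in `sample_info`):
###Code
import matplotlib.pyplot as plt
# Static actual-vs-predicted scatter for a quick sanity check
ax = sample_info.plot.scatter(
    x='LVM_prediction_sentinel_actual',
    y='LVM_prediction_sentinel_prediction',
    alpha=0.3)
ax.set_xlabel('LVM actual')
ax.set_ylabel('LVM predicted')
plt.show()
###Output
_____no_output_____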
###Markdown
Provenance
###Code
import datetime
print(datetime.datetime.now())
%%bash
pip3 freeze
###Output
_____no_output_____
###Markdown
Review model results - Step 1 - Identify a sample to review Setup This notebook assumes: Terra is running custom Docker image ghcr.io/broadinstitute/ml4h/ml4h_terra:20210928_221837. ml4h is running custom Docker image gcr.io/broad-ml4cvd/deeplearning:tf2-latest-gpu.
###Code
# TODO(deflaux): remove this cell after gcr.io/broad-ml4cvd/deeplearning:tf2-latest-gpu has this preinstalled.
from ml4h.runtime_data_defines import determine_runtime
from ml4h.runtime_data_defines import Runtime
if Runtime.ML4H_VM == determine_runtime():
!pip3 install --user --upgrade pandas_gbq pyarrow
# Be sure to restart the kernel if pip installs anything.
from ml4h.visualization_tools.facets import FacetsOverview, FacetsDive # Interactive data exploration of tabular data.
import numpy as np
import os
import pandas as pd
import re
%load_ext google.cloud.bigquery
if 'GOOGLE_PROJECT' in os.environ:
BILLING_PROJECT_ID = os.environ['GOOGLE_PROJECT']
else:
BILLING_PROJECT_ID = 'broad-ml4cvd'
###Output
_____no_output_____
###Markdown
Identify a sample to review If you want to change the SQL below, you can view the available tables: phenotype descriptions, phenotype values, and available ML results.
###Code
#---[ EDIT AND RUN THIS CELL TO READ FROM A LOCAL FILE ]---
MODEL_RESULTS_FILE = None
if MODEL_RESULTS_FILE:
sample_info = pd.read_csv(MODEL_RESULTS_FILE)
else:
sample_info = pd.read_gbq("""
---[ EDIT THIS QUERY IF YOU LIKE ]---
SELECT
sample_id,
CASE u31_0_0
WHEN 0 THEN 'Female'
WHEN 1 THEN 'Male'
ELSE 'Unknown' END AS sex_at_birth,
u21003_0_0 AS age_at_assessment,
u21001_0_0 AS bmi,
CASE u1249_0_0
WHEN 1 THEN 'Smoked on most or all days'
WHEN 2 THEN 'Smoked occasionally'
WHEN 3 THEN 'Just tried once or twice'
WHEN 4 THEN 'I have never smoked'
WHEN -3 THEN 'Prefer not to answer' END AS past_tobacco_smoking,
ecg.* EXCEPT(sample_id)
FROM
`uk-biobank-sek-data.raw_phenotypes.ukb9222_no_empty_strings_20181128`
INNER JOIN
`uk-biobank-sek-data.ml_results.inference_ecg_rest_age_sex_autoencode_lvmass` AS ecg
ON
eid = sample_id""", project_id=BILLING_PROJECT_ID)
sample_info.shape
# Compute the deltas between actual values and predicted value columns.
actual_regexp = re.compile('^(\w+)_actual$')
for actual_col in sample_info.columns:
if actual_col.endswith('_actual'):
prediction_col = actual_regexp.sub(r'\1_prediction', actual_col)
if prediction_col in sample_info.columns:
delta_col = actual_regexp.sub(r'\1_delta', actual_col)
print('Adding ' + delta_col)
sample_info[delta_col] = (sample_info[actual_col].astype('float')
- sample_info[prediction_col].astype('float'))
sample_info.shape
###Output
_____no_output_____
###Markdown
Facets OverviewUse this visualization to get an overview of the type and distribution of sample information available.For detailed instructions, see [Facets Overview](https://pair-code.github.io/facets/).
###Code
FacetsOverview(sample_info)
###Output
_____no_output_____
###Markdown
Facets Dive Use this visualization to get an overview of the distributions of values for *groups* of samples. For detailed instructions, see [Facets Dive](https://pair-code.github.io/facets/).**NOTE**:* It might take a few seconds for the visualization to appear.* If the table of contents pane is in the way of the column selector drop down, click on the button to turn the table of contents off.* Try: * Binning | X-Axis: `sex_at_birth` * Binning | Y-Axis: `bmi`, use the 'count' drop down to increase/decrease the number of categorical bins * Label By: `sample_id` * Color By: `age_at_assessment` * Scatter | X-Axis: `LVM_prediction_sentinel_actual` * Scatter | Y-Axis: `LVM_prediction_sentinel_prediction` Zoom in, click on the sample(s) of interest and you'll see a pane on the right-hand side with all the data for the sample **including the sample_id**, which you should use for the next step.
###Code
FacetsDive(sample_info)
###Output
_____no_output_____
###Markdown
Provenance
###Code
import datetime
print(datetime.datetime.now())
%%bash
pip3 freeze
###Output
_____no_output_____ |
docs/Notebooks/Split_verse_Weighted_masking.ipynb | ###Markdown
Test masking with clumping versus straight masking, with both the old and new gradient In this notebook we assess the change in handling the weighted mask for Condition 2 of Figueira et al. 2016. For that paper the spectrum is broken into several small spectra at the regions masked by telluric lines (>2%). The updated version simply applies a boolean mask after the pixel weights are calculated.
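###Markdown
Before working on a real spectrum, a toy illustration of the two code paths on synthetic data (a sketch; the Gaussian line profile, mask position, and flux scale are arbitrary assumptions, so only the comparison between the two numbers is meaningful):
###Code
import numpy as np
from eniric.legacy import RVprec_calc_masked
from eniric.precision import rv_precision
# Synthetic wavelength grid with a few absorption lines and a telluric-like boolean mask
wl = np.linspace(2.10, 2.11, 500)
fl = np.ones_like(wl)
for center in (2.102, 2.105, 2.108):
    fl -= 0.3 * np.exp(-0.5 * ((wl - center) / 2e-4) ** 2)
msk = np.ones_like(wl, dtype=bool)
msk[200:260] = False  # pretend a deep telluric line falls here
print("Split into clumps :", RVprec_calc_masked(wl, fl, msk, grad=True))
print("Mask the weights  :", rv_precision(wl, fl, msk, grad=True))
###Output
_____no_output_____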
###Code
import numpy as np
import matplotlib.pyplot as plt
from eniric.atmosphere import Atmosphere
from eniric.legacy import mask_clumping, RVprec_calc_masked
from scripts.phoenix_precision import convolve_and_resample
from eniric.snr_normalization import snr_constant_band
from eniric.precision import pixel_weights, rv_precision
from eniric.utilities import band_limits, load_aces_spectrum, wav_selector
wav_, flux_ = load_aces_spectrum([3900, 4.5, 0, 0])
# Small section in K bands to experiment with
wav, flux = wav_selector(wav_, flux_, 2.1025, 2.1046)
# Telluric mask
atm_ = Atmosphere.from_band("K", bary=True)
atm = atm_.at(wav)
mask = atm.mask
###Output
_____no_output_____
###Markdown
Visualize the Pixel Weights:
###Code
# Clumping method
wclump, fclump = mask_clumping(wav, flux, mask)
print("# Number of clumps = ", len(wclump))
# print(wclump, fclump)
print(len(wclump))
wis_0 = pixel_weights(wav, flux, grad=False)
wis_1 = pixel_weights(wav, flux, grad=True)
wis_0 *= mask[:-1]
wis_1 *= mask
wis_0[wis_0 == 0] = np.nan
wis_1[wis_1 == 0] = np.nan
plt_setting = {"figsize": (15, 6)}
plt.figure(**plt_setting)
plt.plot(wav, flux / np.max(flux), label="Star")
plt.plot(atm.wl, atm.transmission, label="Telluric")
plt.plot(atm.wl, atm.mask, "--", label="Mask")
plt.axhline(0.98)
plt.legend()
plt.figure(**plt_setting)
plt.plot(wav[:-1], wis_0, "bs-", label="Mask Grad False")
plt.plot(wav, wis_1, "ko--", label="Mask Grad true")
w, f = (wclump[0], fclump[0])
wis1 = pixel_weights(w, f, grad=True)
wis0 = pixel_weights(w, f, grad=False)
plt.plot(w[:-1], wis0 * 1.05, "g+:", label="Clump, Grad False")
plt.plot(w, wis1 * 1.05, "r.-.", label="Clump, Grad True")
plt.legend()
plt.xlim(wclump[0][0] * 0.99999, wclump[0][-1] * 1.00001)
plt.show()
plt.figure(**plt_setting)
plt.plot(wav[:-1], wis_0, "bs-", label="grad False")
plt.plot(wav, wis_1, "ko--", label="grad true")
w, f = (wclump[1], fclump[1])
wis1 = pixel_weights(w, f, grad=True)
wis0 = pixel_weights(w, f, grad=False)
plt.plot(w[:-1], wis0 * 1.05, "g+:", label="Clump grad False")
plt.plot(w, wis1 * 1.05, "r.-.", label="Clump grad True")
plt.legend()
plt.xlim(wclump[-1][0] * 0.999999, wclump[-1][-1] * 1.00001)
plt.show()
###Output
_____no_output_____
###Markdown
From these two examples, calculations with the same gradient setting produce the same pixel weights, although the clumped version carries slightly less weight. The masked version gives a slightly different value for the last pixel because of how it is calculated: with the new gradient all pixels are kept, but in the clumped version that pixel sits at the end of its clump rather than in the interior, so it is computed with a backward finite difference instead of a central difference. Calculations of RV
###Code
# Old and new indicate the split method.
print("Old with gradient {:0.06f}".format(RVprec_calc_masked(wav, flux, atm.mask, grad=True)))
print("New with gradient {:0.06f}".format(rv_precision(wav, flux, atm.mask, grad=True)))
print("Old without finite diff{:0.06f}".format(RVprec_calc_masked(wav, flux, atm.mask, grad=False)))
print("New with finite diff{:0.06f}".format(rv_precision(wav, flux, atm.mask, grad=False)))
###Output
Old with gradient 0.155197 m / s
New with gradient 0.155525 m / s
Old without finite diff0.150011 m / s
New with finite diff0.149899 m / s
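###Markdown
The endpoint effect described above can be seen directly with numpy (a sketch with made-up numbers): `np.gradient` uses central differences for interior points and one-sided differences at the ends, so a pixel that is interior in the full array becomes an end point of its clump and picks up a different gradient value.
###Code
import numpy as np
f = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
print(np.gradient(f))      # central differences inside, one-sided at the two ends
print(np.gradient(f[:3]))  # a "clump" ending at index 2: its last value now comes from a one-sided difference
###Output
_____no_output_____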
###Markdown
Differences between versions with the same gradient appear only at the fourth significant figure. These values are not on the correct scale; that is addressed in the next section. Calculating difference ratios. Assessing the change to the actual Figueira et al. values between applying the mask to the weights without splitting on telluric lines, and splitting first and then calculating the weights. The spectra are convolved at vsini=1 and R=100k for an M0 spectrum in the Z, Y, J, H, and K bands to be consistent with the paper. Both the old and new gradient methods are used to assess the difference. The old gradient drops the last pixel, which discards many pixels when the spectrum is split between telluric lines.
###Code
# Explore relative difference of different bands
wav_, flux_ = load_aces_spectrum([3900, 4.5, 0, 0])
wav, flux = wav_selector(wav_, flux_, 0.7, 2.5)
table = []
table.append("Band, Cond#1, Split, Masked, ratio, Cond#1, Split, Masked, ratio")
table.append("Grad, False , True ")
# Get J band SNR normalization value
wav_j, flux_j = convolve_and_resample(
wav, flux, vsini=1, R=100000, band="J", sampling=3
)
snr_norm = snr_constant_band(wav_j, flux_j, snr=100, band="J")
for band in ["Z", "Y", "J", "H", "K"]:
atm = Atmosphere.from_band(band, bary=True)
w, f = convolve_and_resample(wav, flux, vsini=1, R=100000, band=band, sampling=3)
f /= snr_norm
atm = atm.at(w)
a = RVprec_calc_masked(w, f, atm.mask, grad=True)
b = RVprec_calc_masked(w, f, atm.mask, grad=False)
c = rv_precision(w, f, atm.mask, grad=True)
d = rv_precision(w, f, atm.mask, grad=False)
e = rv_precision(w, f, grad=True)
f = rv_precision(w, f, grad=False)
false_ratio = (d - b) / b
true_ratio = (c - a) / a
table.append(
"{0:5}, {1:4.02f}, {2:6.02f}, {3:6.02f}, {4:5.04f}, {5:6.02f}, {6:6.02f}, {7:6.02f}, {8:5.04f}".format(
band,
f.value,
b.value,
d.value,
false_ratio,
e.value,
a.value,
c.value,
true_ratio,
)
)
for line in table:
print(line)
###Output
Band, Cond#1, Split, Masked, ratio, Cond#1, Split, Masked, ratio
Grad, False , True
Z , 4.50, 7.45, 7.40, -0.0066, 4.74, 7.78, 7.79, 0.0013
Y , 3.97, 4.77, 4.76, -0.0022, 4.24, 5.08, 5.08, 0.0006
J , 7.47, 18.64, 18.58, -0.0029, 7.86, 19.63, 19.63, 0.0001
H , 3.84, 6.10, 6.07, -0.0053, 3.97, 6.27, 6.27, 0.0008
K , 7.14, 32.30, 32.23, -0.0022, 7.45, 33.57, 33.59, 0.0005
|
IBM Cloud/WML/notebooks/unstructured_image/keras/Watson OpenScale Explanation for Image Multiclass .ipynb | ###Markdown
Tutorial on generating an explanation for an image-based model on Watson OpenScale This notebook includes steps for creating an image-based watson-machine-learning model, creating a subscription, configuring explainability, and finally generating an explanation for a transaction. Contents- [1. Setup](setup)- [2. Creating and deploying an image-based model](deployment)- [3. Subscriptions](subscription)- [4. Explainability](explainability) **Note**: This notebook is using runtime 'Default Python 3.7.x' 1. Setup 1.1 Install Watson OpenScale and WML packages
###Code
!pip install --upgrade ibm-watson-openscale --no-cache | tail -n 1
!pip install --upgrade ibm-watson-machine-learning --no-cache | tail -n 1
###Output
_____no_output_____
###Markdown
Note: Restart the kernel to assure the new libraries are being used. 1.2 Configure credentials Your Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below.**NOTE:** You can also get OpenScale `API_KEY` using IBM CLOUD CLI.How to install IBM Cloud (bluemix) console: [instruction](https://console.bluemix.net/docs/cli/reference/ibmcloud/download_cli.htmlinstall_use)How to get api key using console:```bx login --ssobx iam api-key-create 'my_key'```
###Code
CLOUD_API_KEY = "***"
IAM_URL="https://iam.ng.bluemix.net/oidc/token"
WML_CREDENTIALS = {
"url": "https://us-south.ml.cloud.ibm.com",
"apikey": CLOUD_API_KEY
}
###Output
_____no_output_____
###Markdown
2. Creating and deploying an image-based model The dataset used is the MNIST dataset of handwritten digits. It consists of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images. More information about the dataset can be found here: https://keras.io/datasets/mnist-database-of-handwritten-digits Note: this notebook installs Keras 2.2.5 and TensorFlow 2.5.0 in the cell below, and the model is later stored with the `tensorflow_2.4` software specification supported by WML. 2.1 Creating a model
###Code
!pip install keras==2.2.5
!pip uninstall tf-nightly
!pip uninstall tf-estimate-nightly
!pip install tensorflow==2.5.0
!pip install keras_sequential_ascii
#import keras
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from keras_sequential_ascii import sequential_model_to_ascii_printout
from tensorflow.keras import backend as keras_backend
#print(tensorflow.__version__)
batch_size = 128
num_classes = 10
epochs = 5
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if keras_backend.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
from tensorflow.keras.utils import to_categorical
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
# Define Model
from tensorflow.keras.optimizers import Adadelta
from tensorflow.keras.losses import categorical_crossentropy
def base_model():
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=categorical_crossentropy,
optimizer=Adadelta(),
metrics=['accuracy'])
return model
cnn_n = base_model()
cnn_n.summary()
# Vizualizing model structure
sequential_model_to_ascii_printout(cnn_n)
# Fit model
print(y_train.shape)
cnn = cnn_n.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test))
scores = cnn_n.evaluate(x_test, y_test, verbose=0)
print(scores)
print("Accuracy: %.2f%%" % (scores[1]*100))
cnn_n.save("mnist_cnn.h5")
!rm mnist_cnn.tar*
!tar -czvf mnist_cnn.tar.gz mnist_cnn.h5
###Output
_____no_output_____
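###Markdown
Optional sanity check of the framework versions actually installed before storing the model (a minimal sketch; remember to restart the kernel after the pip installs above so the new packages are picked up):
###Code
import tensorflow as tf
import keras
print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)
###Output
_____no_output_____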
###Markdown
2.2 Storing the model
###Code
import json
from ibm_watson_machine_learning import APIClient
wml_client = APIClient(WML_CREDENTIALS)
wml_client.version
wml_client.spaces.list(limit=10)
WML_SPACE_ID='***' # use space id here
wml_client.set.default_space(WML_SPACE_ID)
MODEL_NAME = "MNIST Model"
software_spec_uid = wml_client.software_specifications.get_uid_by_name("tensorflow_2.4-py3.7")
print("Software Specification ID: {}".format(software_spec_uid))
model_props = {
wml_client.repository.ModelMetaNames.NAME:"{}".format(MODEL_NAME),
wml_client.repository.ModelMetaNames.TYPE: 'tensorflow_2.4',
wml_client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid,
}
print("Storing model ...")
published_model_details = wml_client.repository.store_model(
model='mnist_cnn.tar.gz',
meta_props=model_props,
)
model_uid = wml_client.repository.get_model_uid(published_model_details)
print("Done")
print("Model ID: {}".format(model_uid))
###Output
_____no_output_____
###Markdown
2.3 Deploying the model
###Code
deployment_details = wml_client.deployments.create(
model_uid,
meta_props={
wml_client.deployments.ConfigurationMetaNames.NAME: "{}".format(MODEL_NAME + " deployment"),
wml_client.deployments.ConfigurationMetaNames.ONLINE: {}
}
)
scoring_url = wml_client.deployments.get_scoring_href(deployment_details)
deployment_uid=wml_client.deployments.get_uid(deployment_details)
print("Scoring URL:" + scoring_url)
print("Model id: {}".format(model_uid))
print("Deployment id: {}".format(deployment_uid))
###Output
_____no_output_____
###Markdown
3. Subscriptions 3.1 Configuring OpenScale
###Code
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator,BearerTokenAuthenticator
from ibm_watson_openscale import *
from ibm_watson_openscale.supporting_classes.enums import *
from ibm_watson_openscale.supporting_classes import *
authenticator = IAMAuthenticator(apikey=CLOUD_API_KEY)
wos_client = APIClient(authenticator=authenticator)
wos_client.version
#DB_CREDENTIALS= {"hostname":"","username":"","password":"","database":"","port":"","ssl":True,"sslmode":"","certificate_base64":""}
DB_CREDENTIALS = None
KEEP_MY_INTERNAL_POSTGRES = True
data_marts = wos_client.data_marts.list().result.data_marts
if len(data_marts) == 0:
if DB_CREDENTIALS is not None:
if SCHEMA_NAME is None:
print("Please specify the SCHEMA_NAME and rerun the cell")
print('Setting up external datamart')
added_data_mart_result = wos_client.data_marts.add(
background_mode=False,
name="WOS Data Mart",
description="Data Mart created by WOS tutorial notebook",
database_configuration=DatabaseConfigurationRequest(
database_type=DatabaseType.POSTGRESQL,
credentials=PrimaryStorageCredentialsLong(
hostname=DB_CREDENTIALS['hostname'],
username=DB_CREDENTIALS['username'],
password=DB_CREDENTIALS['password'],
db=DB_CREDENTIALS['database'],
port=DB_CREDENTIALS['port'],
ssl=True,
sslmode=DB_CREDENTIALS['sslmode'],
certificate_base64=DB_CREDENTIALS['certificate_base64']
),
location=LocationSchemaName(
schema_name= SCHEMA_NAME
)
)
).result
else:
print('Setting up internal datamart')
added_data_mart_result = wos_client.data_marts.add(
background_mode=False,
name="WOS Data Mart",
description="Data Mart created by WOS tutorial notebook",
internal_database = True).result
data_mart_id = added_data_mart_result.metadata.id
else:
data_mart_id=data_marts[0].metadata.id
print('Using existing datamart {}'.format(data_mart_id))
SERVICE_PROVIDER_NAME = "Image Multiclass Watson Machine Learning V2_test"
SERVICE_PROVIDER_DESCRIPTION = "Added by tutorial WOS notebook."
service_providers = wos_client.service_providers.list().result.service_providers
for service_provider in service_providers:
service_instance_name = service_provider.entity.name
if service_instance_name == SERVICE_PROVIDER_NAME:
service_provider_id = service_provider.metadata.id
wos_client.service_providers.delete(service_provider_id)
print("Deleted existing service_provider for WML instance: {}".format(service_provider_id))
added_service_provider_result = wos_client.service_providers.add(
name=SERVICE_PROVIDER_NAME,
description=SERVICE_PROVIDER_DESCRIPTION,
service_type=ServiceTypes.WATSON_MACHINE_LEARNING,
deployment_space_id = WML_SPACE_ID,
operational_space_id = "production",
credentials=WMLCredentialsCloud(
apikey=CLOUD_API_KEY, ## use `apikey=IAM_TOKEN` if using IAM_TOKEN to initiate client
url=WML_CREDENTIALS["url"],
instance_id=None
),
background_mode=False
).result
service_provider_id = added_service_provider_result.metadata.id
asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id, deployment_id = deployment_uid, deployment_space_id = WML_SPACE_ID).result['resources'][0]
asset_deployment_details
model_asset_details_from_deployment=wos_client.service_providers.get_deployment_asset(data_mart_id=data_mart_id,service_provider_id=service_provider_id,deployment_id=deployment_uid,deployment_space_id=WML_SPACE_ID)
model_asset_details_from_deployment
###Output
_____no_output_____
###Markdown
3.2 Subscribe the asset
###Code
subscriptions = wos_client.subscriptions.list().result.subscriptions
for subscription in subscriptions:
sub_model_id = subscription.entity.asset.asset_id
if sub_model_id == model_uid:
wos_client.subscriptions.delete(subscription.metadata.id)
print('Deleted existing subscription for model', model_uid)
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import ScoringEndpointRequest
subscription_details = wos_client.subscriptions.add(
data_mart_id=data_mart_id,
service_provider_id=service_provider_id,
asset=Asset(
asset_id=model_asset_details_from_deployment["entity"]["asset"]["asset_id"],
name=model_asset_details_from_deployment["entity"]["asset"]["name"],
url=model_asset_details_from_deployment["entity"]["asset"]["url"],
asset_type=AssetTypes.MODEL,
input_data_type=InputDataType.UNSTRUCTURED_IMAGE,
problem_type=ProblemType.MULTICLASS_CLASSIFICATION
),
deployment=AssetDeploymentRequest(
deployment_id=asset_deployment_details['metadata']['guid'],
name=asset_deployment_details['entity']['name'],
deployment_type= DeploymentTypes.ONLINE,
url=asset_deployment_details['metadata']['url'],
scoring_endpoint=ScoringEndpointRequest(url=scoring_url) # scoring model without shadow deployment
),
asset_properties=AssetPropertiesRequest(
probability_fields=['probability']
)
).result
subscription_id = subscription_details.metadata.id
subscription_id
import time
time.sleep(5)
payload_data_set_id = None
payload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING,
target_target_id=subscription_id,
target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id
if payload_data_set_id is None:
print("Payload data set not found. Please check subscription status.")
else:
print("Payload data set id: ", payload_data_set_id)
###Output
_____no_output_____
###Markdown
3.3 Score the model and get transaction-id
###Code
!pip install numpy
!pip install matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
img = np.array(x_test[999], dtype='float')
pixels = img.reshape((28, 28))
plt.imshow(pixels, cmap='gray')
plt.show()
scoring_data = {"input_data": [{"values": [x_test[999].tolist()]}]}
predictions = wml_client.deployments.score(deployment_uid, scoring_data)
print(json.dumps(predictions, indent=2))
wos_client.data_sets.show_records(payload_data_set_id)
###Output
_____no_output_____
###Markdown
4. Explainability 4.1 Configure Explainability
###Code
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"enabled": True
}
explainability_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID,
target=target,
parameters=parameters
).result
explainability_monitor_id = explainability_details.metadata.id
###Output
_____no_output_____
###Markdown
4.2 Get explanation for the transaction
###Code
pl_records_resp = wos_client.data_sets.get_list_of_records(data_set_id=payload_data_set_id, limit=1, offset=0).result
scoring_ids = [pl_records_resp["records"][0]["entity"]["values"]["scoring_id"]]
print("Running explanations on scoring IDs: {}".format(scoring_ids))
explanation_types = ["lime", "contrastive"]
result = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result
print(result)
explanation_task_id=result.to_dict()['metadata']['explanation_task_ids'][0]
explanation=wos_client.monitor_instances.get_explanation_tasks(explanation_task_id=explanation_task_id).result.to_dict()
explanation
###Output
_____no_output_____
###Markdown
The explanation images can be obtained using the cells below
###Code
!pip install Pillow
from PIL import Image
import base64
import io
time.sleep(10)
img = explanation["entity"]['explanations'][0]['predictions'][0]["explanation"][0]["full_image"]
img_data = base64.b64decode(img)
Image.open(io.BytesIO(img_data))
img = explanation["entity"]['explanations'][0]['predictions'][1]["explanation"][0]["full_image"]
img_data = base64.b64decode(img)
Image.open(io.BytesIO(img_data))
###Output
_____no_output_____
###Markdown
Tutorial on generating an explanation for an image-based model on Watson OpenScale This notebook includes steps for creating an image-based Watson Machine Learning model, creating a subscription, configuring explainability, and finally generating an explanation for a transaction. Contents: [1. Setup](setup), [2. Creating and deploying an image-based model](deployment), [3. Subscriptions](subscription), [4. Explainability](explainability). **Note**: This notebook uses the runtime 'Default Python 3.8.x'. 1. Setup 1.1 Install Watson OpenScale and WML packages
###Code
!pip install --upgrade ibm-watson-openscale --no-cache | tail -n 1
!pip install --upgrade ibm-watson-machine-learning --no-cache | tail -n 1
###Output
_____no_output_____
###Markdown
Note: Restart the kernel to assure the new libraries are being used. 1.2 Configure credentials Your Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below.**NOTE:** You can also get OpenScale `API_KEY` using IBM CLOUD CLI.How to install IBM Cloud (bluemix) console: [instruction](https://console.bluemix.net/docs/cli/reference/ibmcloud/download_cli.htmlinstall_use)How to get api key using console:```bx login --ssobx iam api-key-create 'my_key'```
###Code
CLOUD_API_KEY = "***"
IAM_URL="https://iam.ng.bluemix.net/oidc/token"
WML_CREDENTIALS = {
"url": "https://us-south.ml.cloud.ibm.com",
"apikey": CLOUD_API_KEY
}
###Output
_____no_output_____
###Markdown
2. Creating and deploying an image-based model The dataset used is the MNIST dataset of handwritten digits. It consists of 60,000 28x28 grayscale training images of the 10 digits, along with a test set of 10,000 images. More information about the dataset can be found here: https://keras.io/datasets/mnist-database-of-handwritten-digits. Note: the cells below install Keras 2.2.5 and TensorFlow 2.5, and the model is later stored in WML with the `tensorflow_2.4` software specification. 2.1 Creating a model
###Code
!pip install keras==2.2.5
!pip uninstall tf-nightly
!pip uninstall tf-estimate-nightly
!pip install tensorflow==2.5.0
!pip install keras_sequential_ascii
#import keras
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from keras_sequential_ascii import sequential_model_to_ascii_printout
from tensorflow.keras import backend as keras_backend
#print(tensorflow.__version__)
batch_size = 128
num_classes = 10
epochs = 5
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if keras_backend.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
from tensorflow.keras.utils import to_categorical
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
# Define Model
from tensorflow.keras.optimizers import Adadelta
from tensorflow.keras.losses import categorical_crossentropy
def base_model():
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=categorical_crossentropy,
optimizer=Adadelta(),
metrics=['accuracy'])
return model
cnn_n = base_model()
cnn_n.summary()
# Vizualizing model structure
sequential_model_to_ascii_printout(cnn_n)
# Fit model
print(y_train.shape)
cnn = cnn_n.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test))
scores = cnn_n.evaluate(x_test, y_test, verbose=0)
print(scores)
print("Accuracy: %.2f%%" % (scores[1]*100))
cnn_n.save("mnist_cnn.h5")
!rm mnist_cnn.tar*
!tar -czvf mnist_cnn.tar.gz mnist_cnn.h5
###Output
_____no_output_____
###Markdown
2.2 Storing the model
###Code
import json
from ibm_watson_machine_learning import APIClient
wml_client = APIClient(WML_CREDENTIALS)
wml_client.version
wml_client.spaces.list(limit=10)
WML_SPACE_ID='***' # use space id here
wml_client.set.default_space(WML_SPACE_ID)
MODEL_NAME = "MNIST Model"
software_spec_uid = wml_client.software_specifications.get_uid_by_name("tensorflow_2.4-py3.8")
print("Software Specification ID: {}".format(software_spec_uid))
model_props = {
wml_client.repository.ModelMetaNames.NAME:"{}".format(MODEL_NAME),
wml_client.repository.ModelMetaNames.TYPE: 'tensorflow_2.4',
wml_client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid,
}
print("Storing model ...")
published_model_details = wml_client.repository.store_model(
model='mnist_cnn.tar.gz',
meta_props=model_props,
)
model_uid = wml_client.repository.get_model_uid(published_model_details)
print("Done")
print("Model ID: {}".format(model_uid))
###Output
_____no_output_____
###Markdown
2.3 Deploying the model
###Code
deployment_details = wml_client.deployments.create(
model_uid,
meta_props={
wml_client.deployments.ConfigurationMetaNames.NAME: "{}".format(MODEL_NAME + " deployment"),
wml_client.deployments.ConfigurationMetaNames.ONLINE: {}
}
)
scoring_url = wml_client.deployments.get_scoring_href(deployment_details)
deployment_uid=wml_client.deployments.get_uid(deployment_details)
print("Scoring URL:" + scoring_url)
print("Model id: {}".format(model_uid))
print("Deployment id: {}".format(deployment_uid))
###Output
_____no_output_____
###Markdown
3. Subscriptions 3.1 Configuring OpenScale
###Code
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator,BearerTokenAuthenticator
from ibm_watson_openscale import *
from ibm_watson_openscale.supporting_classes.enums import *
from ibm_watson_openscale.supporting_classes import *
authenticator = IAMAuthenticator(apikey=CLOUD_API_KEY)
wos_client = APIClient(authenticator=authenticator)
wos_client.version
#DB_CREDENTIALS= {"hostname":"","username":"","password":"","database":"","port":"","ssl":True,"sslmode":"","certificate_base64":""}
DB_CREDENTIALS = None
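# SCHEMA_NAME is read by the external-datamart setup below when DB_CREDENTIALS is provided;
# it is not defined anywhere else in this notebook, so set it here if you want an external
# PostgreSQL datamart (the example value is only a placeholder).
SCHEMA_NAME = None  # e.g. "wos_tutorial_schema"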
KEEP_MY_INTERNAL_POSTGRES = True
data_marts = wos_client.data_marts.list().result.data_marts
if len(data_marts) == 0:
if DB_CREDENTIALS is not None:
if SCHEMA_NAME is None:
print("Please specify the SCHEMA_NAME and rerun the cell")
print('Setting up external datamart')
added_data_mart_result = wos_client.data_marts.add(
background_mode=False,
name="WOS Data Mart",
description="Data Mart created by WOS tutorial notebook",
database_configuration=DatabaseConfigurationRequest(
database_type=DatabaseType.POSTGRESQL,
credentials=PrimaryStorageCredentialsLong(
hostname=DB_CREDENTIALS['hostname'],
username=DB_CREDENTIALS['username'],
password=DB_CREDENTIALS['password'],
db=DB_CREDENTIALS['database'],
port=DB_CREDENTIALS['port'],
ssl=True,
sslmode=DB_CREDENTIALS['sslmode'],
certificate_base64=DB_CREDENTIALS['certificate_base64']
),
location=LocationSchemaName(
schema_name= SCHEMA_NAME
)
)
).result
else:
print('Setting up internal datamart')
added_data_mart_result = wos_client.data_marts.add(
background_mode=False,
name="WOS Data Mart",
description="Data Mart created by WOS tutorial notebook",
internal_database = True).result
data_mart_id = added_data_mart_result.metadata.id
else:
data_mart_id=data_marts[0].metadata.id
print('Using existing datamart {}'.format(data_mart_id))
SERVICE_PROVIDER_NAME = "Image Multiclass Watson Machine Learning V2_test"
SERVICE_PROVIDER_DESCRIPTION = "Added by tutorial WOS notebook."
service_providers = wos_client.service_providers.list().result.service_providers
for service_provider in service_providers:
service_instance_name = service_provider.entity.name
if service_instance_name == SERVICE_PROVIDER_NAME:
service_provider_id = service_provider.metadata.id
wos_client.service_providers.delete(service_provider_id)
print("Deleted existing service_provider for WML instance: {}".format(service_provider_id))
added_service_provider_result = wos_client.service_providers.add(
name=SERVICE_PROVIDER_NAME,
description=SERVICE_PROVIDER_DESCRIPTION,
service_type=ServiceTypes.WATSON_MACHINE_LEARNING,
deployment_space_id = WML_SPACE_ID,
operational_space_id = "production",
credentials=WMLCredentialsCloud(
apikey=CLOUD_API_KEY, ## use `apikey=IAM_TOKEN` if using IAM_TOKEN to initiate client
url=WML_CREDENTIALS["url"],
instance_id=None
),
background_mode=False
).result
service_provider_id = added_service_provider_result.metadata.id
asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id, deployment_id = deployment_uid, deployment_space_id = WML_SPACE_ID).result['resources'][0]
asset_deployment_details
model_asset_details_from_deployment=wos_client.service_providers.get_deployment_asset(data_mart_id=data_mart_id,service_provider_id=service_provider_id,deployment_id=deployment_uid,deployment_space_id=WML_SPACE_ID)
model_asset_details_from_deployment
###Output
_____no_output_____
###Markdown
3.2 Subscribe the asset
###Code
subscriptions = wos_client.subscriptions.list().result.subscriptions
for subscription in subscriptions:
sub_model_id = subscription.entity.asset.asset_id
if sub_model_id == model_uid:
wos_client.subscriptions.delete(subscription.metadata.id)
print('Deleted existing subscription for model', model_uid)
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import ScoringEndpointRequest
subscription_details = wos_client.subscriptions.add(
data_mart_id=data_mart_id,
service_provider_id=service_provider_id,
asset=Asset(
asset_id=model_asset_details_from_deployment["entity"]["asset"]["asset_id"],
name=model_asset_details_from_deployment["entity"]["asset"]["name"],
url=model_asset_details_from_deployment["entity"]["asset"]["url"],
asset_type=AssetTypes.MODEL,
input_data_type=InputDataType.UNSTRUCTURED_IMAGE,
problem_type=ProblemType.MULTICLASS_CLASSIFICATION
),
deployment=AssetDeploymentRequest(
deployment_id=asset_deployment_details['metadata']['guid'],
name=asset_deployment_details['entity']['name'],
deployment_type= DeploymentTypes.ONLINE,
url=asset_deployment_details['metadata']['url'],
scoring_endpoint=ScoringEndpointRequest(url=scoring_url) # scoring model without shadow deployment
),
asset_properties=AssetPropertiesRequest(
probability_fields=['probability']
)
).result
subscription_id = subscription_details.metadata.id
subscription_id
import time
time.sleep(5)
payload_data_set_id = None
payload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING,
target_target_id=subscription_id,
target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id
if payload_data_set_id is None:
print("Payload data set not found. Please check subscription status.")
else:
print("Payload data set id: ", payload_data_set_id)
###Output
_____no_output_____
###Markdown
3.3 Score the model and get transaction-id
###Code
!pip install numpy
!pip install matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
img = np.array(x_test[999], dtype='float')
pixels = img.reshape((28, 28))
plt.imshow(pixels, cmap='gray')
plt.show()
scoring_data = {"input_data": [{"values": [x_test[999].tolist()]}]}
predictions = wml_client.deployments.score(deployment_uid, scoring_data)
print(json.dumps(predictions, indent=2))
wos_client.data_sets.show_records(payload_data_set_id)
###Output
_____no_output_____
###Markdown
4. Explainability 4.1 Configure Explainability
###Code
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"enabled": True
}
explainability_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID,
target=target,
parameters=parameters
).result
explainability_monitor_id = explainability_details.metadata.id
###Output
_____no_output_____
###Markdown
4.2 Get explanation for the transaction
###Code
pl_records_resp = wos_client.data_sets.get_list_of_records(data_set_id=payload_data_set_id, limit=1, offset=0).result
scoring_ids = [pl_records_resp["records"][0]["entity"]["values"]["scoring_id"]]
print("Running explanations on scoring IDs: {}".format(scoring_ids))
explanation_types = ["lime", "contrastive"]
result = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result
print(result)
explanation_task_id=result.to_dict()['metadata']['explanation_task_ids'][0]
explanation=wos_client.monitor_instances.get_explanation_tasks(explanation_task_id=explanation_task_id).result.to_dict()
explanation
###Output
_____no_output_____
###Markdown
The explanation images can be obtained using the cells below
###Code
!pip install Pillow
from PIL import Image
import base64
import io
time.sleep(10)
img = explanation["entity"]['explanations'][0]['predictions'][0]["explanation"][0]["full_image"]
img_data = base64.b64decode(img)
Image.open(io.BytesIO(img_data))
img = explanation["entity"]['explanations'][0]['predictions'][1]["explanation"][0]["full_image"]
img_data = base64.b64decode(img)
Image.open(io.BytesIO(img_data))
###Output
_____no_output_____ |
dev/13_learner.ipynb | ###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn,run = 'learn',None,True
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
if self.run: getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions; looping through the data we:

- compute the output of the model from the input
- calculate a loss between this output and the desired target
- compute the gradients of this loss with respect to all the model parameters
- update the parameters accordingly
- zero all the gradients

Any tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:

- `begin_fit`: called before doing anything, ideal for initial setup.
- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.
- `begin_train`: called at the beginning of the training part of an epoch.
- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).
- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.
- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).
- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).
- `after_step`: called after the step and before the gradients are zeroed.
- `after_batch`: called at the end of a batch, for any clean-up before the next one.
- `after_train`: called at the end of the training phase of an epoch.
- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.
- `after_validate`: called at the end of the validation part of an epoch.
- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.
- `after_fit`: called at the end of training, for final clean-up.
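As a minimal sketch (the class name and messages are illustrative only), a callback hooks these events simply by defining methods with the matching names; attributes such as `self.epoch` are provided by the `Learner` the callback is attached to, as described further down:

```python
class LoggerCallback(Callback):
    "Toy callback: print a short message at a few points of the training loop"
    def begin_fit(self):   print("begin_fit: about to start training")
    def begin_epoch(self): print(f"begin_epoch: starting epoch {self.epoch}")
    def after_batch(self): pass   # e.g. inspect the freshly computed self.loss here
    def after_fit(self):   print("after_fit: training finished")
```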
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
#TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors.
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None):
store_attr(self, "with_input,with_loss,save_preds,save_targs")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs = []
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
preds,targs = to_detach(self.pred),to_detach(self.yb)
if self.save_preds is None: self.preds.append(preds)
else: (self.save_preds/str(self.iter)).save_array(preds)
if self.save_targs is None: self.targets.append(targs)
else: (self.save_targs/str(self.iter)).save_array(targs[0])
if self.with_loss:
bs = find_bs(self.yb)
loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
self.losses.append(to_detach(loss))
def after_fit(self):
"Concatenate all recorded tensors"
if self.with_input: self.inputs = detuplify(to_concat(self.inputs))
if not self.save_preds: self.preds = detuplify(to_concat(self.preds))
if not self.save_targs: self.targets = detuplify(to_concat(self.targets))
if self.with_loss: self.losses = to_concat(self.losses)
def all_tensors(self):
res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets]
if self.with_input: res = [self.inputs] + res
if self.with_loss: res.append(self.losses)
return res
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
show_doc(GatherPredsCallback.after_fit)
###Output
_____no_output_____
###Markdown
Callbacks control flow Sometimes we want to skip some of the steps of the training loop: in gradient accumulation, for instance, we don't always want to do the step/zeroing of the grads; during an LR finder test, we don't want to do the validation phase of an epoch; and if we're training with an early-stopping strategy, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
CancelFitException="Skip the rest of this batch and go to `after_batch`",
CancelEpochException="Skip the rest of the training part of the epoch and go to `after_train`",
CancelTrainException="Skip the rest of the validation part of the epoch and go to `after_validate`",
CancelValidException="Skip the rest of this epoch and go to `after_epoch`",
CancelBatchException="Interrupts training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions occurred and add code that executes right after with the following events:

- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`
- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
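For example, a minimal early-stopping sketch (the class name and threshold are illustrative) raises `CancelFitException` to interrupt the whole fit and uses `after_cancel_fit` to report it:

```python
class SimpleEarlyStoppingCallback(Callback):
    "Toy callback: stop the whole fit once the training loss falls below a threshold"
    def __init__(self, thresh=0.1): self.thresh = thresh
    def after_batch(self):
        if self.training and self.loss.item() < self.thresh: raise CancelFitException()
    def after_cancel_fit(self):
        print(f"Stopped early at iteration {self.train_iter}, loss {self.loss.item():.4f}")
```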
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#export
_loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train',
'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward',
'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train',
'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop',
'**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate',
'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit',
'after_cancel_fit', 'after_fit']
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
elif device is None: device = 'cpu'
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
elif with_opt: warn("Saved filed doesn't contain an optimizer state.")
# export
def _try_concat(o):
try: return torch.cat(o)
except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L())
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics")
self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property
def metrics(self): return self._metrics
@metrics.setter
def metrics(self,v): self._metrics = L(v).map(mk_metric)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(False): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(True ): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
names = ['shuffle', 'drop_last']
try:
dl,old,has = change_attrs(dl, names, [False,False])
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally:
dl,*_ = change_attrs(dl, names, old, has); self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
with self.added_cbs(cbs), self.no_logging(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
@delegates(GatherPredsCallback.__init__)
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, act=None, **kwargs):
cb = GatherPredsCallback(with_input=with_input, **kwargs)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
res = cb.all_tensors()
pred_i = 1 if with_input else 0
if res[pred_i] is not None:
res[pred_i] = act(res[pred_i])
if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i]))
return tuple(res)
def predict(self, item, rm_type_tfms=0):
dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms)
inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
#dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
if dl is None: dl = self.dbunch.dls[ds_idx]
b = dl.one_batch()
_,_,preds = self.get_preds(dl=[b], with_decoded=True)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
def show_training_loop(self):
indent = 0
for s in _loop:
if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2
elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}')
else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s))
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def no_mbar(self): return replacing_yield(self, 'create_mbar', False)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
distrib_barrier()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
remove_cb="Add `cb` from the list of `Callback` and deregister `self` as their learner",
added_cbs="Context manage that temporarily adds `cbs`",
ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`",
predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`",
show_training_loop="Show each step in the training loop",
no_logging="Context manager to temporarily remove `logger`",
no_mbar="Context manager to temporarily prevent the master progress bar from being created",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.

`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner`, under its snake_cased name with "Callback" removed (e.g. `TrainEvalCallback` becomes `learn.train_eval`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`.

`metrics` is an optional list of metrics that can be either functions or `Metric`s (see below).

Training loop
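As a minimal sketch of how these pieces fit together (reusing the synthetic `RegModel`, `synth_dbunch` and `TstCallback` defined earlier in this notebook; the splitter shown is only an illustration):

```python
# A toy splitter putting all parameters in a single group (the default trainable_params does this too)
def single_group_splitter(model): return [list(model.parameters())]

learn_ex = Learner(synth_dbunch(), RegModel(), loss_func=MSELossFlat(),
                   opt_func=partial(SGD, mom=0.9),  # used to build the optimizer when fit is called
                   lr=1e-2,
                   splitter=single_group_splitter,
                   cbs=TstCallback())               # registered as the attribute learn_ex.tst
# learn_ex.fit(1)  # metrics (plain functions or Metric objects) could also be passed, see the Metrics section below
```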
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(6)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
for i in [0,1,3]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
test_close(end[2]-init[2], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (computing the predictions, loss and gradients, updating the model parameters and zeroing the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
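A minimal sketch of the buffer case (assuming, as the sentence above implies, that a file-like object is passed straight through to `torch.save`/`torch.load`):

```python
import io

learn_buf = synth_learner()
learn_buf.fit(1)
buf = io.BytesIO()
learn_buf.save(buf)   # serialize the model (and optimizer state) into the in-memory buffer
buf.seek(0)
learn_buf.load(buf)   # restore it from the same buffer
```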
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:

- `model`: the model used for training/validation
- `dbunch`: the underlying `DataBunch`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current `DataLoader` used for iteration
- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.
- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)

The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:

- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0. to 1., the percentage of training iterations completed
- `training`: flag to indicate whether we're in training mode or not

The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:

- `smooth_loss`: an exponentially-averaged version of the training loss

Control flow testing
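A minimal sketch of a callback reading a few of these attributes (the class name and message are illustrative only):

```python
class StateInspectorCallback(Callback):
    "Toy callback: report some Learner state on the first batch of each training epoch"
    def after_batch(self):
        if self.training and self.iter == 0:
            print(f"epoch {self.epoch}/{self.n_epoch}, "
                  f"{self.pct_train:.0%} of training done, loss {self.loss.item():.4f}")
```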
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
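#A hedged sketch (not part of the library): a custom `Metric` whose value is a ratio of
#accumulated counts, so it can't be computed as a plain average over batches.
class InToleranceMetric(Metric):
    "Fraction of predictions within 1. of their target (illustrative only)"
    def reset(self): self.hits,self.total = 0,0
    def accumulate(self, learn):
        self.hits += (learn.pred - learn.yb[0]).abs().lt(1.).sum().item()
        self.total += find_bs(learn.yb)
    @property
    def value(self): return self.hits/self.total if self.total else None
tst = InToleranceMetric()
tst.reset()
learn = synth_learner()
learn.pred,learn.yb = torch.randn(100),(torch.randn(100),)
tst.accumulate(learn)
assert 0. <= tst.value <= 1.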
#export
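#Hedged note: when running distributed (num_distrib()>1), this averages `val` across processes
#via an all-reduce sum followed by a division by the number of processes.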
def _maybe_reduce(val):
if num_distrib()>1:
val = val.clone()
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= num_distrib()
return val
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.iters,self.losses,self.values = [],[],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets[1:].map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
self.iters.append(self.smooth_loss.count)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.smooth_loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5, with_valid=True):
plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train')
if with_valid:
idx = (np.array(self.iters)<skip_start).sum()
plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid')
plt.legend()
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed by passing `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
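#A hedged sketch: the statistics logged during `fit` stay available on `learn.recorder` afterwards.
learn.fit(1)
print(learn.recorder.metric_names)  #e.g. ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time']
print(learn.recorder.values[-1])    #last epoch's [train_loss, valid_loss, tst_metric]
print(len(learn.recorder.lrs), len(learn.recorder.losses))  #one entry per training batch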
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
if not self.training: test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
mean = tensor(self.losses).mean()
self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,14.698701858520508,18.37638282775879,18.37638282775879,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds work with ds not evenly dividble by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
self.opt.clear_state()
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,21.00529670715332,22.428417205810547,00:00]
(#4) [0,17.18109703063965,18.20417022705078,00:00]
(#4) [0,13.83404541015625,14.777403831481934,00:00]
###Markdown
Exporting a `Learner`
###Code
#export
@patch
def export(self:Learner, fname='export.pkl'):
"Export the content of `self` without the items and the optimizer state for inference"
if rank_distrib(): return # don't export if slave proc
old_dbunch = self.dbunch
self.dbunch = self.dbunch.new_empty()
state = self.opt.state_dict()
self.opt = None
with warnings.catch_warnings():
#To avoid the warning that come from PyTorch about model not being checked
warnings.simplefilter("ignore")
torch.save(self, self.path/fname)
self.create_opt()
self.opt.load_state_dict(state)
self.dbunch = old_dbunch
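#A hedged usage sketch (not executed here): `export` drops the data and optimizer state, so the
#resulting pickle can be reloaded for inference with plain torch.load:
#    learn.export('export.pkl')
#    inference_learn = torch.load(learn.path/'export.pkl')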
###Output
_____no_output_____
###Markdown
TTA
###Code
#export
@patch
def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25):
"Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation"
if dl is None: dl = self.dbunch.dls[ds_idx]
if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms)
with dl.dataset.set_split_idx(0), self.no_mbar():
if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n)))
aug_preds = []
for i in self.progress.mbar if hasattr(self,'progress') else range(n):
self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch
# aug_preds.append(self.get_preds(dl=dl)[0][None])
aug_preds.append(self.get_preds(ds_idx)[0][None])
aug_preds = torch.cat(aug_preds).mean(0)
self.epoch = n
with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx)
preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta)
return preds,targs
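#Illustrative check of the `beta` blend used above: torch.lerp(aug_preds, preds, beta) equals
#(1-beta)*aug_preds + beta*preds, i.e. `beta` weights the plain, non-augmented predictions.
_a,_p,_beta = torch.tensor([1.,2.]),torch.tensor([3.,4.]),0.25
assert torch.allclose(torch.lerp(_a, _p, _beta), (1-_beta)*_a + _beta*_p)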
###Output
_____no_output_____
###Markdown
In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 09b_vision_utils.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
Converted xse_resnext.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(bs*n)
return TensorDataset(x, a*x + b + 0.1*torch.randn(bs*n))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export core
_camel_re1 = re.compile('(.)([A-Z][a-z]+)')
_camel_re2 = re.compile('([a-z0-9])([A-Z])')
def camel2snake(name):
s1 = re.sub(_camel_re1, r'\1_\2', name)
return re.sub(_camel_re2, r'\1_\2', s1).lower()
test_eq(camel2snake('ClassAreCamel'), 'class_are_camel')
#export
def class2attr(self, cls_name):
return camel2snake(re.sub(rf'{cls_name}$', '', self.__class__.__name__) or cls_name.lower())
#export
@docs
class Callback():
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
def __call__(self, event_name): getattr(self, event_name, noop)()
def __repr__(self): return self.__class__.__name__
def __getattr__(self, k):
if k=='learn': raise AttributeError
if not hasattr(self,'learn'): raise AttributeError
return getattr(self.learn, k)
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
_docs=dict(__call__="Call `self.{event_name}` if it's defined",
__getattr__="Passthrough to get the attributes of `self.learn`")
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
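#A hedged sketch (hypothetical, not part of the library): a `Callback` hooking one of the events
#listed above to log the training loss at the end of every batch.
class LossLoggerCallback(Callback):
    def after_batch(self):
        if self.training: print(f'iter {self.iter}: loss {self.loss.item():.4f}')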
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_loss=False): self.with_loss = with_loss
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow Sometimes we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads, for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with an early-stopping strategy, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions that the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
CancelFitException="Skip the rest of this batch and go to `after_batch`",
CancelEpochException="Skip the rest of the training part of the epoch and go to `after_train`",
CancelTrainException="Skip the rest of the validation part of the epoch and go to `after_validate`",
CancelValidException="Skip the rest of this epoch and go to `after_epoch`",
CancelBatchException="Interrupts training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
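#A hedged sketch (hypothetical): raising CancelBatchException after the backward pass skips the
#optimizer step and the zeroing of the gradients, so gradients accumulate across batches
#(a crude form of gradient accumulation).
class SkipStepCallback(Callback):
    def after_backward(self):
        if self.iter % 2 == 0: raise CancelBatchException()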
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after it with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
###Code
# export
_events = 'begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit'.split()
mk_class('event', **{o:o for o in _events},
doc="All possible events as attributes to get tab-completion and typo-proofing")
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
event.after_backward
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.callbacks = [TrainEvalCallback]
class Learner():
"Group together a `model`, some `dbunch` and a `loss_func` to handle training"
def __init__(self, model, dbunch, loss_func, opt_func=SGD, lr=1e-2, splitter=trainable_params,
cbs=None, cb_funcs=None, metrics=None, path=None, wd_bn_bias=False, train_bn=True):
store_attr(self, "model,dbunch,loss_func,opt_func,lr,splitter,wd_bn_bias,train_bn")
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.metrics = [m if isinstance(m, Metric) else AvgMetric(m) for m in L(metrics)]
self.training,self.logger,self.opt = False,print,None
self.cbs = L([])
self.add_cbs(cbf() for cbf in L(defaults.callbacks))
self.add_cbs(cbs)
self.add_cbs(cbf() for cbf in L(cb_funcs))
def add_cbs(self, cbs):
"Add `cbs` to the list of `Callback` and register `self` as their learner"
for cb in L(cbs): self.add_cb(cb)
def add_cb(self, cb):
"Add `cb` to the list of `Callback` and register `self` as their learner"
if getattr(self, cb.name, None):
error = f"There is another object registered in self.{cb.name}, pick a new name."
assert isinstance(getattr(self, cb.name), cb.__class__), error
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
def remove_cbs(self, cbs):
"Remove `cbs` from the list of `Callback` and deregister `self` as their learner"
for cb in L(cbs): self.remove_cb(cb)
def remove_cb(self, cb):
"Add `cb` from the list of `Callback` and deregister `self` as their learner"
cb.learn = None
setattr(self, cb.name, None)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
"Context manage that temporarily adds `cbs`"
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def create_opt(self, lr=None):
"Create an optimizer with `lr`"
opt = self.opt_func(self.splitter(self.model), lr=self.lr if lr is None else lr)
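        #When wd_bn_bias=False, weight decay is disabled for BatchNorm and bias parameters;
        #when train_bn=True, BatchNorm parameters are marked to keep training even when frozen.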
if not self.wd_bn_bias:
for p in bn_bias_params(self.model):
opt.state[p] = {**opt.state.get(p, {}), 'do_wd': False}
if self.train_bn:
for p in bn_bias_params(self.model, with_bias=False):
opt.state[p] = {**opt.state.get(p, {}), 'force_train': True}
return opt
def one_batch(self, xb, yb, i=None):
"Train or evaluate `self.model` on batch `(xb,yb)`"
try:
if i is not None: self.iter = i
self.xb,self.yb = xb,yb; self('begin_batch')
self.pred = self.model(self.xb); self('after_pred')
self.loss = self.loss_func(self.pred, self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def all_batches(self):
"Train or evaluate `self.model` on all batches of `self.dl`"
self.n_iter = len(self.dl)
for i,(xb,yb) in enumerate(self.dl): self.one_batch(xb, yb, i)
def _do_begin_fit(self, n_epoch):
"Prepare evertyhing for training `epochs` epochs"
self.n_epoch,self.loss = n_epoch,tensor(0.)
self('begin_fit')
def _do_epoch_train(self):
"Execute the training part of the `epoch`-th epoch"
self.dl = self.dbunch.train_dl
try:
self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self):
"Execute the validation part of an epoch"
try:
self.dl = self.dbunch.valid_dl
self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`."
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.opt = self.create_opt(lr=lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, dl=None, cbs=None):
"Validate on `dl` with potential new `cbs`."
self.dl = dl or self.dbunch.valid_dl
with self.added_cbs(cbs), self.no_logging():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, with_loss=False):
"Get the predictions and targets on the `ds_idx`-th dbunchset, optionally `with_loss`"
self.dl = self.dbunch.dls[ds_idx]
cb = GatherPredsCallback(with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
if with_loss: return (torch.cat(cb.preds),torch.cat(cb.targets),torch.cat(cb.losses))
res = (torch.cat(cb.preds),torch.cat(cb.targets))
return res
def __call__(self, event_name):
"Call `event_name` (one or a list) for all callbacks"
for e in L(event_name): self._call_one(e)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
@contextmanager
def no_logging(self):
"Context manager to temporarily remove `logger`"
old_logger = self.logger
self.logger = noop
yield
self.logger = old_logger
@contextmanager
def loss_not_reduced(self):
"A context manager to evaluate `loss_func` with reduction set to none."
if hasattr(self.loss_func, 'reduction'):
self.old_red = self.loss_func.reduction
self.loss_func.reduction = 'none'
yield
self.loss_func.reduction = self.old_red
else:
old_loss_func = self.loss_func
self.loss_func = partial(self.loss_func, reduction='none')
yield
self.loss_func = old_loss_func
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (in snake case, so `Recorder` becomes `learn.recorder`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`. `metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below).
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, **kwargs):
return Learner(RegModel(), synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda), MSELossFlat(), **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
###Output
_____no_output_____
###Markdown
Training loop
###Code
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, true_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss and gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, self.xb)
test_eq(self.save_yb, self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.xb + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.xb * (self.pred.data - self.yb)).mean()
self.grad_b = 2 * (self.pred.data - self.yb).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
xb,yb = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(xb, yb, 42))
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert learn.test_train_eval is None
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `xb`: last input drawn from `self.dl` (potentially modified by callbacks)- `yb`: last target drawn from `self.dl` (potentially modified by callbacks)- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`) The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or not The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing
###Code
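#A hedged sketch (hypothetical): a callback that simply reads a few of the Learner attributes
#listed above.
class StatePeekCallback(Callback):
    def begin_fit(self): print(f'{self.n_epoch} epochs of {len(self.dbunch.train_dl)} iterations each')
    def after_epoch(self): print(f'epoch {self.epoch}: pct_train={self.pct_train:.2f}, last loss={self.loss.item():.4f}')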
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = {'reset': "Reset inner state to prepare for new computation",
'name': "Name of the `Metric`, camel-cased and with Metric removed",
'accumulate': "Use `learn` to update the state with new results",
'value': "The value of the metric"}
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],u[i:i+25]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],u[splits[i]:splits[i+1]]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss)*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
return t.item() if t.numel()==1 else t
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.losses,self.values = [],[],[]
names = [m.name for m in self._valid_mets]
if self.train_metrics: names = [f'train_{n}' for n in names] + [f'valid_{n}' for n in names]
else: names = ['train_loss', 'valid_loss'] + names[1:]
if self.add_time: names.append('time')
self.metric_names = ['epoch']+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
mets = [self.smooth_loss] + self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = [getattr(self, 'epoch', 0)]
def begin_train (self): [m.reset() for m in self._train_mets]
def after_train (self): self.log += [_maybe_item(m.value) for m in self._train_mets]
def begin_validate(self): [m.reset() for m in self._valid_mets]
def after_validate(self): self.log += [_maybe_item(m.value) for m in self._valid_mets]
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return []
return [self.loss] + (self.metrics if self.train_metrics else [])
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return []
return [self.loss] + self.metrics
def plot_loss(self): plt.plot(self.losses)
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
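
For instance (a small sketch with a made-up metric function), turning on `train_metrics` adds a `train_`/`valid_` pair of columns for each metric:
###Code
#Hypothetical sketch: recording metrics on the training set as well
def mae_metric(out, targ): return (out-targ).abs().mean()
learn_tm = synth_learner(n_train=5, metrics=mae_metric)
learn_tm.recorder.train_metrics = True
with learn_tm.no_logging(): learn_tm.fit(1)
test_eq(learn_tm.recorder.metric_names,
        ['epoch', 'train_loss', 'train_mae_metric', 'valid_loss', 'valid_mae_metric', 'time'])
###Output
_____no_output_____
###Markdown
Setting the attribute on the already-registered `Recorder` before calling `fit` is enough here, since the column names are rebuilt in its `begin_fit`.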
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
res = tensor(self.losses).mean()
self.log += [res, res] if self.train_metrics else [res]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
###Output
_____no_output_____
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss()
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
> Warning: If your dataset is unlabelled, the targets will all be 0s.

> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'.
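
As a sketch of such a custom loss (hypothetical, for illustration only), the class below exposes a `reduction` attribute that `loss_not_reduced` can temporarily switch to 'none':
###Code
#Hypothetical sketch: a custom loss exposing a `reduction` attribute, usable with `with_loss=True`
class SimpleL1Loss():
    def __init__(self): self.reduction = 'mean'
    def __call__(self, out, targ):
        err = (out-targ).abs()
        return err.mean() if self.reduction == 'mean' else err

learn_l1 = Learner(synth_dbunch(n_train=5), RegModel(), loss_func=SimpleL1Loss())
preds,targs,losses = learn_l1.get_preds(with_loss=True)
test_close(losses, (preds-targs).abs())
test_eq(learn_l1.loss_func.reduction, 'mean')  #restored by `loss_not_reduced` afterwards
###Output
_____no_output_____
###Markdown
Without such an attribute, `loss_not_reduced` falls back to wrapping the loss function in `partial(..., reduction='none')`, so the function must then accept a `reduction` keyword argument instead.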
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
learn.dbunch.dls += (dl,)
preds,targs = learn.get_preds(ds_idx=2)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(ds_idx=2, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.opt = self.create_opt(lr=self.lr)
self.opt.freeze_to(n)
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: assert torch.allclose(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): assert torch.allclose(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: assert torch.allclose(end[i],init[i])
#bn was trained
for i in [2,3]: assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
###Output
[0, 16.396854400634766, 17.765403747558594, '00:00']
[0, 13.3157377243042, 14.431344985961914, '00:00']
[0, 10.793878555297852, 11.725634574890137, '00:00']
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01b_script.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 22_vision_learner.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_utils_test.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner
> Basic class for handling the training loop

We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(bs*n)
return TensorDataset(x, a*x + b + 0.1*torch.randn(bs*n))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export core
_camel_re1 = re.compile('(.)([A-Z][a-z]+)')
_camel_re2 = re.compile('([a-z0-9])([A-Z])')
def camel2snake(name):
s1 = re.sub(_camel_re1, r'\1_\2', name)
return re.sub(_camel_re2, r'\1_\2', s1).lower()
test_eq(camel2snake('ClassAreCamel'), 'class_are_camel')
#export
def class2attr(self, cls_name):
return camel2snake(re.sub(rf'{cls_name}$', '', self.__class__.__name__) or cls_name.lower())
#export
@docs
class Callback():
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
def __call__(self, event_name): getattr(self, event_name, noop)()
def __repr__(self): return self.__class__.__name__
def __getattr__(self, k):
if k=='learn': raise AttributeError
if not hasattr(self,'learn'): raise AttributeError
return getattr(self.learn, k)
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
_docs=dict(__call__="Call `self.{event_name}` if it's defined",
__getattr__="Passthrough to get the attributes of `self.learn`")
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions: looping through the data, we:
- compute the output of the model from the input
- calculate a loss between this output and the desired target
- compute the gradients of this loss with respect to all the model parameters
- update the parameters accordingly
- zero all the gradients

Any tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:
- `begin_fit`: called before doing anything, ideal for initial setup.
- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.
- `begin_train`: called at the beginning of the training part of an epoch.
- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).
- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.
- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).
- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).
- `after_step`: called after the step and before the gradients are zeroed.
- `after_batch`: called at the end of a batch, for any clean-up before the next one.
- `after_train`: called at the end of the training phase of an epoch.
- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.
- `after_validate`: called at the end of the validation part of an epoch.
- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.
- `after_fit`: called at the end of training, for final clean-up.
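
As a minimal illustration (a hypothetical sketch, not part of the library), the callback below hooks two of these events to time each epoch and report the last loss seen in it:
###Code
#Hypothetical sketch: a tiny callback implementing two of the events listed above
import time

class EpochTimerCallback(Callback):
    "Time each epoch and print the last loss computed in it"
    def begin_epoch(self): self.epoch_start = time.time()
    def after_epoch(self): print(f'epoch done in {time.time()-self.epoch_start:.2f}s - last loss {self.loss:.4f}')
###Output
_____no_output_____
###Markdown
Thanks to `Callback.__getattr__` (defined above), `self.loss` transparently reads `learn.loss`; the callback can then be passed to `Learner.fit` through `cbs` once the `Learner` class is defined below.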
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that this only works for getting the value of an attribute; if you want to change it, you have to access it explicitly with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_loss=False): self.with_loss = with_loss
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow

It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
CancelFitException="Skip the rest of this batch and go to `after_batch`",
CancelEpochException="Skip the rest of the training part of the epoch and go to `after_train`",
CancelTrainException="Skip the rest of the validation part of the epoch and go to `after_validate`",
CancelValidException="Skip the rest of this epoch and go to `after_epoch`",
CancelBatchException="Interrupts training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after it with the following events:
- `after_cancel_batch`: reached immediately after a `CancelBatchException`, before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException`, before proceeding to `after_train`
- `after_cancel_validate`: reached immediately after a `CancelValidException`, before proceeding to `after_validate`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException`, before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException`, before proceeding to `after_fit`
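
For example (hypothetical sketches, not part of the library), gradient accumulation can be implemented by raising `CancelBatchException` in `after_backward` so the step/zeroing is skipped, and an early stop on a non-finite loss by raising `CancelFitException` and reporting it in `after_cancel_fit`:
###Code
#Hypothetical sketches using the exceptions and `after_cancel_*` events described above
class GradAccumCallback(Callback):
    "Let the optimizer step only every `n_acc` batches; gradients accumulate in between"
    def __init__(self, n_acc=2): self.n_acc = n_acc
    def after_backward(self):
        if (self.iter+1) % self.n_acc != 0: raise CancelBatchException()

class StopOnNaNCallback(Callback):
    "Interrupt training as soon as the loss becomes non-finite"
    def after_loss(self):
        if not torch.isfinite(self.loss).all(): raise CancelFitException()
    def after_cancel_fit(self): print('Training interrupted: non-finite loss encountered')
###Output
_____no_output_____
###Markdown
With the training loop defined below, `CancelBatchException` raised in `after_backward` skips `after_step` and the gradient zeroing but still triggers `after_cancel_batch` and `after_batch`, which is what gradient accumulation needs (a complete version would also scale the loss by the number of accumulated batches).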
###Code
# export
_events = 'begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit'.split()
mk_class('event', **{o:o for o in _events},
doc="All possible events as attributes to get tab-completion and typo-proofing")
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
event.after_backward
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.callbacks = [TrainEvalCallback]
class Learner():
"Group together a `model`, some `dbunch` and a `loss_func` to handle training"
def __init__(self, dbunch, model, loss_func=None, opt_func=SGD, lr=1e-2, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn")
#TODO: infer loss_func from data
self.loss_func = CrossEntropyLossFlat() if loss_func is None else loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.metrics = [m if isinstance(m, Metric) else AvgMetric(m) for m in L(metrics)]
self.training,self.logger,self.opt = False,print,None
self.cbs = L([])
self.add_cbs(cbf() for cbf in L(defaults.callbacks))
self.add_cbs(cbs)
self.add_cbs(cbf() for cbf in L(cb_funcs))
def add_cbs(self, cbs):
"Add `cbs` to the list of `Callback` and register `self` as their learner"
for cb in L(cbs): self.add_cb(cb)
def add_cb(self, cb):
"Add `cb` to the list of `Callback` and register `self` as their learner"
if getattr(self, cb.name, None):
error = f"There is another object registered in self.{cb.name}, pick a new name."
assert isinstance(getattr(self, cb.name), cb.__class__), error
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
def remove_cbs(self, cbs):
"Remove `cbs` from the list of `Callback` and deregister `self` as their learner"
for cb in L(cbs): self.remove_cb(cb)
def remove_cb(self, cb):
"Add `cb` from the list of `Callback` and deregister `self` as their learner"
cb.learn = None
setattr(self, cb.name, None)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
"Context manage that temporarily adds `cbs`"
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def create_opt(self, lr=None):
"Create an optimizer with `lr`"
opt = self.opt_func(self.splitter(self.model), lr=self.lr if lr is None else lr)
if not self.wd_bn_bias:
for p in bn_bias_params(self.model):
opt.state[p] = {**opt.state.get(p, {}), 'do_wd': False}
if self.train_bn:
for p in bn_bias_params(self.model, with_bias=False):
opt.state[p] = {**opt.state.get(p, {}), 'force_train': True}
return opt
def one_batch(self, xb, yb, i=None):
"Train or evaluate `self.model` on batch `(xb,yb)`"
try:
if i is not None: self.iter = i
self.xb,self.yb = xb,yb; self('begin_batch')
self.pred = self.model(self.xb); self('after_pred')
self.loss = self.loss_func(self.pred, self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def all_batches(self):
"Train or evaluate `self.model` on all batches of `self.dl`"
self.n_iter = len(self.dl)
for i,(xb,yb) in enumerate(self.dl): self.one_batch(xb, yb, i)
def _do_begin_fit(self, n_epoch):
"Prepare evertyhing for training `epochs` epochs"
self.n_epoch,self.loss = n_epoch,tensor(0.)
self('begin_fit')
def _do_epoch_train(self):
"Execute the training part of the `epoch`-th epoch"
self.dl = self.dbunch.train_dl
try:
self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self):
"Execute the validation part of an epoch"
try:
self.dl = self.dbunch.valid_dl
self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`."
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.opt = self.create_opt(lr=lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, dl=None, cbs=None):
"Validate on `dl` with potential new `cbs`."
self.dl = dl or self.dbunch.valid_dl
with self.added_cbs(cbs), self.no_logging():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, with_loss=False):
"Get the predictions and targets on the `ds_idx`-th dbunchset, optionally `with_loss`"
self.dl = self.dbunch.dls[ds_idx]
cb = GatherPredsCallback(with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
if with_loss: return (torch.cat(cb.preds),torch.cat(cb.targets),torch.cat(cb.losses))
res = (torch.cat(cb.preds),torch.cat(cb.targets))
return res
def __call__(self, event_name):
"Call `event_name` (one or a list) for all callbacks"
for e in L(event_name): self._call_one(e)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
@contextmanager
def no_logging(self):
"Context manager to temporarily remove `logger`"
old_logger = self.logger
self.logger = noop
yield
self.logger = old_logger
@contextmanager
def loss_not_reduced(self):
"A context manager to evaluate `loss_func` with reduction set to none."
if hasattr(self.loss_func, 'reduction'):
self.old_red = self.loss_func.reduction
self.loss_func.reduction = 'none'
yield
self.loss_func.reduction = self.old_red
else:
old_loss_func = self.loss_func
self.loss_func = partial(self.loss_func, reduction='none')
yield
self.loss_func = old_loss_func
def save(self, file, with_opt=True):
"Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`"
#TODO: if rank_distrib(): return # don't save if slave proc
if not hasattr(self, 'opt'): with_opt=False
if not with_opt: state = get_model(self.model).state_dict()
else: state = {'model': get_model(self.model).state_dict(), 'opt':self.opt.state_dict()}
torch.save(state, join_path_file(file, self.path/self.model_dir, ext='.pth'))
def load(self, file, with_opt=None, device=None, strict=True):
"Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
if device is None: device = self.dbunch.device
elif isinstance(device, int): device = torch.device('cuda', device)
state = torch.load(join_path_file(file, self.path/self.model_dir, ext='.pth'))
if set(state.keys()) == {'model', 'opt'}:
model_state = state['model']
get_model(self.model).load_state_dict(model_state, strict=strict)
if ifnone(with_opt,True):
if self.opt is None: self.opt = self.create_opt(self.lr)
try: self.opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
pass
else:
if with_opt: warn("Saved filed doesn't contain an optimizer state.")
get_model(self.model).load_state_dict(state, strict=strict)
return self
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.

`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` under its snake_cased name (e.g. `TrainEvalCallback` becomes `learn.train_eval`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`.

`metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below).
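
As a small, self-contained sketch of these arguments (the splitter below is made up for illustration), here is a `Learner` built directly from the synthetic data and model above, with a custom `splitter` that puts the two parameters of `RegModel` in separate parameter groups:
###Code
#Hypothetical sketch: a Learner with a custom parameter-group splitter
def _ab_splitter(m): return [[m.a], [m.b]]

learn_tst = Learner(synth_dbunch(), RegModel(), loss_func=MSELossFlat(), splitter=_ab_splitter)
test_eq(len(learn_tst.cbs), 1)        #only the default TrainEvalCallback for now
learn_tst.fit(1)                      #the optimizer is created here, with the two parameter groups
test_eq(len(learn_tst.opt.hypers), 2) #one set of hyper-parameters per parameter group
###Output
_____no_output_____
###Markdown
Training loop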
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, **kwargs):
return Learner(synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda), RegModel(), loss_func=MSELossFlat(), **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, true_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, self.xb)
test_eq(self.save_yb, self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.xb + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.xb * (self.pred.data - self.yb)).mean()
self.grad_b = 2 * (self.pred.data - self.yb).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
xb,yb = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(xb, yb, 42))
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
assert learn1.opt is None
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert learn.test_train_eval is None
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:
- `model`: the model used for training/validation
- `dbunch`: the underlying `DataBunch`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current `DataLoader` used for iteration
- `xb`: last input drawn from `self.dl` (potentially modified by callbacks)
- `yb`: last target drawn from `self.dl` (potentially modified by callbacks)
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)

The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:
- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0. to 1., the percentage of training iterations completed
- `training`: flag to indicate if we're in training mode or not

The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:
- `smooth_loss`: an exponentially-averaged version of the training loss
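
As a quick sketch (hypothetical, for illustration only), the callback below reads a few of these attributes to print a one-line progress summary at the end of each training phase:
###Code
#Hypothetical sketch: a callback reading some of the attributes listed above
class ProgressPrintCallback(Callback):
    run_after=TrainEvalCallback   #so `pct_train` is already updated for the current batch
    def after_batch(self):
        if self.training and self.iter == self.n_iter-1:
            print(f'epoch {self.epoch+1}/{self.n_epoch} - {100*self.pct_train:.0f}% of training done')

learn_prog = synth_learner(n_train=2)
test_stdout(lambda: learn_prog.fit(2, cbs=ProgressPrintCallback()),
            'epoch 1/2 - 50% of training done\nepoch 2/2 - 100% of training done')
###Output
_____no_output_____
###Markdown
Control flow testing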
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = {'reset': "Reset inner state to prepare for new computation",
'name': "Name of the `Metric`, camel-cased and with Metric removed",
'accumulate': "Use `learn` to update the state with new results",
'value': "The value of the metric"}
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.

> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],u[i:i+25]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],u[splits[i]:splits[i+1]]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss)*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
return t.item() if t.numel()==1 else t
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.losses,self.values = [],[],[]
names = [m.name for m in self._valid_mets]
if self.train_metrics: names = [f'train_{n}' for n in names] + [f'valid_{n}' for n in names]
else: names = ['train_loss', 'valid_loss'] + names[1:]
if self.add_time: names.append('time')
self.metric_names = ['epoch']+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
mets = [self.smooth_loss] + self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = [getattr(self, 'epoch', 0)]
def begin_train (self): [m.reset() for m in self._train_mets]
def after_train (self): self.log += [_maybe_item(m.value) for m in self._train_mets]
def begin_validate(self): [m.reset() for m in self._valid_mets]
def after_validate(self): self.log += [_maybe_item(m.value) for m in self._valid_mets]
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return []
return [self.loss] + (self.metrics if self.train_metrics else [])
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return []
return [self.loss] + self.metrics
def plot_loss(self): plt.plot(self.losses)
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
res = tensor(self.losses).mean()
self.log += [res, res] if self.train_metrics else [res]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
###Output
_____no_output_____
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss()
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
> Warning: If your dataset is unlabelled, the targets will all be 0s. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'.
###Code
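#Illustrative sketch for the note above (a hypothetical loss class, not a library one): exposing a
#`reduction` attribute lets `loss_not_reduced` switch it to per-item losses, which is what
#`with_loss=True` relies on.
class _FlatMSE:
    def __init__(self): self.reduction = 'mean'
    def __call__(self, out, targ): return F.mse_loss(out, targ, reduction=self.reduction)
_learn = synth_learner(n_train=5)
_learn.loss_func = _FlatMSE()
_preds,_targs,_losses = _learn.get_preds(with_loss=True)
test_eq(_losses.shape, _targs.shape)   #one loss value per target, since the loss was computed with reduction='none'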
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
learn.dbunch.dls += (dl,)
preds,targs = learn.get_preds(ds_idx=2)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(ds_idx=2, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.opt = self.create_opt(lr=self.lr)
self.opt.freeze_to(n)
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
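#A minimal usage sketch (illustrative assumptions: `synth_learner` from earlier in this notebook and a
#hypothetical two-group splitter; with several parameter groups, freezing controls which groups the
#optimizer updates).
def _two_groups(m): return [[m.a],[m.b]]
_learn = synth_learner(splitter=_two_groups)
_learn.freeze()      #only the last parameter group (here [m.b]) is trained; BatchNorm stays trainable since `train_bn=True` by default
_learn.freeze_to(-2) #the last two groups are trained, which with only two groups is the same as unfreezing
_learn.unfreeze()    #all parameter groups are trained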
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: assert torch.allclose(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): assert torch.allclose(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: assert torch.allclose(end[i],init[i])
#bn was trained
for i in [2,3]: assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
###Output
[0, 22.762685775756836, 20.408374786376953, '00:00']
[0, 19.32206153869629, 17.310680389404297, '00:00']
[0, 16.410184860229492, 14.687780380249023, '00:00']
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01b_script.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 22_vision_learner.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_utils_test.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner > Basic class for handling the training loop. We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_data(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(bs*n)
return TensorDataset(x, a*x + b + 0.1*torch.randn(bs*n))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
source_link(bn_bias_params)
#export core
_camel_re1 = re.compile('(.)([A-Z][a-z]+)')
_camel_re2 = re.compile('([a-z0-9])([A-Z])')
def camel2snake(name):
s1 = re.sub(_camel_re1, r'\1_\2', name)
return re.sub(_camel_re2, r'\1_\2', s1).lower()
test_eq(camel2snake('ClassAreCamel'), 'class_are_camel')
#export
def class2attr(self, cls_name):
return camel2snake(re.sub(rf'{cls_name}$', '', self.__class__.__name__) or cls_name.lower())
#export
@docs
class Callback():
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
def __call__(self, event_name): getattr(self, event_name, noop)()
def __repr__(self): return self.__class__.__name__
def __getattr__(self, k):
if k=='learn': raise AttributeError
if not hasattr(self,'learn'): raise AttributeError
return getattr(self.learn, k)
@property
def name(self):
"Name of the `Callback`, snake-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
_docs=dict(__call__="Call `self.{event_name}` if it's defined",
__getattr__="Passthrough to get the attributes of `self.learn`")
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
###Code
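#A toy illustration of the event mechanism described above (not exported): a callback only implements the
#events it cares about, and calling it with any other event name is a no-op.
class _NoisyCallback(Callback):
    def begin_fit(self): print('begin_fit')
    def after_batch(self): print('after_batch')
_tst = _NoisyCallback()
test_stdout(lambda: _tst('begin_fit'), 'begin_fit')
_tst('after_pred')   #not implemented by this callback, so nothing happens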
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that it only works to get the value of the attribute; if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0"
self.learn.train_iter,self.learn.pct_train = 0,0.
def begin_batch(self):
"On the first batch, put the model on the right device"
if self.learn.train_iter == 0: self.model.to(find_device(self.xb))
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.begin_batch)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_loss=False): self.with_loss = with_loss
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
    CancelBatchException="Skip the rest of this batch and go to `after_batch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelFitException="Interrupt training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
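#An illustrative sketch of how these exceptions are meant to be used (a toy gradient-accumulation style
#callback, not part of the library; `Learner` is only defined further down, so this cell just defines it).
class _GradAccumSketch(Callback):
    "Toy callback: let gradients accumulate over two batches by cancelling every other optimizer step"
    def begin_fit(self): self.count = 0
    def after_backward(self):
        self.count += 1
        if self.count % 2: raise CancelBatchException()   #skips `after_step`, the step and the zeroing of the grads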
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
###Code
# export
_events = 'begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit'.split()
mk_class('event', **{o:o for o in _events},
doc="All possible events as attributes to get tab-completion and typo-proofing")
show_doc(event, name='event', title_level=3)
event.after_backward
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.callbacks = [TrainEvalCallback]
class Learner():
"Group together a `model`, some `data` and a `loss_func` to handle training"
def __init__(self, model, data, loss_func, opt_func=SGD, lr=1e-2, splitter=trainable_params,
cbs=None, cb_funcs=None, metrics=None, path=None, wd_bn_bias=False):
self.model,self.data,self.loss_func = model,data,loss_func
self.opt_func,self.lr,self.splitter,self.wd_bn_bias = opt_func,lr,splitter,wd_bn_bias
self.path = path if path is not None else getattr(data, 'path', Path('.'))
self.metrics = [m if isinstance(m, Metric) else AvgMetric(m) for m in L(metrics)]
self.training,self.logger,self.opt = False,print,None
self.cbs = L([])
self.add_cbs(cbf() for cbf in L(defaults.callbacks))
self.add_cbs(cbs)
self.add_cbs(cbf() for cbf in L(cb_funcs))
def add_cbs(self, cbs):
"Add `cbs` to the list of `Callback` and register `self` as their learner"
for cb in L(cbs): self.add_cb(cb)
def add_cb(self, cb):
"Add `cb` to the list of `Callback` and register `self` as their learner"
if getattr(self, cb.name, None):
error = f"There is another object registered in self.{cb.name}, pick a new name."
assert isinstance(getattr(self, cb.name), cb.__class__), error
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
def remove_cbs(self, cbs):
"Remove `cbs` from the list of `Callback` and deregister `self` as their learner"
for cb in L(cbs): self.remove_cb(cb)
def remove_cb(self, cb):
"Remove `cb` from the list of `Callback` and deregister `self` as their learner"
cb.learn = None
setattr(self, cb.name, None)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def create_opt(self, lr=None):
opt = self.opt_func(self.splitter(self.model), lr=self.lr if lr is None else lr)
if not self.wd_bn_bias:
for p in bn_bias_params(self.model):
p_state = opt.state.get(p, {})
p_state['do_wd'] = False
opt.state[p] = p_state
return opt
def one_batch(self, xb, yb, i=None):
"Train or evaluate `self.model` on batch `(xb,yb)`"
try:
if i is not None: self.iter = i
self.xb,self.yb = xb,yb; self('begin_batch')
self.pred = self.model(self.xb); self('after_pred')
self.loss = self.loss_func(self.pred, self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def all_batches(self):
"Train or evaluate `self.model` on all batches of `self.dl`"
self.n_iter = len(self.dl)
for i,(xb,yb) in enumerate(self.dl): self.one_batch(xb, yb, i)
def _do_begin_fit(self, n_epoch):
"Prepare everything for training `n_epoch` epochs"
self.n_epoch,self.loss = n_epoch,tensor(0.)
self('begin_fit')
def _do_epoch_train(self):
"Execute the training part of the `epoch`-th epoch"
self.dl = self.data.train_dl
try:
self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self):
"Execute the validation part of an epoch"
try:
self.dl = self.data.valid_dl
self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`."
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.opt = self.create_opt(lr=lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, dl=None, cbs=None):
"Validate on `dl` with potential new `cbs`."
self.dl = dl or self.data.valid_dl
with self.added_cbs(cbs), self.no_logging():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, with_loss=False):
"Get the predictions and targets on the `ds_idx`-th dataset, optionally `with_loss`"
self.dl = self.data.dls[ds_idx]
cb = GatherPredsCallback(with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
if with_loss: return (torch.cat(cb.preds),torch.cat(cb.targets),torch.cat(cb.losses))
res = (torch.cat(cb.preds),torch.cat(cb.targets))
return res
def __call__(self, event_name):
"Call `event_name` (one or a list) for all callbacks"
for e in L(event_name): self._call_one(e)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
@contextmanager
def no_logging(self):
"Context manager to temporarily remove `logger`"
old_logger = self.logger
self.logger = noop
yield
self.logger = old_logger
@contextmanager
def loss_not_reduced(self):
"A context manager to evaluate `loss_func` with reduction set to none."
if hasattr(self.loss_func, 'reduction'):
self.old_red = self.loss_func.reduction
self.loss_func.reduction = 'none'
yield
self.loss_func.reduction = self.old_red
else:
old_loss_func = self.loss_func
self.loss_func = partial(self.loss_func, reduction='none')
yield
self.loss_func = old_loss_func
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (using its snake_case `name`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`. `metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below).
###Code
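#Illustrative sketch (a hypothetical splitter, just splitting the two scalar parameters of `RegModel`
#into two parameter groups for the optimizer).
def _ab_splitter(m): return [[m.a],[m.b]]
_learn = Learner(RegModel(), synth_data(), MSELossFlat(), splitter=_ab_splitter)
test_eq(len(_learn.splitter(_learn.model)), 2)             #two parameter groups will be passed to `opt_func`
assert isinstance(_learn.train_eval, TrainEvalCallback)    #callbacks are registered under their snake_case `name`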
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, **kwargs):
return Learner(RegModel(), synth_data(n_train=n_train,n_valid=n_valid, cuda=cuda), MSELossFlat(), **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
###Output
_____no_output_____
###Markdown
Training loop
###Code
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback)
xb,yb = learn.data.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, true_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, self.xb)
test_eq(self.save_yb, self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.xb + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.xb * (self.pred.data - self.yb)).mean()
self.grad_b = 2 * (self.pred.data - self.yb).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
xb,yb = learn.data.one_batch()
learn = synth_learner(cbs=TestOneBatch(xb, yb, 42))
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.data.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.data.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert learn.test_train_eval is None
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `data`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `xb`: last input drawn from `self.dl` (potentially modified by callbacks)- `yb`: last target drawn from `self.dl` (potentially modified by callbacks)- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`). The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or not. The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss. Control flow testing
###Code
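#Illustrative sketch (not exported): inside a callback, the state listed above is available as plain
#attributes thanks to `Callback.__getattr__`.
class StatePeekCallback(Callback):
    def after_batch(self):
        assert 0 <= self.iter < self.n_iter              #position in the current `self.dl`
        assert self.epoch < self.n_epoch and self.loss is not None
        assert self.training == self.model.training      #kept in sync by `TrainEvalCallback`
_learn = synth_learner(cbs=StatePeekCallback())
_learn.fit(1)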
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = {'reset': "Reset inner state to prepare for new computation",
'name': "Name of the `Metric`, snake-cased and with Metric removed",
'accumulate': "Use `learn` to update the state with new results",
'value': "The value of the metric"}
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`; otherwise you'll need to implement the following methods. > Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
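#Illustrative sketch of a custom `Metric` (not a library class): a running maximum of the absolute error
#can't be computed as an average over batches, so it implements `reset`/`accumulate`/`value` itself and
#keeps its state on the CPU with `to_detach`, as the note above recommends.
class MaxAbsErrMetric(Metric):
    def reset(self): self.max = tensor(0.)
    def accumulate(self, learn): self.max = torch.max(self.max, to_detach((learn.pred-learn.yb).abs().max()))
    @property
    def value(self): return self.max
test_eq(MaxAbsErrMetric().name, 'max_abs_err')
_m = MaxAbsErrMetric(); _m.reset()
class _FakeLearn: pred,yb = tensor([1.,2.]),tensor([0.,0.])
_m.accumulate(_FakeLearn())
test_eq(_m.value, tensor(2.))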
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],u[i:i+25]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],u[splits[i]:splits[i+1]]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss)*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder -
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
return t.item() if t.numel()==1 else t
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.losses,self.values = [],[],[]
names = [m.name for m in self._valid_mets]
if self.train_metrics: names = [f'train_{n}' for n in names] + [f'valid_{n}' for n in names]
else: names = ['train_loss', 'valid_loss'] + names[1:]
if self.add_time: names.append('time')
self.metric_names = ['epoch']+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
mets = [self.smooth_loss] + self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = [getattr(self, 'epoch', 0)]
def begin_train (self): [m.reset() for m in self._train_mets]
def after_train (self): self.log += [_maybe_item(m.value) for m in self._train_mets]
def begin_validate(self): [m.reset() for m in self._valid_mets]
def after_validate(self): self.log += [_maybe_item(m.value) for m in self._valid_mets]
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return []
return [self.loss] + (self.metrics if self.train_metrics else [])
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return []
return [self.loss] + self.metrics
def plot_loss(self): plt.plot(self.losses)
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
res = tensor(self.losses).mean()
self.log += [res, res] if self.train_metrics else [res]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
###Output
_____no_output_____
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss()
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.data.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(learn.data.train_dl)
test_eq(res[0], res[1])
x,y = learn.data.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.data.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
> Warning: If your dataset is unlabelled, the targets will all be 0s. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'.
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.data.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
learn.data.dls += (dl,)
preds,targs = learn.get_preds(ds_idx=2)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(ds_idx=2, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_script.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_test_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_synth_learner.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner > Basic class for handling the training loop. We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_data(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(bs*n)
return TensorDataset(x, a*x + b + 0.1*torch.randn(bs*n))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = [Cuda()] if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, tfms=tfms)
valid_dl = TfmdDL(valid_ds, bs=bs, tfms=tfms)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export core
_camel_re1 = re.compile('(.)([A-Z][a-z]+)')
_camel_re2 = re.compile('([a-z0-9])([A-Z])')
def camel2snake(name):
s1 = re.sub(_camel_re1, r'\1_\2', name)
return re.sub(_camel_re2, r'\1_\2', s1).lower()
test_eq(camel2snake('ClassAreCamel'), 'class_are_camel')
#export
def class2attr(self, cls_name):
return camel2snake(re.sub(rf'{cls_name}$', '', self.__class__.__name__) or cls_name.lower())
#export
@docs
class Callback():
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
def __call__(self, event_name): getattr(self, event_name, noop)()
def __repr__(self): return self.__class__.__name__
def __getattr__(self, k): return getattr(self.learn, k)
@property
def name(self):
"Name of the `Callback`, snake-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
_docs=dict(__call__="Call `self.{event_name}` if it's defined",
__getattr__="Passthrough to get the attributes of `self.learn`")
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut that lets us write `self.bla` instead of `self.learn.bla` for any attribute `bla` of the learner.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that this passthrough only works for reading an attribute; to change it, you have to access it explicitly with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
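#Editor's sketch (not part of the library): a callback only needs to define the events it
#cares about (from the list above); anything else falls through to `noop`.
import time
class TimerCallback(Callback):
    def begin_fit(self): self.start_time = time.time()
    def after_fit(self): print(f'Training took {time.time()-self.start_time:.1f}s')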
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0"
self.learn.train_iter,self.learn.pct_train = 0,0.
def begin_batch(self):
"On the first batch, put the model on the right device"
if self.learn.train_iter == 0: self.model.to(find_device(self.xb))
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.begin_batch)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_loss=False): self.with_loss = with_loss
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow Sometimes we want to skip some of the steps of the training loop: in gradient accumulation, for instance, we don't always want to do the step/zeroing of the grads. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with an early-stopping strategy, we want to be able to interrupt the training loop completely. This is made possible by raising specific exceptions that the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
    CancelFitException="Interrupt training and go to `after_fit`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelBatchException="Skip the rest of this batch and go to `after_batch`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
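#Editor's sketch (not part of the original tests): a callback raises one of these exceptions
#to alter the control flow, e.g. to skip every other batch entirely.
class SkipOddBatches(Callback):
    def begin_batch(self):
        if self.iter % 2 == 1: raise CancelBatchException()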
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
###Code
# export
_events = 'begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit'.split()
mk_class('event', **{o:o for o in _events},
doc="All possible events as attributes to get tab-completion and typo-proofing")
show_doc(event, name='event', title_level=3)
event.after_backward
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Helper functions to grab certain parameters
###Code
# export core
def trainable_params(m):
"Return all trainable parameters of `m`"
return [p for p in m.parameters() if p.requires_grad]
m = nn.Linear(4,5)
test_eq(trainable_params(m), [m.weight, m.bias])
m.weight.requires_grad_(False)
test_eq(trainable_params(m), [m.bias])
#export core
def bn_bias_params(m):
"Return all bias and BatchNorm parameters"
if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
return list(m.parameters())
res = sum([bn_bias_params(c) for c in m.children()], [])
if hasattr(m, 'bias'): res.append(m.bias)
return res
model = nn.Sequential(nn.Linear(10,20), nn.BatchNorm1d(20), nn.Conv1d(3,4, 3))
test_eq(bn_bias_params(model), [model[0].bias, model[1].weight, model[1].bias, model[2].bias])
model = SequentialEx(nn.Linear(10,20), nn.Sequential(nn.BatchNorm1d(20), nn.Conv1d(3,4, 3)))
test_eq(bn_bias_params(model), [model[0].bias, model[1][0].weight, model[1][0].bias, model[1][1].bias])
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.callbacks = [TrainEvalCallback]
class Learner():
"Group together a `model`, some `data` and a `loss_func` to handle training"
def __init__(self, model, data, loss_func, opt_func=SGD, lr=1e-2, splitter=trainable_params,
cbs=None, cb_funcs=None, metrics=None, path=None, wd_bn_bias=False):
self.model,self.data,self.loss_func = model,data,loss_func
self.opt_func,self.lr,self.splitter,self.wd_bn_bias = opt_func,lr,splitter,wd_bn_bias
self.path = path if path is not None else getattr(data, 'path', Path('.'))
self.metrics = [m if isinstance(m, Metric) else AvgMetric(m) for m in L(metrics)]
self.training,self.logger,self.opt = False,print,None
self.cbs = L([])
self.add_cbs(cbf() for cbf in L(defaults.callbacks))
self.add_cbs(cbs)
self.add_cbs(cbf() for cbf in L(cb_funcs))
def add_cbs(self, cbs):
"Add `cbs` to the list of `Callback` and register `self` as their learner"
for cb in L(cbs): self.add_cb(cb)
def add_cb(self, cb):
"Add `cb` to the list of `Callback` and register `self` as their learner"
if getattr(self, cb.name, None):
error = f"There is another object registered in self.{cb.name}, pick a new name."
assert isinstance(getattr(self, cb.name), cb.__class__), error
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
def remove_cbs(self, cbs):
"Remove `cbs` from the list of `Callback` and deregister `self` as their learner"
for cb in L(cbs): self.remove_cb(cb)
def remove_cb(self, cb):
"Add `cb` from the list of `Callback` and deregister `self` as their learner"
cb.learn = None
setattr(self, cb.name, None)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def create_opt(self, lr=None):
opt = self.opt_func(self.splitter(self.model), lr=self.lr if lr is None else lr)
if not self.wd_bn_bias:
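            #(editor's note) flag BatchNorm/bias params so the optimizer applies no weight decay to them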
for p in bn_bias_params(self.model):
p_state = opt.state.get(p, {})
p_state['do_wd'] = False
opt.state[p] = p_state
return opt
def one_batch(self, xb, yb, i=None):
"Train or evaluate `self.model` on batch `(xb,yb)`"
try:
if i is not None: self.iter = i
self.xb,self.yb = xb,yb; self('begin_batch')
self.pred = self.model(self.xb); self('after_pred')
self.loss = self.loss_func(self.pred, self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def all_batches(self):
"Train or evaluate `self.model` on all batches of `self.dl`"
self.n_iter = len(self.dl)
for i,(xb,yb) in enumerate(self.dl): self.one_batch(xb, yb, i)
def _do_begin_fit(self, n_epoch):
"Prepare evertyhing for training `epochs` epochs"
self.n_epoch,self.loss = n_epoch,tensor(0.)
self('begin_fit')
def _do_epoch_train(self):
"Execute the training part of the `epoch`-th epoch"
self.dl = self.data.train_dl
try:
self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self):
"Execute the validation part of an epoch"
try:
self.dl = self.data.valid_dl
self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`."
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.opt = self.create_opt(lr=lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, dl=None, cbs=None):
"Validate on `dl` with potential new `cbs`."
self.dl = dl or self.data.valid_dl
with self.added_cbs(cbs), self.no_logging():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, with_loss=False):
"Get the predictions and targets on the `ds_idx`-th dataset, optionally `with_loss`"
self.dl = self.data.dls[ds_idx]
cb = GatherPredsCallback(with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
if with_loss: return (torch.cat(cb.preds),torch.cat(cb.targets),torch.cat(cb.losses))
res = (torch.cat(cb.preds),torch.cat(cb.targets))
return res
def __call__(self, event_name):
"Call `event_name` (one or a list) for all callbacks"
for e in L(event_name): self._call_one(e)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
@contextmanager
def no_logging(self):
"Context manager to temporarily remove `logger`"
old_logger = self.logger
self.logger = noop
yield
self.logger = old_logger
@contextmanager
def loss_not_reduced(self):
"A context manager to evaluate `loss_func` with reduction set to none."
if hasattr(self.loss_func, 'reduction'):
self.old_red = self.loss_func.reduction
self.loss_func.reduction = 'none'
yield
self.loss_func.reduction = self.old_red
else:
old_loss_func = self.loss_func
self.loss_func = partial(self.loss_func, reduction='none')
yield
self.loss_func = old_loss_func
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (under its snake-cased `name`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`. `metrics` is an optional list of metrics that can be either functions or `Metric`s (see below).
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, **kwargs):
return Learner(RegModel(), synth_data(n_train=n_train,n_valid=n_valid, cuda=cuda), MSELossFlat(), **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
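#Editor's sketch of a custom `splitter` (hypothetical two-part model): it only needs to
#return a list of parameter groups built from the model.
def _two_group_splitter(m): return [trainable_params(m[0]), trainable_params(m[1])]
_split_model = nn.Sequential(nn.Linear(4,5), nn.Linear(5,1))
test_eq(len(_two_group_splitter(_split_model)), 2)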
###Output
_____no_output_____
###Markdown
Training loop
###Code
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback)
xb,yb = learn.data.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, true_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, self.xb)
test_eq(self.save_yb, self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.xb + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.xb * (self.pred.data - self.yb)).mean()
self.grad_b = 2 * (self.pred.data - self.yb).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
xb,yb = learn.data.one_batch()
learn = synth_learner(cbs=TestOneBatch(xb, yb, 42))
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.data.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.data.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert learn.test_train_eval is None
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `data`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `xb`: last input drawn from `self.dl` (potentially modified by callbacks)- `yb`: last target drawn from `self.dl` (potentially modified by callbacks)- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = {'reset': "Reset inner state to prepare for new computation",
'name': "Name of the `Metric`, camel-cased and with Metric removed",
'accumulate': "Use `learn` to update the state with new results",
'value': "The value of the metric"}
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`; otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
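#Editor's sketch (not part of the library): a metric that isn't a plain batch average, the
#maximum absolute error over the validation set. It follows the blueprint above and assumes
#a single-tensor target, as in the synthetic regression example of this notebook.
class MaxError(Metric):
    def reset(self): self.max_err = tensor(0.)
    def accumulate(self, learn):
        self.max_err = torch.max(self.max_err, to_detach((learn.pred-learn.yb).abs().max()))
    @property
    def value(self): return self.max_err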
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],u[i:i+25]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],u[splits[i]:splits[i+1]]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss)*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss), self.val, self.beta)
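        #(editor's note) torch.lerp(loss, val, beta) = beta*val + (1-beta)*loss, an exponentially
        #weighted average; `value` divides by (1-beta**count) to debias the first iterations (as in Adam)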
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder -
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
return t.item() if t.numel()==1 else t
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.losses,self.values = [],[],[]
names = [m.name for m in self._valid_mets]
if self.train_metrics: names = [f'train_{n}' for n in names] + [f'valid_{n}' for n in names]
else: names = ['train_loss', 'valid_loss'] + names[1:]
if self.add_time: names.append('time')
self.metric_names = ['epoch']+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
mets = [self.smooth_loss] + self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = [getattr(self, 'epoch', 0)]
def begin_train (self): [m.reset() for m in self._train_mets]
def after_train (self): self.log += [_maybe_item(m.value) for m in self._train_mets]
def begin_validate(self): [m.reset() for m in self._valid_mets]
def after_validate(self): self.log += [_maybe_item(m.value) for m in self._valid_mets]
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return []
return [self.loss] + (self.metrics if self.train_metrics else [])
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return []
return [self.loss] + self.metrics
def plot_loss(self): plt.plot(self.losses)
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
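#Editor's check: the values printed above are also stored on the recorder
test_eq(len(learn.recorder.values), 1)
test_eq(len(learn.recorder.values[0]), 3) #train_loss, valid_loss, tst_metric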
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
res = tensor(self.losses).mean()
self.log += [res, res] if self.train_metrics else [res]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
###Output
_____no_output_____
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss()
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.data.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(learn.data.train_dl)
test_eq(res[0], res[1])
x,y = learn.data.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.data.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
> Warning: If your dataset is unlabelled, the targets will all be 0s.> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'.
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.data.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
learn.data.dls += (dl,)
preds,targs = learn.get_preds(ds_idx=2)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(ds_idx=2, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_script.ipynb.
Converted 02_transforms.ipynb.
Converted 03_pipeline.ipynb.
Converted 04_data_external.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_rect_augment.ipynb.
Converted 11_layers.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 30_text_core.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_synth_learner.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn = 'learn',None
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut that lets us write `self.bla` instead of `self.learn.bla` for any attribute `bla` of the learner.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that this passthrough only works for reading an attribute; to change it, you have to access it explicitly with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False): store_attr(self, "with_input,with_loss")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs=[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss:
bs = find_bs(self.yb)
loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
self.losses.append(to_detach(loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow Sometimes we want to skip some of the steps of the training loop: in gradient accumulation, for instance, we don't always want to do the step/zeroing of the grads. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with an early-stopping strategy, we want to be able to interrupt the training loop completely. This is made possible by raising specific exceptions that the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
    CancelFitException="Interrupt training and go to `after_fit`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelBatchException="Skip the rest of this batch and go to `after_batch`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
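#Editor's quick check of `replacing_yield` (not part of the original tests)
class _Tmp: pass
_tmp_o = _Tmp(); _tmp_o.x = 1
@contextmanager
def _tmp_x(o, v): return replacing_yield(o, 'x', v)
with _tmp_x(_tmp_o, 5): test_eq(_tmp_o.x, 5)
test_eq(_tmp_o.x, 1)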
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
elif device is None: device = 'cpu'
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
elif with_opt: warn("Saved filed doesn't contain an optimizer state.")
# export
def _try_concat(o):
try:
return torch.cat(o)
except:
return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L())
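#Editor's quick check of `_try_concat` (not part of the original tests): it concatenates when
#shapes allow it, and otherwise falls back to a flat `L` of rows.
test_eq(_try_concat([torch.zeros(2,3), torch.ones(2,3)]).shape, (4,3))
test_eq(len(_try_concat([torch.zeros(2,3), torch.zeros(2,4)])), 4)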
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics")
self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property
def metrics(self): return self._metrics
@metrics.setter
def metrics(self,v): self._metrics = L(v).map(mk_metric)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(False): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(True ): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
try:
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
#self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
if dl is None: dl = self.dbunch.dls[ds_idx]
with self.added_cbs(cbs), self.no_logging(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, with_decoded=False, act=None):
#self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
preds = act(torch.cat(cb.preds))
res = (preds, detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_decoded: res = res + (getattr(self.loss_func, 'decodes', noop)(preds),)
if with_input: res = (tuple(_try_concat(o) for o in zip(*cb.inputs)),) + res
if with_loss: res = res + (torch.cat(cb.losses),)
return res
def predict(self, item, rm_type_tfms=0):
dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms)
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
if dl is None: dl = self.dbunch.dls[ds_idx]
b = dl.one_batch()
_,_,preds = self.get_preds(dl=[b], with_decoded=True)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
def show_training_loop(self):
loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train',
'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward',
'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train',
'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop',
'**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate',
'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit',
'after_cancel_fit', 'after_fit']
indent = 0
for s in loop:
if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2
elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}')
else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s))
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def no_mbar(self): return replacing_yield(self, 'create_mbar', False)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
distrib_barrier()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
remove_cb="Add `cb` from the list of `Callback` and deregister `self` as their learner",
added_cbs="Context manage that temporarily adds `cbs`",
ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`",
predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`",
show_training_loop="Show each step in the training loop",
no_logging="Context manager to temporarily remove `logger`",
no_mbar="Context manager to temporarily prevent the master progress bar from being created",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (using the snake-cased version of its class name). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`. `metrics` is an optional list of metrics, which can be either functions or `Metric`s (see below).
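As a quick illustration of `splitter` (a sketch relying on the synthetic `synth_dbunch`/`RegModel` helpers used throughout this notebook, so the split itself is purely illustrative), a splitter just returns the list of parameter groups the optimizer should see:
###Code
#Sketch: a custom splitter putting the two parameters of RegModel in separate groups
def _two_group_splitter(m): return [[m.a], [m.b]]
learn = Learner(synth_dbunch(), RegModel(), loss_func=MSELossFlat(), splitter=_two_group_splitter)
learn.create_opt()
test_eq(len(learn.opt.hypers), 2)  #one hyper-parameter dict per parameter group
###Output
_____no_output_____
###Markdown
Training loop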
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
for i in [0,1,3]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
test_close(end[2]-init[2], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
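Roughly speaking (a sketch ignoring events and cancellation exceptions), the training-mode work of `one_batch` boils down to:
###Code
#Sketch: the core of a training-mode batch, without callbacks or CancelBatchException handling
def manual_one_batch(model, loss_func, opt, xb, yb):
    pred = model(*xb)              #forward pass
    loss = loss_func(pred, *yb)    #compute the loss
    loss.backward()                #compute the gradients
    opt.step()                     #update the parameters
    opt.zero_grad()                #zero the gradients
    return pred,loss
###Output
_____no_output_____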
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
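For instance, here is a sketch of round-tripping a model through an in-memory buffer with the lower-level `save_model`/`load_model` helpers (this only relies on `torch.save`/`torch.load` accepting file-like objects; treat it as illustrative, not as the canonical way to checkpoint):
###Code
#Sketch: `file` can be a file-like buffer
import io
learn = synth_learner()
buf = io.BytesIO()
save_model(buf, learn.model, None, with_opt=False)
buf.seek(0)
load_model(buf, learn.model, None, with_opt=False)
###Output
_____no_output_____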
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `data`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss
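A minimal sketch of a callback that only reads a few of these attributes (it does not change anything in the training loop):
###Code
#Sketch: read-only access to `Learner` attributes from inside a callback
class PeekCallback(Callback):
    def after_batch(self):
        if self.training and self.iter == 0:
            print(f"epoch {self.epoch}: loss on the first training batch is {self.loss.item():.4f}")
learn = synth_learner(cbs=PeekCallback())
learn.fit(1)
###Output
_____no_output_____
###Markdown
Control flow testing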
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
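For example, here is a sketch (not part of the library) of a metric that is not a simple average over batches and therefore needs the full `Metric` interface:
###Code
#Sketch: a Metric that cannot be computed as a weighted average of per-batch values
class MaxError(Metric):
    "Largest absolute error seen over the whole validation set"
    def reset(self): self.max_err = tensor(0.)
    def accumulate(self, learn):
        err = (learn.pred - learn.yb[0]).abs().max()
        self.max_err = torch.max(self.max_err, to_detach(err))  #to_detach keeps the stored state on the CPU, as advised in the note above
    @property
    def value(self): return self.max_err
learn = synth_learner(metrics=MaxError())
learn.fit(1)
###Output
_____no_output_____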
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
def _maybe_reduce(val):
if num_distrib()>1:
val = val.clone()
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= num_distrib()
return val
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.iters,self.losses,self.values = [],[],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets[1:].map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
self.iters.append(self.smooth_loss.count)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.smooth_loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5, with_valid=True):
plt.plot(self.losses[skip_start:], label='train')
if with_valid:
plt.plot(self.iters, L(self.values).itemgot(1), label='valid')
plt.legend()
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
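A short sketch of turning training-set metrics on (with a throwaway MSE metric defined inline):
###Code
#Sketch: log metrics on the training set as well as the validation set
def _mse(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=_mse)
learn.recorder.train_metrics = True
learn.fit(1)
learn.recorder.metric_names
###Output
_____no_output_____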
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
if not self.training: test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
mean = tensor(self.losses).mean()
self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,2.975327253341675,2.8497583866119385,2.8497583866119385,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
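As a sketch of that note (an illustrative assumption, not library code), a custom loss only needs a `reduction` attribute that its `forward` respects:
###Code
#Sketch: a custom loss exposing `reduction` so that `with_loss=True` can temporarily switch it to 'none'
class _PlainL1Loss(Module):
    reduction = 'mean'
    def forward(self, out, targ): return F.l1_loss(out, targ, reduction=self.reduction)
learn = synth_learner(n_train=5)
learn.loss_func = _PlainL1Loss()
preds,targs,losses = learn.get_preds(with_loss=True)
test_eq(losses.shape, targs.shape)
###Output
_____no_output_____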
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds work with ds not evenly dividble by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
self.opt.clear_state()
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,11.501923561096191,11.670934677124023,00:00]
(#4) [0,9.419240951538086,9.610092163085938,00:00]
(#4) [0,7.74031400680542,7.916210174560547,00:00]
###Markdown
Exporting a `Learner`
###Code
#export
@patch
def export(self:Learner, fname='export.pkl'):
"Export the content of `self` without the items and the optimizer state for inference"
if rank_distrib(): return # don't export if slave proc
old_dbunch = self.dbunch
self.dbunch = self.dbunch.new_empty()
state = self.opt.state_dict()
self.opt = None
with warnings.catch_warnings():
#To avoid the warning that come from PyTorch about model not being checked
warnings.simplefilter("ignore")
torch.save(self, self.path/fname)
self.create_opt()
self.opt.load_state_dict(state)
self.dbunch = old_dbunch
###Output
_____no_output_____
###Markdown
TTA
###Code
#export
@patch
def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.5):
"Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation"
if dl is None: dl = self.dbunch.dls[ds_idx]
if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms)
with dl.dataset.set_split_idx(0), self.no_mbar():
self.progress.mbar = master_bar(list(range(n+1)))
aug_preds = []
for i in self.progress.mbar:
self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch
aug_preds.append(self.get_preds(dl=dl)[0][None])
aug_preds = torch.cat(aug_preds).mean(0)
self.epoch = n
preds,targs = self.get_preds(dl=dl)
preds = torch.lerp(aug_preds, preds, beta)
return preds,targs
###Output
_____no_output_____
###Markdown
In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset.
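Concretely, the final blend is a `torch.lerp`, which is the same weighted sum as described above (a tiny numeric sketch):
###Code
#Sketch: torch.lerp(aug, plain, beta) == (1-beta)*aug + beta*plain
aug_preds,plain_preds,beta = tensor([1.,2.]),tensor([3.,4.]),0.5
test_close(torch.lerp(aug_preds, plain_preds, beta), (1-beta)*aug_preds + beta*plain_preds)
###Output
_____no_output_____
###Markdown
Export -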
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 09b_vision_utils.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
Converted xse_resnext.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn = 'learn',None
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
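For example, a sketch of a callback that defines just two of these events (events it does not define are simply skipped):
###Code
#Sketch: a Callback only needs to define the events it cares about
class TwoEventsCallback(Callback):
    def begin_epoch(self): print("starting an epoch")
    def after_batch(self): print("finished a batch")
cb = TwoEventsCallback()
test_stdout(lambda: cb('begin_epoch'), "starting an epoch")
cb('after_pred')  #events the callback doesn't define fall back to `noop` and print nothing
###Output
_____no_output_____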
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False): store_attr(self, "with_input,with_loss")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs=[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss:
bs = find_bs(self.yb)
loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
self.losses.append(to_detach(loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
CancelFitException="Skip the rest of this batch and go to `after_batch`",
CancelEpochException="Skip the rest of the training part of the epoch and go to `after_train`",
CancelTrainException="Skip the rest of the validation part of the epoch and go to `after_validate`",
CancelValidException="Skip the rest of this epoch and go to `after_epoch`",
CancelBatchException="Interrupts training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
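For instance, a sketch (an illustration only, not the library's gradient accumulation implementation) of a callback that skips the optimizer step on every other batch by raising `CancelBatchException` after the backward pass:
###Code
#Sketch: raising CancelBatchException jumps to `after_cancel_batch` then `after_batch`,
#so the step and the zeroing of the gradients are skipped for odd-numbered batches
class SkipOddStepsCallback(Callback):
    def after_backward(self):
        if self.iter % 2 == 1: raise CancelBatchException()
###Output
_____no_output_____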
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
elif device is None: device = 'cpu'
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
    elif with_opt: warn("Saved file doesn't contain an optimizer state.")
# export
def _try_concat(o):
try:
return torch.cat(o)
except:
return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L())
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics")
self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property
def metrics(self): return self._metrics
@metrics.setter
def metrics(self,v): self._metrics = L(v).map(mk_metric)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(False): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(True ): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
try:
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
#self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
if dl is None: dl = self.dbunch.dls[ds_idx]
with self.added_cbs(cbs), self.no_logging(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, with_decoded=False, act=None):
#self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
preds = act(torch.cat(cb.preds))
res = (preds, detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_decoded: res = res + (getattr(self.loss_func, 'decodes', noop)(preds),)
if with_input: res = (tuple(_try_concat(o) for o in zip(*cb.inputs)),) + res
if with_loss: res = res + (torch.cat(cb.losses),)
return res
def predict(self, item, rm_type_tfms=0):
dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms)
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
if dl is None: dl = self.dbunch.dls[ds_idx]
b = dl.one_batch()
_,_,preds = self.get_preds(dl=[b], with_decoded=True)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
def show_training_loop(self):
loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train',
'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward',
'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train',
'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop',
'**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate',
'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit',
'after_cancel_fit', 'after_fit']
indent = 0
for s in loop:
if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2
elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}')
else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s))
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def no_mbar(self): return replacing_yield(self, 'create_mbar', False)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
         remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner",
         added_cbs="Context manager that temporarily adds `cbs`",
ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
         get_preds="Get the predictions and targets on the `ds_idx`-th dataset or `dl`, optionally `with_input` and `with_loss`",
         predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
         show_results="Show some predictions on the `ds_idx`-th dataset or `dl`",
show_training_loop="Show each step in the training loop",
no_logging="Context manager to temporarily remove `logger`",
no_mbar="Context manager to temporarily prevent the master progress bar from being created",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a default learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` under its snake_case name (for instance `TrainEvalCallback` becomes `learn.train_eval`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`. `metrics` is an optional list of metrics that can be either functions or `Metric`s (see below).
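For instance, a hedged sketch of a custom `splitter` for a model exposing hypothetical `body` and `head` submodules (the attribute names are only illustrative):
###Code
def body_head_splitter(model):
    "Two parameter groups: everything in `model.body` first, the `model.head` parameters last"
    return [list(model.body.parameters()), list(model.head.parameters())]
###Output
_____no_output_____
###Markdown
Training loop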
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
for i in [0,1,3]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
test_close(end[2]-init[2], -0.05 * torch.ones_like(end[2]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
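A small hedged sketch of the `device` argument (the file name is arbitrary; the file ends up under `self.path/self.model_dir` and is cleaned up with the other test files below):
###Code
learn = synth_learner()
learn.save('tmp_device')
#`device` controls where the parameters are mapped when loading; an int selects a GPU index
learn = learn.load('tmp_device', device='cpu')
###Output
_____no_output_____
###Markdown
Below, a full round-trip through `save`/`load` checks that both the model parameters and the optimizer state are restored.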
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`) The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or not. The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss
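A hedged sketch of a small callback reading a few of these attributes (the class name is illustrative and it is not part of the exported module; it can be passed like any other callback through `cbs=`):
###Code
class IterReportCallback(Callback):
    "Print a short progress line at the end of every training batch"
    def after_batch(self):
        if not self.training: return
        print(f"epoch {self.epoch+1}/{self.n_epoch} iter {self.iter+1}/{self.n_iter} "
              f"loss {self.loss:.4f} ({self.pct_train:.0%} of training done)")
###Output
_____no_output_____
###Markdown
Control flow testing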
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
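For example, a hedged sketch of a `Metric` whose value cannot be computed as a per-batch average (the class name is illustrative): the largest absolute error seen over the whole validation set.
###Code
class MaxAbsError(Metric):
    "Largest absolute error seen across all batches"
    def reset(self): self.max_err = 0.
    def accumulate(self, learn):
        #Store a plain float (on the CPU) rather than a tensor, as advised in the note above
        self.max_err = max(self.max_err, float((learn.pred - learn.yb[0]).abs().max()))
    @property
    def value(self): return self.max_err
###Output
_____no_output_____
###Markdown
Such a `Metric` instance can be passed directly in the `metrics` of a `Learner`: `mk_metric` only wraps plain functions in `AvgMetric` and leaves `Metric` instances untouched.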
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
def _maybe_reduce(val):
if num_distrib()>1:
val = val.clone()
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= num_distrib()
return val
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(_maybe_reduce(self.func(learn.pred, *learn.yb)))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(_maybe_reduce(learn.loss.mean()))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean()), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.iters,self.losses,self.values = [],[],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets[1:].map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
self.iters.append(self.smooth_loss.count)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.smooth_loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5, with_valid=True):
plt.plot(self.losses[skip_start:], label='train')
if with_valid:
plt.plot(self.iters, L(self.values).itemgot(1), label='valid')
plt.legend()
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
         after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed by setting `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
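A small hedged usage sketch of what the `Recorder` stores during a short training run (reusing `synth_learner` from above):
###Code
learn = synth_learner(n_train=5)
with learn.no_logging(): learn.fit(2)
#Column names and the values recorded for the last epoch
learn.recorder.metric_names, learn.recorder.values[-1]
###Output
_____no_output_____
###Markdown
At the end of each epoch the same log (with the epoch index and, by default, the elapsed time) is also passed to `learn.logger`, which is `print` unless replaced.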
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
if not self.training: test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
mean = tensor(self.losses).mean()
self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,2.975327253341675,2.8497583866119385,2.8497583866119385,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds works with ds not evenly divisible by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
self.opt.clear_state()
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,11.501923561096191,11.670934677124023,00:00]
(#4) [0,9.419240951538086,9.610092163085938,00:00]
(#4) [0,7.74031400680542,7.916210174560547,00:00]
###Markdown
Exporting a `Learner`
###Code
#export
@patch
def export(self:Learner, fname='export.pkl'):
"Export the content of `self` without the items and the optimizer state for inference"
if rank_distrib(): return # don't export if slave proc
old_dbunch = self.dbunch
    self.dbunch = self.dbunch.new_empty()
state = self.opt.state_dict()
self.opt = None
with warnings.catch_warnings():
#To avoid the warning that come from PyTorch about model not being checked
warnings.simplefilter("ignore")
torch.save(self, open(self.path/fname, 'wb'))
self.create_opt()
self.opt.load_state_dict(state)
self.dbunch = old_dbunch
###Output
_____no_output_____
###Markdown
TTA
###Code
#export
@patch
def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.5):
"Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation"
if dl is None: dl = self.dbunch.dls[ds_idx]
if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms)
with dl.dataset.set_split_idx(0), self.no_mbar():
self.progress.mbar = master_bar(list(range(n+1)))
aug_preds = []
for i in self.progress.mbar:
self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch
aug_preds.append(self.get_preds(dl=dl)[0][None])
aug_preds = torch.cat(aug_preds).mean(0)
self.epoch = n
preds,targs = self.get_preds(dl=dl)
preds = torch.lerp(aug_preds, preds, beta)
return preds,targs
###Output
_____no_output_____
###Markdown
In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset.
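A hedged numerical illustration of that combination (plain tensor arithmetic, no `Learner` involved; the shapes are arbitrary):
###Code
aug_preds = torch.stack([torch.randn(8,3) for _ in range(4)]).mean(0) #stands for the average of the n augmented passes
preds = torch.randn(8,3)                                              #stands for the regular predictions
beta = 0.5
#torch.lerp(a, b, beta) is (1-beta)*a + beta*b, which is how `tta` blends the two
test_close(torch.lerp(aug_preds, preds, beta), (1-beta)*aug_preds + beta*preds)
###Output
_____no_output_____
###Markdown
Export -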
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
Converted xse_resnext.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn = 'learn',None
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
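For instance, a hedged sketch of a callback using one of these events: gradient clipping in `after_backward` (the class name and the clip value are only illustrative):
###Code
class ClipGradCallback(Callback):
    "Clip the gradient norm of all model parameters before the optimizer step"
    def __init__(self, clip=0.1): self.clip = clip
    def after_backward(self): nn.utils.clip_grad_norm_(self.model.parameters(), self.clip)
###Output
_____no_output_____
###Markdown
Any event a callback does not define simply falls back to `noop` when called.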
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that this shortcut only works for reading the value of an attribute; if you want to change it, you have to access it explicitly with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False): store_attr(self, "with_input,with_loss")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs=[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
    CancelFitException="Interrupts training and go to `after_fit`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelBatchException="Skip the rest of this batch and go to `after_batch`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
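###Markdown
As a minimal illustrative sketch (not part of the original notebook), a callback can raise one of these exceptions from any event it implements; the training loop defined in `Learner` below catches it and jumps to the corresponding `after_cancel_*` event. The `ShortCircuitCallback` name and the `thresh` value are arbitrary choices for the example.
###Code
class ShortCircuitCallback(Callback):
    "Stop training as soon as the training loss falls below `thresh`"
    def __init__(self, thresh=0.1): self.thresh = thresh
    def after_batch(self):
        #`self.training` and `self.loss` are read from the `Learner` through `Callback.__getattr__`
        if self.training and self.loss < self.thresh: raise CancelFitException()
    def after_cancel_fit(self): print("Training interrupted early")
###Output
_____no_output_____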
###Markdown
You can detect that one of those exceptions occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
state = torch.load(file)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
    elif with_opt: warn("Saved file doesn't contain an optimizer state.")
x = [(tensor([1]),),(tensor([2]),),(tensor([3]),)]
y = [(tensor([1]),tensor([1])),(tensor([2]),tensor([2])),(tensor([3]),tensor([3]))]
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=SGD, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn")
self.training,self.logger,self.opt,self.cbs = False,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.metrics = L(metrics).map(mk_metric)
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(True ): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(False): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
try:
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
self.dl = self.dbunch.dls[ds_idx] if dl is None else dl
with self.added_cbs(cbs), self.no_logging():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, decoded=False, act=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
preds = act(torch.cat(cb.preds))
        if decoded: preds = getattr(self.loss_func, 'decodes', noop)(preds)
res = (preds, detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_input: res = (tuple(torch.cat(o) for o in zip(*cb.inputs)),) + res
if with_loss: res = res + (torch.cat(cb.losses),)
return res
def predict(self, item):
dl = test_dl(self.dbunch, [item])
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
dl = self.dbunch.dls[ds_idx] if dl is None else dl
b = dl.one_batch()
preds,_ = self.get_preds(dl=[b])
preds = getattr(self.loss_func, "decodes", noop)(preds)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
def show_training_loop(self):
loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train',
'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward',
'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train',
'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop',
'**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate',
'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit',
'after_cancel_fit', 'after_fit']
indent = 0
for s in loop:
if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2
elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}')
else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s))
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
#TODO: if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
         remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner",
         added_cbs="Context manager that temporarily adds `cbs`",
ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
         get_preds="Get the predictions and targets on the `ds_idx`-th dataset of `self.dbunch` or on `dl`, optionally `with_input` and `with_loss`",
         predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
         show_results="Show some predictions on the `ds_idx`-th dataset of `self.dbunch` or on `dl`",
show_training_loop="Show each step in the training loop",
no_logging="Context manager to temporarily remove `logger`",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
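###Markdown
A minimal end-to-end sketch of direct `Learner` usage (not part of the original notebook), reusing the synthetic regression helpers (`synth_dbunch`, `RegModel`) from the top of this notebook; the constructor arguments are detailed just below.
###Code
dbunch = synth_dbunch(n_train=8, n_valid=2)
learn = Learner(dbunch, RegModel(), loss_func=MSELossFlat(), lr=1e-2)
learn.fit(2)  #at this point only `TrainEvalCallback` is in `defaults.callbacks`, so nothing is logged yet
###Output
_____no_output_____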
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (with camel case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`.`metrics` is an optional list of metrics that can be either functions or `Metric`s (see below). Training loop
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean()), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
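###Markdown
As a sketch (not from the original notebook) of a metric that cannot be expressed as a batch average, here is a hypothetical `MaxError` metric implementing the `Metric` API directly, with a small accumulation check in the style of the `AvgMetric` test above:
###Code
class MaxError(Metric):
    "Track the maximum absolute error seen over the epoch (not expressible as a batch average)"
    def reset(self): self.max_err = 0.
    def accumulate(self, learn):
        #`learn.yb` is always a tuple; compare predictions to the first target
        err = to_detach((learn.pred - learn.yb[0]).abs().max())
        self.max_err = max(self.max_err, err.item())
    @property
    def value(self): return self.max_err
tst = MaxError()
tst.reset()
learn = synth_learner()
t,u = torch.randn(100),torch.randn(100)
for i in range(0,100,25):
    learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
    tst.accumulate(learn)
test_close(tst.value, (t-u).abs().max())
###Output
_____no_output_____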
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.losses,self.values = [],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = L(self.smooth_loss) + (self._train_mets if self.training else self._valid_mets)
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets.map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5): plt.plot(self.losses[skip_start:])
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
res = tensor(self.losses).mean()
self.log += [res, res] if self.train_metrics else [res]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,7.155638217926025,8.358983993530273,8.358984231948853,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds works with ds not evenly divisible by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
def test_dl(dbunch, test_items):
"Create a test dataloader from `test_items` using validation transforms of `dbunch`"
test_ds = test_set(dbunch.valid_ds, test_items) if isinstance(dbunch.valid_ds, DataSource) else test_items
return dbunch.valid_dl.new(test_ds)
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,16.528823852539062,13.021082878112793,00:00]
(#4) [0,13.5247163772583,10.613409042358398,00:00]
(#4) [0,11.02744197845459,8.654027938842773,00:00]
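###Markdown
A sketch of the usual transfer-learning recipe with these methods (not part of the original notebook), reusing the `_TstModel`/`_splitter`/`_PutGrad` test helpers defined in the hidden cells above: train with earlier parameter groups frozen first, then unfreeze and fine-tune, typically at a lower learning rate.
###Code
learn = synth_learner(n_train=5, cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()         #only the last parameter group (`[m.a, m.b]` here) is trained, plus BatchNorm since `train_bn=True`
learn.fit(1)
learn.unfreeze()       #all parameter groups are trained
learn.fit(1, lr=1e-3)  #the lower learning rate is illustrative
###Output
_____no_output_____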
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
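###Markdown
A quick sketch (not in the original notebook) of what this synthetic data looks like: each batch is a pair of 1-D tensors holding the inputs and the noisy targets `a*x + b`.
###Code
dbunch = synth_dbunch(bs=4, n_train=2, n_valid=1)
xb,yb = dbunch.one_batch()
test_eq(xb.shape, yb.shape)  #inputs and targets are both 1-D tensors of `bs` elements
###Output
_____no_output_____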
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn = 'learn',None
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that this shortcut only works for getting the value of an attribute; if you want to change it, you have to access it explicitly with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False): store_attr(self, "with_input,with_loss")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs=[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow

It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads, for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with an early-stopping strategy, we want to be able to completely interrupt the training loop.

This is made possible by raising specific exceptions that the training loop will look for (and properly catch); a minimal sketch of a callback using one of them follows the definitions below.
###Code
#export
_ex_docs = dict(
    CancelFitException="Interrupt training and go to `after_fit`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelBatchException="Skip the rest of this batch and go to `after_batch`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
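###Markdown
As a concrete sketch (not part of the library, and only illustrative), a gradient-accumulation-style callback can raise `CancelBatchException` in `after_backward`: the optimizer step and the zeroing of the gradients are then skipped, so the gradients keep accumulating across batches.
###Code
class GradAccumSketch(Callback):
    "Toy callback: only let the optimizer step through every `n_acc`-th batch"
    def __init__(self, n_acc=2): self.n_acc = n_acc
    def after_backward(self):
        #Skipping the rest of the batch leaves the gradients in place (no step, no zero_grad)
        if (self.iter+1) % self.n_acc != 0: raise CancelBatchException()
###Output
_____no_output_____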
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:

- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`
- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
    state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
    elif with_opt: warn("Saved file doesn't contain an optimizer state.")
x = [(tensor([1]),),(tensor([2]),),(tensor([3]),)]
y = [(tensor([1]),tensor([1])),(tensor([2]),tensor([2])),(tensor([3]),tensor([3]))]
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=SGD, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn")
self.training,self.logger,self.opt,self.cbs = False,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.metrics = L(metrics).map(mk_metric)
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(True ): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(False): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
try:
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
self.dl = self.dbunch.dls[ds_idx] if dl is None else dl
with self.added_cbs(cbs), self.no_logging():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, decoded=False, act=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
preds = act(torch.cat(cb.preds))
        if decoded: preds = getattr(self.loss_func, 'decodes', noop)(preds)
res = (preds, detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_input: res = (tuple(torch.cat(o) for o in zip(*cb.inputs)),) + res
if with_loss: res = res + (torch.cat(cb.losses),)
return res
def predict(self, item):
dl = test_dl(self.dbunch, [item])
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
dl = self.dbunch.dls[ds_idx] if dl is None else dl
b = dl.one_batch()
preds,_ = self.get_preds(dl=[b])
preds = getattr(self.loss_func, "decodes", noop)(preds)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
#TODO: if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
         remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner",
         added_cbs="Context manager that temporarily adds `cbs`",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`",
predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`",
no_logging="Context manager to temporarily remove `logger`",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.

`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (using its snake_cased name). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`.

`metrics` is an optional list of metrics that can be either functions or `Metric`s (see below).

Training loop
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
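Since `save_model`/`load_model` delegate to `torch.save`/`torch.load`, an in-memory buffer works too; a quick sketch (not from the original notebook):
###Code
import io
tmp_learn = synth_learner()
buf = io.BytesIO()
save_model(buf, tmp_learn.model, opt=None)   #no optimizer state saved here
buf.seek(0)
load_model(buf, tmp_learn.model, opt=None)   #restore the weights from the buffer
###Output
_____no_output_____
###Markdown
The test below exercises the more common case of saving to and loading from a file under `self.path/self.model_dir`.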
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:

- `model`: the model used for training/validation
- `data`: the underlying `DataBunch`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current `DataLoader` used for iteration
- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.
- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)

The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:

- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0. to 1., the percentage of training iterations completed
- `training`: flag to indicate if we're in training mode or not

The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:

- `smooth_loss`: an exponentially-averaged version of the training loss

Control flow testing
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`; otherwise you'll need to implement the following methods.

> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
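As an illustration (a sketch, not part of the library), here is a metric that cannot be written as a per-batch average: the maximum absolute error over the whole validation set (it assumes a single target in `learn.yb`).
###Code
class MaxAbsError(Metric):
    "Toy `Metric`: maximum absolute error seen over the validation set"
    def reset(self): self.max_err = None
    def accumulate(self, learn):
        batch_max = to_detach((learn.pred - learn.yb[0]).abs().max())   #assumes a single target
        self.max_err = batch_max if self.max_err is None else torch.max(self.max_err, batch_max)
    @property
    def value(self): return self.max_err
###Output
_____no_output_____
###Markdown
An instance could then be passed in the `metrics` of a `Learner` (e.g. `synth_learner(metrics=MaxAbsError())`), since `mk_metric` leaves `Metric` instances untouched.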
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean()), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
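###Markdown
To relate `AvgSmoothLoss` to the usual formula: with smoothing parameter $\beta$, `accumulate` keeps the exponentially weighted average $v_t = \beta\, v_{t-1} + (1-\beta)\,\ell_t$ of the batch losses $\ell_t$ (this is what the `torch.lerp` call computes), and `value` returns the debiased estimate $v_t/(1-\beta^t)$ so that the zero initialization doesn't drag down the first values; the test just above checks exactly this computation.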
###Markdown
Recorder -
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.losses,self.values = [],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = L(self.smooth_loss) + (self._train_mets if self.training else self._valid_mets)
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets.map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5): plt.plot(self.losses[skip_start:])
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
         after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
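#By default only validation metrics are recorded; a quick sketch (mirroring the hidden
#test further below) of turning on training metrics as well, which prefixes the metric
#columns with train_/valid_:
learn = synth_learner(n_train=5, metrics=tst_metric)
learn.recorder.train_metrics = True
learn.fit(1)
learn.recorder.metric_names   #now includes train_/valid_ prefixed columns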
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
res = tensor(self.losses).mean()
self.log += [res, res] if self.train_metrics else [res]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,11.316393852233887,13.917574882507324,13.917574882507324,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds work with ds not evenly dividble by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements (listed here in reverse order):

- the prediction from the model, potentially passed through the activation of the loss function (if it has one)
- the decoded prediction, using the potential `decodes` method from it
- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,10.63065242767334,11.039981842041016,00:00]
(#4) [0,8.829426765441895,9.200983047485352,00:00]
(#4) [0,7.327431678771973,7.672776699066162,00:00]
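###Markdown
A minimal usage sketch (reusing the `_TstModel` and `_splitter` defined for the hidden tests above, so purely illustrative): freeze everything but the last parameter group, train, then unfreeze and train the whole model.
###Code
learn = synth_learner(n_train=5, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()     #only the last parameter group (plus BatchNorm, since train_bn=True) is trained
learn.fit(1)
learn.unfreeze()   #all parameter groups are trained
learn.fit(1)
###Output
_____no_output_____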
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_torch_core.ipynb.
Converted 02_script.ipynb.
Converted 03_dataloader.ipynb.
Converted 04_transform.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_data_block.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn = 'learn',None
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False): store_attr(self, "with_input,with_loss")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs=[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow Sometimes we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads, for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
    CancelFitException="Interrupt training and go to `after_fit`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelBatchException="Skip the rest of this batch and go to `after_batch`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
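# A minimal sketch of how a callback would use these exceptions: raising one of them
# from an event handler makes the training loop (defined in `Learner` below) skip the
# corresponding part of the loop. Here, training would be interrupted entirely after
# the very first batch once this callback is passed to `Learner.fit`.
class StopAfterFirstBatchCallback(Callback):
    def after_batch(self): raise CancelFitException()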
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
elif with_opt: warn("Saved filed doesn't contain an optimizer state.")
x = [(tensor([1]),),(tensor([2]),),(tensor([3]),)]
y = [(tensor([1]),tensor([1])),(tensor([2]),tensor([2])),(tensor([3]),tensor([3]))]
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=SGD, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn")
self.training,self.logger,self.opt,self.cbs = False,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.metrics = L(metrics).map(mk_metric)
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(True ): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(False): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
try:
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
self.dl = self.dbunch.dls[ds_idx] if dl is None else dl
with self.added_cbs(cbs), self.no_logging():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, decoded=False, act=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
preds = act(torch.cat(cb.preds))
if decoded: preds = getattr(self.loss_func, 'decodes', noop)(preds)
res = (preds, detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_input: res = (tuple(torch.cat(o) for o in zip(*cb.inputs)),) + res
if with_loss: res = res + (torch.cat(cb.losses),)
return res
def predict(self, item):
dl = test_dl(self.dbunch, [item])
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
dl = self.dbunch.dls[ds_idx] if dl is None else dl
b = dl.one_batch()
preds,_ = self.get_preds(dl=[b])
preds = getattr(self.loss_func, "decodes", noop)(preds)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
#TODO: if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
remove_cb="Add `cb` from the list of `Callback` and deregister `self` as their learner",
added_cbs="Context manage that temporarily adds `cbs`",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`",
predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`",
no_logging="Context manager to temporarily remove `logger`",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (under its snake-cased name, e.g. `learn.train_eval` for `TrainEvalCallback`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`.`metrics` is an optional list of metrics, which can be either functions or `Metric`s (see below). Training loop
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, true_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute the predictions, loss and gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
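# A minimal sketch of the "buffer" option for serialization: since the lower-level
# `save_model`/`load_model` helpers defined above pass `file` straight to
# `torch.save`/`torch.load`, a file-like object such as an in-memory `io.BytesIO`
# buffer can be used instead of a path on disk.
import io
learn = synth_learner()
buf = io.BytesIO()
save_model(buf, learn.model, opt=None, with_opt=False)
buf.seek(0)
load_model(buf, learn.model, opt=None, with_opt=False)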
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing
###Code
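# A minimal sketch of a callback reading a few of the attributes listed above
# (all reachable directly thanks to `Callback.__getattr__` delegating to the learner).
class ProgressPrintCallback(Callback):
    def after_batch(self):
        if self.training:
            print(f"epoch {self.epoch}, iter {self.iter}: "
                  f"{100*self.pct_train:.0f}% of training done, loss {float(self.loss):.4f}")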
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
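# A minimal sketch of a custom `Metric` that can't be computed as a plain average
# over batches: it tracks the maximum absolute error seen across the whole
# validation set.
class MaxAbsError(Metric):
    def reset(self): self.max_err = 0.
    def accumulate(self, learn):
        self.max_err = max(self.max_err, float(to_detach((learn.pred - learn.y).abs().max())))
    @property
    def value(self): return self.max_err

tst = MaxAbsError()
tst.reset()
learn = synth_learner()
learn.pred,learn.yb = tensor([1.,2.]),(tensor([1.5,0.]),)
tst.accumulate(learn)
test_close(tst.value, 2.)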
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean()), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.losses,self.values = [],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = L(self.smooth_loss) + (self._train_mets if self.training else self._valid_mets)
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets.map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5): plt.plot(self.losses[skip_start:])
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed by setting `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
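# A minimal sketch of reading what the `Recorder` stored after that fit: one row of
# values per epoch (train loss, valid loss, then the metrics), plus the column names.
print(learn.recorder.metric_names)
print(learn.recorder.values[-1])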
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
res = tensor(self.losses).mean()
self.log += [res, res] if self.train_metrics else [res]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,11.315444946289062,11.461511611938477,11.461511135101318,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds work with ds not evenly dividble by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, ())
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: assert torch.allclose(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): assert torch.allclose(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: assert torch.allclose(end[i],init[i])
#bn was trained
for i in [2,3]: assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
###Output
(#4) [0,23.467575073242188,29.819826126098633,00:00]
(#4) [0,19.802383422851562,25.183053970336914,00:00]
(#4) [0,16.64244842529297,21.275495529174805,00:00]
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_torch_core.ipynb.
Converted 02_script.ipynb.
Converted 03_dataloader.ipynb.
Converted 04_transform.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_data_block.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 22_vision_learner.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
This cell doesn't have an export destination and was ignored:
e
Converted 50_data_block_examples.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export core
_camel_re1 = re.compile('(.)([A-Z][a-z]+)')
_camel_re2 = re.compile('([a-z0-9])([A-Z])')
def camel2snake(name):
s1 = re.sub(_camel_re1, r'\1_\2', name)
return re.sub(_camel_re2, r'\1_\2', s1).lower()
test_eq(camel2snake('ClassAreCamel'), 'class_are_camel')
#export
def class2attr(self, cls_name):
return camel2snake(re.sub(rf'{cls_name}$', '', self.__class__.__name__) or cls_name.lower())
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def default(self): return self.__dict__.get('learn')
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_loss=False): self.with_loss = with_loss
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow

It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop.

This is made possible by raising specific exceptions the training loop will look for (and properly catch). A short sketch using one of these exceptions follows the list of `after_cancel_*` events below.
###Code
#export
_ex_docs = dict(
CancelFitException="Skip the rest of this batch and go to `after_batch`",
CancelEpochException="Skip the rest of the training part of the epoch and go to `after_train`",
CancelTrainException="Skip the rest of the validation part of the epoch and go to `after_validate`",
CancelValidException="Skip the rest of this epoch and go to `after_epoch`",
CancelBatchException="Interrupts training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:

- `after_cancel_batch`: reached immediately after a `CancelBatchException`, before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException`, before proceeding to `after_train`
- `after_cancel_validate`: reached immediately after a `CancelValidException`, before proceeding to `after_validate`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException`, before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException`, before proceeding to `after_fit`
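
As an illustration, here is a sketch of an early-stopping-style callback that raises `CancelFitException` and reacts to `after_cancel_fit`. The `StopAtLossCallback` name and its `thresh` argument are made up for this example; it is not part of the library.
###Code
#Illustrative sketch: interrupt training as soon as the training loss falls below a threshold
class StopAtLossCallback(Callback):
    "Raise `CancelFitException` once the training loss gets below `thresh`"
    def __init__(self, thresh=0.05): self.thresh = thresh
    def after_batch(self):
        if self.training and self.loss.item() < self.thresh: raise CancelFitException()
    def after_cancel_fit(self): print(f"Stopped early: loss below {self.thresh}")
###Output
_____no_output_____
###Markdown
The exception propagates out of the batch and epoch loops and is caught in `Learner.fit` below, which then triggers `after_cancel_fit` followed by `after_fit` (the control flow tests later in this notebook check exactly this behavior).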
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_inference = [event.begin_fit, event.begin_epoch, event.begin_validate]
_after_inference = [event.after_validate, event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
    if isinstance(device, int): device = torch.device('cuda', device)
    elif device is None: device = 'cpu'
    state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
    elif with_opt: warn("Saved file doesn't contain an optimizer state.")
x = [(tensor([1]),),(tensor([2]),),(tensor([3]),)]
y = [(tensor([1]),tensor([1])),(tensor([2]),tensor([2])),(tensor([3]),tensor([3]))]
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=SGD, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn")
self.training,self.logger,self.opt,self.cbs = False,print,None,L()
#TODO: infer loss_func from data
self.loss_func = CrossEntropyLossFlat() if loss_func is None else loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.metrics = L(metrics).map(mk_metric)
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(True ): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(False): p['force_train'] = True
def _split(self, b):
        i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
return b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
try:
self.iter,(self.xb,self.yb) = i,self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self):
try:
self.dl = self.dbunch.valid_dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, dl=None, cbs=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
self.dl = dl or self.dbunch.valid_dl
with self.added_cbs(cbs), self.no_logging():
self(_before_inference)
self.all_batches()
self(_after_inference)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, with_loss=False):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
self.dl = self.dbunch.dls[ds_idx]
cb = GatherPredsCallback(with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(_before_inference)
self.all_batches()
self(_after_inference)
if with_loss: return torch.cat(cb.preds),torch.cat(cb.targets),torch.cat(cb.losses)
return torch.cat(cb.preds),torch.cat(cb.targets)
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
#TODO: if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
remove_cb="Add `cb` from the list of `Callback` and deregister `self` as their learner",
added_cbs="Context manage that temporarily adds `cbs`",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset, optionally `with_loss`",
no_logging="Context manager to temporarily remove `logger`",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.

`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (under the snake-cased version of its class name, e.g. `learn.train_eval` for `TrainEvalCallback`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`.

`metrics` is an optional list of metrics that can be either functions or `Metric`s (see below).

Training loop
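
Before testing the training loop, here is an illustrative sketch of what a custom `splitter` can look like. The `_BodyHeadModel` class and the `body_head_splitter` function are made up for this example; they are not part of the library.
###Code
#Illustrative sketch: a custom `splitter` returning two parameter groups
import torch.nn as nn

class _BodyHeadModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.body,self.head = nn.Linear(4,4),nn.Linear(4,2)
    def forward(self, x): return self.head(self.body(x))

def body_head_splitter(m):
    "Put the body and the head of `m` in two separate parameter groups"
    return [list(m.body.parameters()), list(m.head.parameters())]

test_eq(len(body_head_splitter(_BodyHeadModel())), 2)
###Output
_____no_output_____
###Markdown
Passed as `Learner(..., splitter=body_head_splitter)`, such a function is what later allows `freeze_to` and per-group hyper-parameters to treat the body and the head differently (the default `trainable_params` puts everything in a single group).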
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, true_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute the predictions, the loss, the gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.xb[0] + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.xb[0] * (self.pred.data - self.yb[0])).mean()
self.grad_b = 2 * (self.pred.data - self.yb[0]).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
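
As a small sketch using the `synth_learner` helper defined above ('tmp_device_demo' is just a throw-away file name for this example), saving then reloading the weights onto the CPU looks like this:
###Code
#Sketch: save a synthetic learner, then reload its state forcing the tensors onto the CPU
tmp_learn = synth_learner()
tmp_learn.fit(1)
tmp_learn.save('tmp_device_demo')
tmp_learn = tmp_learn.load('tmp_device_demo', device='cpu')
###Output
_____no_output_____
###Markdown
Here `device='cpu'` asks `load_model` to map the saved tensors to the CPU, which matters when the checkpoint was written on a GPU machine.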
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:

- `model`: the model used for training/validation
- `dbunch`: the underlying `DataBunch`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current `DataLoader` used for iteration
- `xb`: last input drawn from `self.dl` (potentially modified by callbacks)
- `yb`: last target drawn from `self.dl` (potentially modified by callbacks)
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)

The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:

- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0. to 1., the percentage of training iterations completed
- `training`: flag to indicate if we're in training mode or not

The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:

- `smooth_loss`: an exponentially-averaged version of the training loss

Control flow testing
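
Before the control-flow tests, here is a tiny illustrative callback relying on a few of the attributes listed above. The `PrintProgressCallback` name is made up for this example; it is not part of the library.
###Code
#Illustrative sketch: print a short progress summary at the end of each training epoch
class PrintProgressCallback(Callback):
    "Use `epoch`, `n_epoch`, `iter`, `n_iter`, `pct_train` and `loss` from the `Learner`"
    def after_batch(self):
        if self.training and self.iter == self.n_iter-1:
            print(f"epoch {self.epoch+1}/{self.n_epoch}: {100*self.pct_train:.0f}% of training done, last loss {self.loss.item():.4f}")
###Output
_____no_output_____
###Markdown
All of these attributes are read through the `GetAttr` delegation shown earlier, so the callback itself stays stateless.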
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`; otherwise you'll need to implement the following methods.

> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
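
For instance, here is a sketch of a root-mean-squared-error metric, which can't be obtained by simply averaging per-batch values. The `RMSEMetric` name is made up for this example; it is not the library's implementation.
###Code
#Illustrative sketch: RMSE accumulated as a sum of squared errors plus a count
class RMSEMetric(Metric):
    def reset(self): self.total,self.count = 0.,0
    def accumulate(self, learn):
        bs = find_bs(learn.yb)
        #assumes a single target tensor in `yb`
        self.total += to_detach((learn.pred - learn.yb[0]).pow(2).sum())
        self.count += bs
    @property
    def value(self): return (self.total/self.count).sqrt() if self.count != 0 else None
###Output
_____no_output_____
###Markdown
Averaging per-batch RMSEs (what `AvgMetric` below would do) gives a different number than the RMSE over the whole validation set, which is exactly the situation this API is designed for.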
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean()), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder -
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.losses,self.values = [],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
mets = L(self.smooth_loss) + (self._train_mets if self.training else self._valid_mets)
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets.map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5): plt.plot(self.losses[skip_start:])
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed by setting `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
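
To see what `beta` does, here is a small sketch reproducing the computation of `AvgSmoothLoss` above on a few made-up loss values:
###Code
#Sketch: mirror the exponentially weighted smoothing done by `AvgSmoothLoss`
import torch
beta = 0.98
val,count = torch.tensor(0.),0
for raw_loss in [1.0, 0.9, 0.8, 0.7]:
    count += 1
    val = torch.lerp(torch.tensor(raw_loss), val, beta)  #beta*val + (1-beta)*raw_loss
    smooth = val/(1-beta**count)                         #debiasing, as in `AvgSmoothLoss.value`
###Output
_____no_output_____
###Markdown
With `beta=0.98` the effective averaging window is roughly the last 50 batches, which is why the plotted training loss looks much less noisy than the raw per-batch loss.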
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
res = tensor(self.losses).mean()
self.log += [res, res] if self.train_metrics else [res]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,32.451690673828125,34.610939025878906,34.61093807220459,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
> Warning: If your dataset is unlabelled, the targets will all be 0s.

> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'.
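
As a sketch (not the library's implementation; the `MyL1Loss` name is made up), a custom loss exposing such a `reduction` attribute can be as simple as:
###Code
#Illustrative sketch: a custom loss with a `reduction` attribute, usable with `with_loss=True`
import torch.nn.functional as F

class MyL1Loss():
    def __init__(self, reduction='mean'): self.reduction = reduction
    def __call__(self, out, targ): return F.l1_loss(out, targ, reduction=self.reduction)
###Output
_____no_output_____
###Markdown
Because the `reduction` attribute exists, the `loss_not_reduced` context manager used by `get_preds` can temporarily set it to `'none'` to recover the per-item losses.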
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test get_preds works with a ds not evenly divisible by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
learn.dbunch.dls += (dl,)
preds,targs = learn.get_preds(ds_idx=2)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(ds_idx=2, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: assert torch.allclose(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): assert torch.allclose(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: assert torch.allclose(end[i],init[i])
#bn was trained
for i in [2,3]: assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
_____no_output_____
###Markdown
Learner

> Basic class for handling the training loop

We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn = 'learn',None
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions: looping through the data, we

- compute the output of the model from the input
- calculate a loss between this output and the desired target
- compute the gradients of this loss with respect to all the model parameters
- update the parameters accordingly
- zero all the gradients

Any tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:

- `begin_fit`: called before doing anything, ideal for initial setup.
- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.
- `begin_train`: called at the beginning of the training part of an epoch.
- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes into the model (for instance, changing the input with techniques like mixup).
- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.
- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training, for instance).
- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to make any change to the gradients before said update (gradient clipping, for instance).
- `after_step`: called after the step and before the gradients are zeroed.
- `after_batch`: called at the end of a batch, for any clean-up before the next one.
- `after_train`: called at the end of the training phase of an epoch.
- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.
- `after_validate`: called at the end of the validation part of an epoch.
- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.
- `after_fit`: called at the end of training, for final clean-up.
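
For instance, here is a minimal sketch of a callback using `begin_batch` for hyper-parameter scheduling, in the form of a linear learning-rate warm-up. The `LinearWarmupCallback` name and its arguments are made up for this example; it is not part of the library.
###Code
#Illustrative sketch: linear lr warm-up over the first `warm` fraction of training
class LinearWarmupCallback(Callback):
    "Anneal the learning rate from `start_lr` to `max_lr` during the first `warm` part of training"
    def __init__(self, start_lr=1e-5, max_lr=1e-2, warm=0.1):
        self.start_lr,self.max_lr,self.warm = start_lr,max_lr,warm
    def begin_batch(self):
        if not self.training or self.pct_train >= self.warm: return
        pct = self.pct_train/self.warm
        self.opt.set_hypers(lr=self.start_lr + pct*(self.max_lr-self.start_lr))
###Output
_____no_output_____
###Markdown
`pct_train` is maintained by the `TrainEvalCallback` below and `set_hypers` is a method of the optimizer, so this callback only needs to read and write state on the `Learner`.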
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut that lets us write `self.bla` instead of `self.learn.bla` for any `bla` attribute we seek.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that this only works to get the value of the attribute; if you want to change it, you have to access it manually with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False): store_attr(self, "with_input,with_loss")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs=[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss:
bs = find_bs(self.yb)
loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
self.losses.append(to_detach(loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow

It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop.

This is made possible by raising specific exceptions the training loop will look for (and properly catch). A small sketch of gradient accumulation using one of these exceptions follows the list of `after_cancel_*` events below.
###Code
#export
_ex_docs = dict(
CancelFitException="Skip the rest of this batch and go to `after_batch`",
CancelEpochException="Skip the rest of the training part of the epoch and go to `after_train`",
CancelTrainException="Skip the rest of the validation part of the epoch and go to `after_validate`",
CancelValidException="Skip the rest of this epoch and go to `after_epoch`",
CancelBatchException="Interrupts training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:

- `after_cancel_batch`: reached immediately after a `CancelBatchException`, before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException`, before proceeding to `after_train`
- `after_cancel_validate`: reached immediately after a `CancelValidException`, before proceeding to `after_validate`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException`, before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException`, before proceeding to `after_fit`
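
As an illustration, here is a naive sketch (not the library's implementation of gradient accumulation; the `GradAccumCallback` name and `n_acc` argument are made up) that uses `CancelBatchException` to skip the optimizer step until enough gradients have been accumulated:
###Code
#Illustrative sketch: accumulate gradients over `n_acc` batches before stepping
class GradAccumCallback(Callback):
    "Skip the step/zero_grad with `CancelBatchException` until `n_acc` backward passes are done"
    def __init__(self, n_acc=4): self.n_acc = n_acc
    def begin_fit(self): self.acc_count = 0
    def after_backward(self):
        self.acc_count += 1
        if self.acc_count % self.n_acc != 0: raise CancelBatchException()
###Output
_____no_output_____
###Markdown
Raising the exception in `after_backward` means `after_cancel_batch` then `after_batch` still run, but `opt.step` and `opt.zero_grad` are skipped, so the gradients keep accumulating (a real implementation would also scale the loss by `1/n_acc`).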
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
elif device is None: device = 'cpu'
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
    elif with_opt: warn("Saved file doesn't contain an optimizer state.")
# export
def _try_concat(o):
try:
return torch.cat(o)
except:
return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L())
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics")
self.training,self.logger,self.opt,self.cbs = False,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
@property
def metrics(self): return self._metrics
@metrics.setter
def metrics(self,v): self._metrics = L(v).map(mk_metric)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(True ): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(False): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
try:
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
if dl is None: dl = self.dbunch.dls[ds_idx]
with self.added_cbs(cbs), self.no_logging():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, with_decoded=False, act=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
preds = act(torch.cat(cb.preds))
res = (preds, detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_decoded: res = res + (getattr(self.loss_func, 'decodes', noop)(preds),)
if with_input: res = (tuple(_try_concat(o) for o in zip(*cb.inputs)),) + res
if with_loss: res = res + (torch.cat(cb.losses),)
return res
def predict(self, item, rm_type_tfms=0):
dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms)
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
if dl is None: dl = self.dbunch.dls[ds_idx]
b = dl.one_batch()
_,_,preds = self.get_preds(dl=[b], with_decoded=True)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
def show_training_loop(self):
loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train',
'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward',
'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train',
'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop',
'**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate',
'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit',
'after_cancel_fit', 'after_fit']
indent = 0
for s in loop:
if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2
elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}')
else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s))
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
         remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner",
         added_cbs="Context manager that temporarily adds `cbs`",
ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`",
predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`",
show_training_loop="Show each step in the training loop",
no_logging="Context manager to temporarily remove `logger`",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (under a snake_case version of its class name). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`.`metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below). Training loop
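As an illustration of the `splitter` argument described above, a custom splitter for a model exposing `body` and `head` sub-modules could be sketched as below; the `body`/`head` attribute names (and the commented-out usage) are assumptions made for this example, not part of the API:

```python
def body_head_splitter(model):
    "Sketch: return two parameter groups so `body` and `head` can get different hyper-parameters"
    return [list(model.body.parameters()), list(model.head.parameters())]

# hypothetical usage, assuming `dbunch` and `model` are already built:
# learn = Learner(dbunch, model, loss_func=MSELossFlat(), splitter=body_head_splitter)
```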
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
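As a minimal sketch of the buffer case, the module-level `save_model`/`load_model` helpers defined above can round-trip through an in-memory `io.BytesIO` object (assuming a `learn` object whose optimizer has already been created is in scope):

```python
import io

buf = io.BytesIO()
save_model(buf, learn.model, getattr(learn, 'opt', None))  # serialize model (and opt state) into memory
buf.seek(0)                                                # rewind before reading it back
load_model(buf, learn.model, learn.opt, device='cpu')      # load on the CPU whatever device it was saved from
```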
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing
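As an illustration (not part of the library), here is a sketch of a callback that reads a few of the attributes above to log progress every `freq` training batches; it relies on `smooth_loss` being maintained by the `Recorder` defined further down in this notebook:

```python
class ProgressPrinter(Callback):
    "Sketch: print the smoothed loss and the fraction of training done every `freq` training batches"
    run_after = Recorder                 # so `smooth_loss` is already updated when we read it
    def __init__(self, freq=10): self.freq = freq
    def after_batch(self):
        if not self.training or self.iter % self.freq: return
        print(f"epoch {self.epoch}, iter {self.iter}/{self.n_iter}, "
              f"{self.pct_train:.1%} of training, smooth loss {self.smooth_loss:.4f}")
```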
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
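For example, here is a sketch (not part of this notebook) of a root-mean-squared-error metric written against the `Metric` API above; it accumulates sums rather than per-batch averages, so the result stays exact even with batches of different sizes:

```python
class RMSE(Metric):
    "Sketch of a metric that cannot be computed as a simple average of per-batch values"
    def reset(self): self.sum_sq,self.count = 0.,0
    def accumulate(self, learn):
        err = (learn.pred - learn.yb[0]).pow(2)
        self.sum_sq += err.sum().item()   # `.item()` keeps the state on the CPU as a plain float
        self.count += err.numel()
    @property
    def value(self): return (self.sum_sq/self.count)**0.5 if self.count else None
```

It could then be passed like any other metric, e.g. `synth_learner(metrics=RMSE())`.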
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
def _maybe_reduce(val):
if num_distrib()>1:
val = val.clone()
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= num_distrib()
return val
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(_maybe_reduce(self.func(learn.pred, *learn.yb)))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(_maybe_reduce(learn.loss.mean()))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean()), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.iters,self.losses,self.values = [],[],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets[1:].map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
self.iters.append(self.smooth_loss.count)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.smooth_loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5, with_valid=True):
plt.plot(self.losses[skip_start:], label='train')
if with_valid:
plt.plot(self.iters, L(self.values).itemgot(1), label='valid')
plt.legend()
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
         after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
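For instance, after a short run the recorded state can be inspected directly; this is only an illustrative sketch using the `synth_learner` helper from this notebook:

```python
learn = synth_learner(n_train=5)
learn.fit(2)
print(learn.recorder.metric_names)        # ['epoch', 'train_loss', 'valid_loss', 'time'] when no metric is passed
print(learn.recorder.values[-1])          # losses (and metrics) recorded for the last epoch
print(len(learn.recorder.losses))         # one smoothed loss per training batch
learn.recorder.plot_loss(skip_start=0)    # plot the training (and validation) losses
```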
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
if not self.training: test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
mean = tensor(self.losses).mean()
self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,10.249631881713867,9.148826599121094,9.148827075958252,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds work with ds not evenly dividble by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,30.35057258605957,27.175193786621094,00:00]
(#4) [0,23.77756690979004,21.27766227722168,00:00]
(#4) [0,18.555871963500977,16.66706085205078,00:00]
###Markdown
Exporting a `Learner`
###Code
#export
@patch
def export(self:Learner, fname='export.pkl'):
"Export the content of `self` without the items and the optimizer state for inference"
if rank_distrib(): return # don't export if slave proc
old_dbunch = self.dbunch
    self.dbunch = self.dbunch.new_empty()
state = self.opt.state_dict()
self.opt = None
with warnings.catch_warnings():
#To avoid the warning that come from PyTorch about model not being checked
warnings.simplefilter("ignore")
torch.save(self, open(self.path/fname, 'wb'))
self.create_opt()
self.opt.load_state_dict(state)
self.dbunch = old_dbunch
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn,run = 'learn',None,True
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
if self.run: getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions: looping through the data, we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
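As a concrete sketch of such a tweak, gradient clipping can be hooked into `after_backward` as mentioned above; this is illustrative only and the `max_norm` default is an arbitrary choice:

```python
class GradClip(Callback):
    "Sketch: clip the gradient norm right after the backward pass, before the optimizer step"
    def __init__(self, max_norm=1.): self.max_norm = max_norm
    def after_backward(self):
        nn.utils.clip_grad_norm_(self.model.parameters(), self.max_norm)
```

Passing `cbs=GradClip()` when creating the `Learner` defined below would be enough to activate it.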
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that it only works to get the value of the attribute; if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
    def begin_fit(self):
        "Set the iter and epoch counters to 0, put the model on the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
#TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors.
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None):
store_attr(self, "with_input,with_loss,save_preds,save_targs")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs = []
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
preds,targs = to_detach(self.pred),to_detach(self.yb)
if self.save_preds is None: self.preds.append(preds)
else: (self.save_preds/str(self.iter)).save_array(preds)
if self.save_targs is None: self.targets.append(targs)
else: (self.save_targs/str(self.iter)).save_array(targs[0])
if self.with_loss:
bs = find_bs(self.yb)
loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
self.losses.append(to_detach(loss))
def after_fit(self):
"Concatenate all recorded tensors"
if self.with_input: self.inputs = detuplify(to_concat(self.inputs))
if not self.save_preds: self.preds = detuplify(to_concat(self.preds))
if not self.save_targs: self.targets = detuplify(to_concat(self.targets))
if self.with_loss: self.losses = to_concat(self.losses)
def all_tensors(self):
res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets]
if self.with_input: res = [self.inputs] + res
if self.with_loss: res.append(self.losses)
return res
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
show_doc(GatherPredsCallback.after_fit)
###Output
_____no_output_____
###Markdown
Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop.This is made possible by raising specific exceptions the training loop will look for (and properly catch).
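For instance, gradient accumulation can be sketched as a callback that lets the backward pass run but raises `CancelBatchException` (defined just below), so the optimizer step and the zeroing of the gradients are skipped until `n_acc` samples have been seen; the class and parameter names are illustrative assumptions:

```python
class GradAccumSketch(Callback):
    "Sketch: skip the optimizer step until `n_acc` samples have been processed, so gradients add up"
    def __init__(self, n_acc=32): self.n_acc = n_acc
    def begin_fit(self): self.acc_count = 0
    def after_backward(self):
        self.acc_count += find_bs(self.yb)
        if self.acc_count < self.n_acc: raise CancelBatchException()  # skips `after_step` and `zero_grad`
        self.acc_count = 0
```

A full implementation would typically also scale the loss so that the accumulated gradient matches what a single large batch would have produced.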
###Code
#export
_ex_docs = dict(
    CancelFitException="Interrupts training and go to `after_fit`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelBatchException="Skip the rest of this batch and go to `after_batch`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#export
_loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train',
'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward',
'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train',
'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop',
'**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate',
'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit',
'after_cancel_fit', 'after_fit']
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
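# A small, hedged illustration (not exported) of `replacing_yield`: wrapping it with
# `contextmanager` temporarily swaps an attribute and restores it afterwards; this is the
# same pattern `Learner.no_logging` and `Learner.no_mbar` use below.
from contextlib import contextmanager
class _Tmp: pass
_tmp = _Tmp(); _tmp.val = 1
@contextmanager
def _with_val(o, v): return replacing_yield(o, 'val', v)
with _with_val(_tmp, 2): test_eq(_tmp.val, 2)
test_eq(_tmp.val, 1)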
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
elif device is None: device = 'cpu'
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
    elif with_opt: warn("Saved file doesn't contain an optimizer state.")
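# Hedged usage sketch for the two helpers above (hypothetical file name, kept as comments):
# _m = nn.Linear(2,1); _o = SGD(_m.parameters(), lr=0.1)
# save_model('tmp_model.pth', _m, _o)                # writes {'model': ..., 'opt': ...}
# load_model('tmp_model.pth', _m, _o, device='cpu')  # restores both state dicts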
# export
def _try_concat(o):
try: return torch.cat(o)
except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L())
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics")
self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property
def metrics(self): return self._metrics
@metrics.setter
def metrics(self,v): self._metrics = L(v).map(mk_metric)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(False): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(True ): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
names = ['shuffle', 'drop_last']
try:
dl,old,has = change_attrs(dl, names, [False,False])
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally:
dl,*_ = change_attrs(dl, names, old, has); self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
with self.added_cbs(cbs), self.no_logging(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
@delegates(GatherPredsCallback.__init__)
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, act=None, **kwargs):
cb = GatherPredsCallback(with_input=with_input, **kwargs)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
res = cb.all_tensors()
pred_i = 1 if with_input else 0
if res[pred_i] is not None:
res[pred_i] = act(res[pred_i])
if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i]))
return tuple(res)
def predict(self, item, rm_type_tfms=0):
dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms)
inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*tuplify(inp),*tuplify(dec_preds)))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
if dl is None: dl = self.dbunch.dls[ds_idx]
b = dl.one_batch()
_,_,preds = self.get_preds(dl=[b], with_decoded=True)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
def show_training_loop(self):
indent = 0
for s in _loop:
if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2
elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}')
else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s))
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def no_mbar(self): return replacing_yield(self, 'create_mbar', False)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
distrib_barrier()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
         remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner",
         added_cbs="Context manager that temporarily adds `cbs`",
ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
         get_preds="Get the predictions and targets on the `ds_idx`-th dataset or `dl`, optionally `with_input` and `with_loss`",
predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
         show_results="Show some predictions on the `ds_idx`-th dataset or `dl`",
show_training_loop="Show each step in the training loop",
no_logging="Context manager to temporarily remove `logger`",
no_mbar="Context manager to temporarily prevent the master progress bar from being created",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (under its snake-cased name). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`. `metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below). Training loop
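Before exercising the training loop, here is a hedged construction sketch (reusing the synthetic pieces defined for the tests in this notebook) with a custom `splitter` and a callback instance passed through `cbs`:
###Code
# `_two_group_splitter` is a made-up splitter for `RegModel`, returning two parameter groups.
def _two_group_splitter(m): return [[m.a], [m.b]]
_lrn = Learner(synth_dbunch(), RegModel(), loss_func=MSELossFlat(), opt_func=partial(SGD, mom=0.9),
               lr=1e-2, splitter=_two_group_splitter, cbs=TstCallback())
test_eq(len(_lrn.splitter(_lrn.model)), 2)
test_eq(len(_lrn.cbs), 2)  # the default TrainEvalCallback plus TstCallback
###Output
_____no_output_____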
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(6)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
for i in [0,1,3]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
test_close(end[2]-init[2], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute the predictions, the loss and the gradients, update the model parameters and zero the gradients). In validation mode, it stops after the loss computation.
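Spelled out without the callback machinery, a training-mode `one_batch` is roughly equivalent to the following hedged sketch (using the synthetic learner defined above):
###Code
# Callback-free restatement of a single training step (sketch only, not exported).
_l = synth_learner(); _l.create_opt()
_xb,_yb = _l.dbunch.one_batch()
_pred = _l.model(_xb)                # forward pass
_loss = _l.loss_func(_pred, _yb)     # loss computation
_loss.backward()                     # gradients
_l.opt.step(); _l.opt.zero_grad()    # parameter update, then zero the grads
###Output
_____no_output_____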
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`) The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or not. The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing
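Before testing the control flow, here is a hedged illustration of these attributes: a tiny throwaway callback that only reads a few of them without modifying anything.
###Code
# Records (epoch, iter, loss) for every training batch; nothing on the `Learner` is changed.
class PeekCallback(Callback):
    def begin_fit(self): self.seen = []
    def after_batch(self):
        if self.training: self.seen.append((self.epoch, self.iter, to_detach(self.loss)))
_peek = synth_learner(n_train=2, cbs=PeekCallback())
_peek.fit(1)
test_eq(len(_peek.peek.seen), 2)  # one entry per training batch (n_train=2)
###Output
_____no_output_____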
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
        name="Name of the `Metric`, snake-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
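For instance, here is a hedged sketch of such a metric: a root mean squared error accumulated over the whole epoch, which cannot be obtained by simply averaging per-batch RMSE values.
###Code
# Sketch only (not exported): accumulate the sum of squared errors and the count,
# then take the square root of the global mean in `value`.
class _RMSEMetric(Metric):
    def reset(self): self.sse,self.count = 0.,0
    def accumulate(self, learn):
        bs = find_bs(learn.yb)
        self.sse += to_detach(((learn.pred-learn.yb[0])**2).sum())
        self.count += bs
    @property
    def value(self): return (self.sse/self.count).sqrt() if self.count != 0 else None
learn = synth_learner()
tst = _RMSEMetric(); tst.reset()
t,u = torch.randn(100),torch.randn(100)
for i in range(0,100,25):
    learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
    tst.accumulate(learn)
test_close(tst.value, ((t-u)**2).mean().sqrt())
###Output
_____no_output_____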
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
def _maybe_reduce(val):
if num_distrib()>1:
val = val.clone()
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= num_distrib()
return val
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.iters,self.losses,self.values = [],[],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets[1:].map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
self.iters.append(self.smooth_loss.count)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.smooth_loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5, with_valid=True):
plt.plot(list(range(skip_start, len(self.losses))), self.losses[skip_start:], label='train')
if with_valid:
idx = (np.array(self.iters)<skip_start).sum()
plt.plot(self.iters[idx:], L(self.values[idx:]).itemgot(1), label='valid')
plt.legend()
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
         after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed by setting `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
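A hedged usage sketch: after a short (silenced) fit, the `Recorder` keeps one learning rate and one smoothed loss per training batch, plus one row of logged values per epoch.
###Code
# Sketch only: inspect what the default `Recorder` stored after two epochs.
learn = synth_learner(n_train=5)
with learn.no_logging(): learn.fit(2)
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))   # one entry per training batch
test_eq(len(learn.recorder.values), 2)                         # one row per epoch
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'time'])
###Output
_____no_output_____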
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
if not self.training: test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
mean = tensor(self.losses).mean()
self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,14.698701858520508,18.37638282775879,18.37638282775879,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds work with ds not evenly dividble by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
self.opt.clear_state()
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,21.00529670715332,22.428417205810547,00:00]
(#4) [0,17.18109703063965,18.20417022705078,00:00]
(#4) [0,13.83404541015625,14.777403831481934,00:00]
###Markdown
Exporting a `Learner`
###Code
#export
@patch
def export(self:Learner, fname='export.pkl'):
"Export the content of `self` without the items and the optimizer state for inference"
if rank_distrib(): return # don't export if slave proc
old_dbunch = self.dbunch
self.dbunch = self.dbunch.new_empty()
state = self.opt.state_dict()
self.opt = None
with warnings.catch_warnings():
#To avoid the warning that come from PyTorch about model not being checked
warnings.simplefilter("ignore")
torch.save(self, self.path/fname)
self.create_opt()
self.opt.load_state_dict(state)
self.dbunch = old_dbunch
###Output
_____no_output_____
###Markdown
TTA
###Code
#export
@patch
def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25):
"Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation"
if dl is None: dl = self.dbunch.dls[ds_idx]
if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms)
with dl.dataset.set_split_idx(0), self.no_mbar():
if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n)))
aug_preds = []
for i in self.progress.mbar if hasattr(self,'progress') else range(n):
self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch
# aug_preds.append(self.get_preds(dl=dl)[0][None])
aug_preds.append(self.get_preds(ds_idx)[0][None])
aug_preds = torch.cat(aug_preds).mean(0)
self.epoch = n
with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx)
preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta)
return preds,targs
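# Hedged, hypothetical usage (not executed here; it needs a DataSource-backed DataBunch
# with train-time augmentation rather than the synthetic tensors used in this notebook):
#   preds,targs = learn.tta(ds_idx=1, n=4, beta=0.25)
# which amounts to torch.lerp(aug_preds, plain_preds, 0.25) = 0.75*aug_preds + 0.25*plain_preds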
###Output
_____no_output_____
###Markdown
In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 09b_vision_utils.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
Converted xse_resnext.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn = 'learn',None
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions: looping through the data, we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradients. Any tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
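As a hedged sketch (not part of the library), here is a tiny callback hooking two of the events listed above to time each epoch:
###Code
import time
# Times every epoch; `begin_epoch` and `after_epoch` are two of the events described above.
class EpochTimerCallback(Callback):
    def begin_epoch(self): self.start_time = time.time()
    def after_epoch(self): print(f'epoch {self.epoch} took {time.time()-self.start_time:.2f}s')
###Output
_____no_output_____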
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut that lets us write `self.bla` instead of `self.learn.bla` for any `bla` attribute we seek.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that this only works for reading the value of the attribute; if you want to change it, you have to access it explicitly with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False): store_attr(self, "with_input,with_loss")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs=[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow
Sometimes we may want to skip some of the steps of the training loop: in gradient accumulation, for instance, we don't always want to do the step or the zeroing of the gradients. During an LR finder test, we don't want to do the validation phase of an epoch. And if we're training with an early-stopping strategy, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions that the training loop looks for (and properly catches).
###Code
#export
_ex_docs = dict(
    CancelFitException="Interrupts training and go to `after_fit`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelBatchException="Skip the rest of this batch and go to `after_batch`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:

- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`
- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
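For example, a minimal sketch (not part of the library; the loss threshold below is made up) of a callback pairing one of these exceptions with its `after_cancel_*` event:
###Code
#Illustrative sketch: interrupt training early and react to the interruption
class StopOnLowLossCallback(Callback):
    "Toy callback: stop fitting once the loss falls below a made-up threshold"
    def after_loss(self):
        if self.loss < 0.01: raise CancelFitException()
    def after_cancel_fit(self): print("Training interrupted: loss below threshold")
###Output
_____no_output_____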
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
elif device is None: device = 'cpu'
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
    elif with_opt: warn("Saved file doesn't contain an optimizer state.")
# export
def _try_concat(o):
try:
return torch.cat(o)
except:
return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L())
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=SGD, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics")
self.training,self.logger,self.opt,self.cbs = False,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
@property
def metrics(self): return self._metrics
@metrics.setter
def metrics(self,v): self._metrics = L(v).map(mk_metric)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(True ): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(False): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
try:
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
if dl is None: dl = self.dbunch.dls[ds_idx]
with self.added_cbs(cbs), self.no_logging():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, with_decoded=False, act=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
preds = act(torch.cat(cb.preds))
res = (preds, detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_decoded: res = res + (getattr(self.loss_func, 'decodes', noop)(preds),)
if with_input: res = (tuple(_try_concat(o) for o in zip(*cb.inputs)),) + res
if with_loss: res = res + (torch.cat(cb.losses),)
return res
def predict(self, item, rm_type_tfms=0):
dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms)
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
if dl is None: dl = self.dbunch.dls[ds_idx]
b = dl.one_batch()
_,_,preds = self.get_preds(dl=[b], with_decoded=True)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
def show_training_loop(self):
loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train',
'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward',
'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train',
'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop',
'**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate',
'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit',
'after_cancel_fit', 'after_fit']
indent = 0
for s in loop:
if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2
elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}')
else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s))
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
#TODO: if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
remove_cb="Add `cb` from the list of `Callback` and deregister `self` as their learner",
added_cbs="Context manage that temporarily adds `cbs`",
ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`",
predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`",
show_training_loop="Show each step in the training loop",
no_logging="Context manager to temporarily remove `logger`",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.

`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (with camel case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`.

`metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below).
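For instance, a minimal sketch of a custom `splitter` (illustrative only; the toy two-layer model and `_two_group_splitter` below are made up for this example) could separate a model into two parameter groups:
###Code
#Illustrative sketch of a custom splitter: one parameter group per layer of a toy model
_toy_model = nn.Sequential(nn.Linear(1,8), nn.Linear(8,1))
def _two_group_splitter(m): return [list(m[0].parameters()), list(m[1].parameters())]
#It would be passed at creation, e.g. Learner(dbunch, _toy_model, loss_func=MSELossFlat(), splitter=_two_group_splitter)
###Output
_____no_output_____
###Markdown
Training loop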
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
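A short usage sketch (the `'tmp_example'` name is made up for this example; it mirrors the test below, but adds the `device` argument):
###Code
#Illustrative sketch: save a checkpoint then reload it on the CPU
learn_tmp = synth_learner()
learn_tmp.fit(1)
learn_tmp.save('tmp_example') #written to learn_tmp.path/learn_tmp.model_dir/'tmp_example.pth'
learn_tmp = learn_tmp.load('tmp_example', device='cpu') #model (and optimizer state) loaded on the CPU
###Output
_____no_output_____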
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:

- `model`: the model used for training/validation
- `dbunch`: the underlying `DataBunch`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current `DataLoader` used for iteration
- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.
- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)

The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:

- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0. to 1., the percentage of training iterations completed
- `training`: flag to indicate whether we're in training mode or not

The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:

- `smooth_loss`: an exponentially-averaged version of the training loss
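As an illustration (not part of the library; `ReportCallback` is made up for this example), a callback reading a few of these attributes could look like this:
###Code
#Illustrative sketch: a callback reading some of the attributes listed above
class ReportCallback(Callback):
    "Toy callback: report progress on the last training batch of each epoch"
    def after_batch(self):
        if self.training and self.iter == self.n_iter-1:
            print(f"epoch {self.epoch}: last batch loss {self.loss:.4f} ({self.pct_train:.0%} of training done)")
#It could be used with e.g. synth_learner(cbs=ReportCallback()).fit(2)
###Output
_____no_output_____
###Markdown
Control flow testing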
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
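As an illustrative sketch (the `MaxErrorMetric` below is made up for this example, not part of the library), a metric that can't be expressed as a per-batch average, such as the maximum absolute error over the whole validation set, could be implemented like this:
###Code
#Illustrative sketch of a custom Metric that is not a simple per-batch average
class MaxErrorMetric(Metric):
    "Toy metric: maximum absolute error over all the samples seen since `reset`"
    def reset(self): self.max_err = None
    def accumulate(self, learn):
        err = to_detach((learn.pred - learn.yb[0]).abs().max()) #detached from the graph; the note above advises keeping such state on the CPU
        self.max_err = err if self.max_err is None else max(self.max_err, err)
    @property
    def value(self): return self.max_err
#It could be passed to a Learner with e.g. synth_learner(metrics=MaxErrorMetric())
###Output
_____no_output_____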
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean()), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.iters,self.losses,self.values = [],[],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets[1:].map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
self.iters.append(self.smooth_loss.count)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.smooth_loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5, with_valid=True):
plt.plot(self.losses[skip_start:], label='train')
if with_valid:
plt.plot(self.iters, L(self.values).itemgot(0), label='valid')
plt.legend()
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
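Concretely, the smoothed loss follows the same debiased exponential moving average as `AvgSmoothLoss` above; a small standalone sketch of that update rule (with made-up loss values) is:
###Code
#Standalone sketch of the debiased exponential moving average used for the smoothed loss
beta = 0.98
val,count = tensor(0.),0
for loss in [tensor(2.), tensor(1.5), tensor(1.)]: #made-up loss values
    count += 1
    val = torch.lerp(loss, val, beta) #val = beta*val + (1-beta)*loss
    smooth = val/(1-beta**count)      #debiasing term, as in `AvgSmoothLoss.value`
###Output
_____no_output_____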
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
if not self.training: test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
mean = tensor(self.losses).mean()
self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,10.249631881713867,9.148826599121094,9.148827075958252,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance, if the loss is a cross-entropy loss, a softmax will be applied; if the loss is binary cross-entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'.
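For instance, a sketch of a custom loss compatible with `with_loss=True` (the `_ToyL1Loss` below is made up for this example) just needs to expose and honor a `reduction` attribute:
###Code
#Illustrative sketch: a custom loss exposing a `reduction` attribute that supports 'none'
class _ToyL1Loss(Module):
    reduction = 'mean'
    def forward(self, inp, targ): return F.l1_loss(inp, targ, reduction=self.reduction)
    def activation(self, x): return x #optional: get_preds/predict will apply it to the raw outputs
###Output
_____no_output_____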
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds work with ds not evenly dividble by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order:

- the prediction from the model, potentially passed through the activation of the loss function (if it has one)
- the decoded prediction, using the potential `decodes` method from it
- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,30.35057258605957,27.175193786621094,00:00]
(#4) [0,23.77756690979004,21.27766227722168,00:00]
(#4) [0,18.555871963500977,16.66706085205078,00:00]
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner
> Basic class for handling the training loop

We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn = 'learn',None
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions: looping through the data we:

- compute the output of the model from the input
- calculate a loss between this output and the desired target
- compute the gradients of this loss with respect to all the model parameters
- update the parameters accordingly
- zero all the gradients

Any tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:

- `begin_fit`: called before doing anything, ideal for initial setup.
- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.
- `begin_train`: called at the beginning of the training part of an epoch.
- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes into the model (for instance, changing the input with techniques like mixup).
- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.
- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).
- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).
- `after_step`: called after the step and before the gradients are zeroed.
- `after_batch`: called at the end of a batch, for any clean-up before the next one.
- `after_train`: called at the end of the training phase of an epoch.
- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.
- `after_validate`: called at the end of the validation part of an epoch.
- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.
- `after_fit`: called at the end of training, for final clean-up.
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that this shortcut only works for reading the value of an attribute; if you want to change it, you have to access it explicitly with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
def _maybe_reduce(val):
if num_distrib()>1:
val = val.clone()
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= num_distrib()
return val
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None):
store_attr(self, "with_input,with_loss,save_preds,save_targs")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs = []
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
preds = to_detach(self.pred)
if self.save_preds is None: self.preds.append(preds)
else:
(self.save_preds/str(self.iter)).save_array(preds)
# self.preds.append(preds[0][None])
targs = to_detach(self.yb)
if self.save_targs is None: self.targets.append(targs)
else:
(self.save_targs/str(self.iter)).save_array(targs[0])
# self.targets.append(targs[0][None])
if self.with_loss:
bs = find_bs(self.yb)
loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
self.losses.append(to_detach(loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads, for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with an early-stopping strategy, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions that the training loop looks for (and properly catches).
###Code
#export
_ex_docs = dict(
CancelFitException="Skip the rest of this batch and go to `after_batch`",
CancelEpochException="Skip the rest of the training part of the epoch and go to `after_train`",
CancelTrainException="Skip the rest of the validation part of the epoch and go to `after_validate`",
CancelValidException="Skip the rest of this epoch and go to `after_epoch`",
CancelBatchException="Interrupts training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
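#hypothetical sketch (not part of the library): a callback interrupts a step simply by
#raising one of these exceptions, e.g. skipping every other batch
class SkipOddBatches(Callback):
    def begin_batch(self):
        if self.iter % 2 == 1: raise CancelBatchException()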
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after it with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
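For instance, a hypothetical early-stopping-style callback could interrupt training with `CancelFitException` and react to the interruption in `after_cancel_fit`:
###Code
#hypothetical sketch of reacting to a cancellation event
class StopEarlyCallback(Callback):
    "Interrupt training after two epochs and log that it happened"
    def after_epoch(self):
        if self.epoch >= 1: raise CancelFitException()
    def after_cancel_fit(self): print("training was interrupted early")
###Output
_____no_output_____
###Markdown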
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
elif device is None: device = 'cpu'
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
elif with_opt: warn("Saved filed doesn't contain an optimizer state.")
# export
def _try_concat(o):
try:
return torch.cat(o)
except:
return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L())
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics")
self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property
def metrics(self): return self._metrics
@metrics.setter
def metrics(self,v): self._metrics = L(v).map(mk_metric)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(False): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(True ): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
try:
dl.shuffle,old_shuffle = False,dl.shuffle
dl.drop_last,old_drop = False,dl.drop_last
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally:
dl.shuffle,dl.drop_last = old_shuffle,old_drop; self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
#self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
if dl is None: dl = self.dbunch.dls[ds_idx]
with self.added_cbs(cbs), self.no_logging(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, with_decoded=False, act=None,
save_preds=None, save_targs=None):
#self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, save_preds=save_preds, save_targs=save_targs)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
res = []
if len(cb.preds):
preds = act(torch.cat(cb.preds))
res.append(preds)
if with_decoded: res.append(getattr(self.loss_func, 'decodes', noop)(preds))
res.append(detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_input: res = [tuple(_try_concat(o) for o in zip(*cb.inputs))] + res
if with_loss: res.append(torch.cat(cb.losses))
return res
def predict(self, item, rm_type_tfms=0):
dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms)
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
if dl is None: dl = self.dbunch.dls[ds_idx]
b = dl.one_batch()
_,_,preds = self.get_preds(dl=[b], with_decoded=True)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
def show_training_loop(self):
loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train',
'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward',
'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train',
'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop',
'**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate',
'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit',
'after_cancel_fit', 'after_fit']
indent = 0
for s in loop:
if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2
elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}')
else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s))
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def no_mbar(self): return replacing_yield(self, 'create_mbar', False)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
distrib_barrier()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
remove_cb="Add `cb` from the list of `Callback` and deregister `self` as their learner",
added_cbs="Context manage that temporarily adds `cbs`",
ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`",
predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`",
show_training_loop="Show each step in the training loop",
no_logging="Context manager to temporarily remove `logger`",
no_mbar="Context manager to temporarily prevent the master progress bar from being created",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner`, under its snake-cased name (for instance `TrainEvalCallback` becomes `learn.train_eval`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`. `metrics` is an optional list of metrics that can be either functions or `Metric`s (see below). Training loop
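Before the training-loop tests below, here is a minimal sketch of a custom `splitter` (the two-part model layout is hypothetical):
###Code
#hypothetical sketch: a splitter returning two parameter groups for a model with
#`body` and `head` sub-modules, so each group can get its own hyper-parameters
def body_head_splitter(model):
    return [list(model.body.parameters()), list(model.head.parameters())]
#it would then be passed at creation, e.g. Learner(dbunch, model, splitter=body_head_splitter)
###Output
_____no_output_____
###Markdown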
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(6)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
for i in [0,1,3]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
test_close(end[2]-init[2], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
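Ignoring the callback events, the training-mode path is roughly equivalent to a plain PyTorch step (a sketch, not the actual implementation):
###Code
#rough sketch of what one_batch does in training mode, without the callback events
def plain_training_step(model, loss_func, opt, xb, yb):
    pred = model(*xb)            #compute the output of the model
    loss = loss_func(pred, *yb)  #compute the loss
    loss.backward()              #compute the gradients
    opt.step()                   #update the parameters
    opt.zero_grad()              #zero the gradients
    return loss
###Output
_____no_output_____
###Markdown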
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved.
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing
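Before those tests, here is a hypothetical sketch of a callback reading a few of the attributes listed above:
###Code
#hypothetical sketch: a callback using some of the attributes exposed on the Learner
class ReportProgressCallback(Callback):
    "Print the overall progress at the end of every training phase"
    def after_train(self):
        print(f"epoch {self.epoch+1}/{self.n_epoch}: {100*self.pct_train:.0f}% of training done")
###Output
_____no_output_____
###Markdown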
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
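For example, a hypothetical root-mean-squared-error metric, which cannot be obtained by simply averaging per-batch values, could be written like this (a sketch):
###Code
#hypothetical sketch of a custom Metric: RMSE accumulated over the whole validation set
class RMSEMetric(Metric):
    def reset(self): self.total,self.count = 0.,0
    def accumulate(self, learn):
        #to_detach detaches the tensor (and by default moves it to the CPU, as recommended above)
        self.total += to_detach(((learn.pred-learn.yb[0])**2).sum())
        self.count += find_bs(learn.yb)
    @property
    def value(self): return (self.total/self.count).sqrt() if self.count != 0 else None
###Output
_____no_output_____
###Markdown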
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
def _maybe_reduce(val):
if num_distrib()>1:
val = val.clone()
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= num_distrib()
return val
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder -
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.iters,self.losses,self.values = [],[],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets[1:].map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
self.iters.append(self.smooth_loss.count)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.smooth_loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5, with_valid=True):
plt.plot(self.losses[skip_start:], label='train')
if with_valid:
plt.plot(self.iters, L(self.values).itemgot(1), label='valid')
plt.legend()
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed by setting `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
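As a usage sketch (assuming the synthetic learner defined above), the recorded values can be inspected after training:
###Code
#usage sketch: inspect what the Recorder stored after a short training run
learn = synth_learner(n_train=5)
learn.fit(2)
print(learn.recorder.metric_names)  #column names used for logging
print(len(learn.recorder.values))   #one entry of recorded loss/metric values per epoch
print(len(learn.recorder.losses))   #smoothed training loss recorded after every training batch
###Output
_____no_output_____
###Markdown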
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
if not self.training: test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
mean = tensor(self.losses).mean()
self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,18.497804641723633,17.483501434326172,17.483501434326172,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds work with ds not evenly dividble by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method of the loss function- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
self.opt.clear_state()
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,5.785715579986572,6.27076530456543,00:00]
(#4) [0,4.893340110778809,5.358923435211182,00:00]
(#4) [0,4.20172119140625,4.581376075744629,00:00]
###Markdown
Exporting a `Learner`
###Code
#export
@patch
def export(self:Learner, fname='export.pkl'):
"Export the content of `self` without the items and the optimizer state for inference"
if rank_distrib(): return # don't export if slave proc
old_dbunch = self.dbunch
self.dbunch = self.dbunch.new_empty()
state = self.opt.state_dict()
self.opt = None
with warnings.catch_warnings():
#To avoid the warning that come from PyTorch about model not being checked
warnings.simplefilter("ignore")
torch.save(self, self.path/fname)
self.create_opt()
self.opt.load_state_dict(state)
self.dbunch = old_dbunch
###Output
_____no_output_____
###Markdown
TTA
###Code
#export
@patch
def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25):
"Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation"
if dl is None: dl = self.dbunch.dls[ds_idx]
if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms)
with dl.dataset.set_split_idx(0), self.no_mbar():
if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n)))
aug_preds = []
for i in self.progress.mbar if hasattr(self,'progress') else range(n):
self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch
# aug_preds.append(self.get_preds(dl=dl)[0][None])
aug_preds.append(self.get_preds(ds_idx)[0][None])
aug_preds = torch.cat(aug_preds).mean(0)
self.epoch = n
with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx)
preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta)
return preds,targs
###Output
_____no_output_____
###Markdown
In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results. Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 09b_vision_utils.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
Converted xse_resnext.ipynb.
###Markdown
Learner
> Basic class for handling the training loop

We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn = 'learn',None
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:

- compute the output of the model from the input
- calculate a loss between this output and the desired target
- compute the gradients of this loss with respect to all the model parameters
- update the parameters accordingly
- zero all the gradients

Any tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:

- `begin_fit`: called before doing anything, ideal for initial setup.
- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.
- `begin_train`: called at the beginning of the training part of an epoch.
- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).
- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.
- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).
- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).
- `after_step`: called after the step and before the gradients are zeroed.
- `after_batch`: called at the end of a batch, for any clean-up before the next one.
- `after_train`: called at the end of the training phase of an epoch.
- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.
- `after_validate`: called at the end of the validation part of an epoch.
- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.
- `after_fit`: called at the end of training, for final clean-up.
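As an illustration, here is a hedged sketch of a small callback hooking a few of these events (the class name and messages are made up; it would be passed to a `Learner` through `cbs` once `Learner` is defined below):
###Code
#Hypothetical example: a callback reporting progress at a few of the events listed above
class PrintProgressCallback(Callback):
    def begin_fit(self):   print(f"Training for {self.n_epoch} epochs")
    def begin_epoch(self): print(f"Starting epoch {self.epoch}")
    def after_batch(self):
        #`self.training`, `self.iter` and `self.loss` are read from the `Learner` through `GetAttr`
        if self.training and self.iter == 0: print(f"  first training batch done, loss={float(self.loss):.4f}")
    def after_fit(self):   print("Done")
###Output
_____no_output_____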
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False): store_attr(self, "with_input,with_loss")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs=[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss:
bs = find_bs(self.yb)
loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
self.losses.append(to_detach(loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't aways want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop.This is made possible by raising specific exceptions the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
    CancelFitException="Interrupts training and go to `after_fit`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelBatchException="Skip the rest of this batch and go to `after_batch`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:

- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`
- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
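For instance, a hedged sketch of a callback using this mechanism (the class name is made up) could interrupt training when the loss becomes NaN, then react in the matching `after_cancel_fit` event:
###Code
#Hypothetical sketch: stop training as soon as the loss becomes NaN
class StopOnNaNCallback(Callback):
    def after_loss(self):
        if torch.isnan(self.loss): raise CancelFitException()
    def after_cancel_fit(self): print("Training interrupted: the loss was NaN")
###Output
_____no_output_____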
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
elif device is None: device = 'cpu'
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
elif with_opt: warn("Saved filed doesn't contain an optimizer state.")
# export
def _try_concat(o):
try:
return torch.cat(o)
except:
return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L())
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics")
self.training,self.logger,self.opt,self.cbs = False,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
@property
def metrics(self): return self._metrics
@metrics.setter
def metrics(self,v): self._metrics = L(v).map(mk_metric)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(True ): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(False): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
try:
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
if dl is None: dl = self.dbunch.dls[ds_idx]
with self.added_cbs(cbs), self.no_logging():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, with_decoded=False, act=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
preds = act(torch.cat(cb.preds))
res = (preds, detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_decoded: res = res + (getattr(self.loss_func, 'decodes', noop)(preds),)
if with_input: res = (tuple(_try_concat(o) for o in zip(*cb.inputs)),) + res
if with_loss: res = res + (torch.cat(cb.losses),)
return res
def predict(self, item, rm_type_tfms=0):
dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms)
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
if dl is None: dl = self.dbunch.dls[ds_idx]
b = dl.one_batch()
_,_,preds = self.get_preds(dl=[b], with_decoded=True)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
def show_training_loop(self):
loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train',
'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward',
'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train',
'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop',
'**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate',
'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit',
'after_cancel_fit', 'after_fit']
indent = 0
for s in loop:
if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2
elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}')
else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s))
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
#TODO: if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
remove_cb="Add `cb` from the list of `Callback` and deregister `self` as their learner",
added_cbs="Context manage that temporarily adds `cbs`",
ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`",
predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`",
show_training_loop="Show each step in the training loop",
no_logging="Context manager to temporarily remove `logger`",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.

`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` under its snake_case name (e.g. `learn.train_eval` for `TrainEvalCallback`). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`.

`metrics` is an optional list of metrics, which can be either functions or `Metric`s (see below).
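For instance, here is a hedged sketch of a custom `splitter` on the toy `RegModel` (the helper name is hypothetical, not part of the library): it simply returns one list of parameters per group.
###Code
#Hedged sketch of a custom `splitter`: one (hypothetical) parameter group per parameter of `RegModel`
def _toy_splitter(m): return [[m.a], [m.b]]
learn = Learner(synth_dbunch(), RegModel(), loss_func=MSELossFlat(), splitter=_toy_splitter)
test_eq(len(learn.splitter(learn.model)), 2)  #two parameter groups will be created for the optimizer
###Output
_____no_output_____
###Markdown
Training loop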
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
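Stripped of callbacks and cancellation handling, the training-mode body of `one_batch` is the standard PyTorch step; here is a self-contained sketch with a toy model and random data (nothing from the library):
###Code
#Plain-PyTorch sketch of what `one_batch` does in training mode (toy model/data, no callbacks)
model = nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
xb, yb = torch.randn(8, 1), torch.randn(8, 1)
pred = model(xb)                 #compute the output of the model
loss = ((pred - yb)**2).mean()   #calculate the (MSE) loss
loss.backward()                  #compute the gradients
opt.step()                       #update the parameters
opt.zero_grad()                  #zero all the gradients
###Output
_____no_output_____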
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved.
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:

- `model`: the model used for training/validation
- `dbunch`: the underlying `DataBunch`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current `DataLoader` used for iteration
- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.
- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)

The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:

- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0. to 1., the percentage of training iterations completed
- `training`: flag to indicate if we're in training mode or not

The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:

- `smooth_loss`: an exponentially-averaged version of the training loss
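As a quick illustration (hypothetical callback, using the `synth_learner` helper defined earlier), a callback can simply read those attributes through the usual `GetAttr` delegation:
###Code
#Hypothetical sketch: a callback that reads a few of the attributes listed above
class InspectCallback(Callback):
    def after_batch(self):
        if self.training and self.iter == self.n_iter-1:
            print(f"epoch {self.epoch+1}/{self.n_epoch}: pct_train={self.pct_train:.2f}, loss={float(self.loss):.4f}")
learn = synth_learner(cbs=InspectCallback())
learn.fit(2)
###Output
_____no_output_____
###Markdown
Control flow testing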
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.

> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
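For instance, here is a hedged sketch of a root-mean-squared-error metric (the class name is made up); RMSE is a good example of something that can't be obtained by averaging per-batch values, so it accumulates plain Python floats (nothing stays on the GPU) and only takes the square root at the end:
###Code
#Hedged sketch of a metric that can't be a simple batch average: RMSE over the whole validation set
class RMSE(Metric):
    def reset(self): self.sqr_err,self.count = 0.,0
    def accumulate(self, learn):
        targ = detuplify(learn.yb)  #single target assumed for this sketch
        self.sqr_err += ((learn.pred - targ)**2).sum().item()  #store plain floats, not tensors
        self.count   += find_bs(learn.yb)
    @property
    def value(self): return (self.sqr_err/self.count)**0.5 if self.count != 0 else None
#Quick check, mimicking the `AvgMetric` test further down
learn = synth_learner()
t,u = torch.randn(20),torch.randn(20)
learn.pred,learn.yb = t,(u,)
tst = RMSE(); tst.reset(); tst.accumulate(learn)
test_close(tst.value, ((t-u)**2).mean().sqrt().item())
###Output
_____no_output_____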
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean()), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder -
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.iters,self.losses,self.values = [],[],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets[1:].map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
self.iters.append(self.smooth_loss.count)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.smooth_loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5, with_valid=True):
plt.plot(self.losses[skip_start:], label='train')
if with_valid:
plt.plot(self.iters, L(self.values).itemgot(0), label='valid')
plt.legend()
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
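A quick, hedged illustration of what the `Recorder` stores after a short training run (using the `synth_learner` helper defined earlier):
###Code
#Quick look at what the `Recorder` registers during `fit`
learn = synth_learner(n_train=5)
learn.fit(1)
learn.recorder.metric_names, learn.recorder.values[-1], len(learn.recorder.losses)
###Output
_____no_output_____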
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
if not self.training: test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
mean = tensor(self.losses).mean()
self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,10.249631881713867,9.148826599121094,9.148827075958252,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`.

> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'.
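To illustrate the note above, here is a hedged sketch of a custom loss exposing such a `reduction` attribute (the class name is made up, mirroring the `_FakeLossFunc` pattern used later in this notebook):
###Code
#Hedged sketch: a loss with a `reduction` attribute, so `get_preds(..., with_loss=True)` can switch it to 'none'
class L1LossFlat(Module):
    reduction = 'mean'
    def forward(self, inp, targ): return F.l1_loss(inp.view(-1), targ.view(-1), reduction=self.reduction)
learn = synth_learner(n_train=5)
learn.loss_func = L1LossFlat()
preds,targs,losses = learn.get_preds(with_loss=True)
test_eq(losses.shape, targs.shape)  #one loss value per item
###Output
_____no_output_____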
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds work with ds not evenly dividble by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order:

- the prediction from the model, potentially passed through the activation of the loss function (if it has one)
- the decoded prediction, using the potential `decodes` method from it
- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,30.35057258605957,27.175193786621094,00:00]
(#4) [0,23.77756690979004,21.27766227722168,00:00]
(#4) [0,18.555871963500977,16.66706085205078,00:00]
###Markdown
Exporting a `Learner`
###Code
#export
@patch
def export(self:Learner, fname='export.pkl'):
"Export the content of `self` without the items and the optimizer state for inference"
old_dbunch = self.dbunch
    self.dbunch = self.dbunch.new_empty()
state = self.opt.state_dict()
self.opt = None
with warnings.catch_warnings():
        #To avoid the warnings that come from PyTorch about the model not being checked
warnings.simplefilter("ignore")
torch.save(self, open(self.path/fname, 'wb'))
self.create_opt()
self.opt.load_state_dict(state)
self.dbunch = old_dbunch
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn = 'learn',None
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
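As a minimal sketch (the callback name and its logic are illustrative, not part of the library), a custom `Callback` simply defines methods named after the events it wants to react to:
###Code
#A minimal illustrative sketch (not exported)
import time
class TimerCallback(Callback):
    "Sketch: time a whole training run using the `begin_fit`/`after_fit` events"
    def begin_fit(self): self.start_time = time.time()
    def after_fit(self): print(f"Training took {time.time()-self.start_time:.1f}s")
###Output
_____no_output_____
###Markdown
Once `Learner` is defined below, passing `cbs=TimerCallback()` (or `cb_funcs=TimerCallback`) is all that is needed for these two methods to be called at the right time.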
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
def _maybe_reduce(val):
if num_distrib()>1:
val = val.clone()
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= num_distrib()
return val
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None):
store_attr(self, "with_input,with_loss,save_preds,save_targs")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs = []
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
preds = to_detach(self.pred)
if self.save_preds is None: self.preds.append(preds)
else:
(self.save_preds/str(self.iter)).save_array(preds)
# self.preds.append(preds[0][None])
targs = to_detach(self.yb)
if self.save_targs is None: self.targets.append(targs)
else:
(self.save_targs/str(self.iter)).save_array(targs[0])
# self.targets.append(targs[0][None])
if self.with_loss:
bs = find_bs(self.yb)
loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
self.losses.append(to_detach(loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
CancelFitException="Skip the rest of this batch and go to `after_batch`",
CancelEpochException="Skip the rest of the training part of the epoch and go to `after_train`",
CancelTrainException="Skip the rest of the validation part of the epoch and go to `after_validate`",
CancelValidException="Skip the rest of this epoch and go to `after_epoch`",
CancelBatchException="Interrupts training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
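For example, here is a minimal sketch (the callback and its threshold are hypothetical, not part of the library) of an early-stopping behavior: it raises `CancelFitException` as soon as the training loss gets below a threshold, and uses `after_cancel_fit` to report it:
###Code
#A minimal illustrative sketch (not exported)
class StopAtLossCallback(Callback):
    "Sketch: interrupt training once the training loss drops below `thresh`"
    def __init__(self, thresh=0.1): self.thresh = thresh
    def after_batch(self):
        if self.training and self.loss < self.thresh: raise CancelFitException()
    def after_cancel_fit(self): print(f"Stopped early: loss {self.loss.item():.4f} < {self.thresh}")
###Output
_____no_output_____
###Markdown
With the `Learner` defined below, `learn.fit(10, cbs=StopAtLossCallback(0.05))` would go through `after_epoch`, `after_cancel_fit` and `after_fit` as soon as the condition is met (see the control-flow tests near the end of this notebook).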
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
elif device is None: device = 'cpu'
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
elif with_opt: warn("Saved filed doesn't contain an optimizer state.")
# export
def _try_concat(o):
try: return torch.cat(o)
except: return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L())
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics")
self.training,self.create_mbar,self.logger,self.opt,self.cbs = False,True,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
@property
def metrics(self): return self._metrics
@metrics.setter
def metrics(self,v): self._metrics = L(v).map(mk_metric)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(False): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(True ): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
names = ['shuffle', 'drop_last']
try:
dl,old,has = change_attrs(dl, names, [False,False])
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally:
dl,*_ = change_attrs(dl, names, old, has); self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
#self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
if dl is None: dl = self.dbunch.dls[ds_idx]
with self.added_cbs(cbs), self.no_logging(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, with_decoded=False, act=None,
save_preds=None, save_targs=None):
#self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, save_preds=save_preds, save_targs=save_targs)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced(), self.no_mbar():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
res = []
if len(cb.preds):
preds = act(torch.cat(cb.preds))
res.append(preds)
if with_decoded: res.append(getattr(self.loss_func, 'decodes', noop)(preds))
res.append(detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_input: res = [tuple(_try_concat(o) for o in zip(*cb.inputs))] + res
if with_loss: res.append(torch.cat(cb.losses))
return res
def predict(self, item, rm_type_tfms=0):
dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms)
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
if dl is None: dl = self.dbunch.dls[ds_idx]
b = dl.one_batch()
_,_,preds = self.get_preds(dl=[b], with_decoded=True)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
def show_training_loop(self):
loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train',
'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward',
'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train',
'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop',
'**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate',
'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit',
'after_cancel_fit', 'after_fit']
indent = 0
for s in loop:
if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2
elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}')
else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s))
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def no_mbar(self): return replacing_yield(self, 'create_mbar', False)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
distrib_barrier()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
remove_cb="Add `cb` from the list of `Callback` and deregister `self` as their learner",
added_cbs="Context manage that temporarily adds `cbs`",
ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset or `dl`, optionally `with_input` and `with_loss`",
predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
show_results="Show some predictions on `ds_idx`-th dbunchset or `dl`",
show_training_loop="Show each step in the training loop",
no_logging="Context manager to temporarily remove `logger`",
no_mbar="Context manager to temporarily prevent the master progress bar from being created",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (with its name in snake case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`.`metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below).
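As a quick sketch (the splitter below is hypothetical and only meant to show the mechanics), here is a `Learner` built on the synthetic data above with a custom `splitter` producing two parameter groups:
###Code
#A minimal illustrative sketch (not exported)
def two_group_splitter(m): return [[m.a], [m.b]]   #hypothetical: one group per parameter of RegModel
learn = Learner(synth_dbunch(), RegModel(), loss_func=MSELossFlat(), splitter=two_group_splitter, lr=1e-2)
learn.create_opt()
learn.opt.hypers   #one set of hyper-parameters per parameter group returned by the splitter
###Output
_____no_output_____
###Markdown
Discriminative learning rates and `freeze_to` (defined at the end of this notebook) rely on those parameter groups.
Training loop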
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(6)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
for i in [0,1,3]: assert not torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
test_close(end[2]-init[2], -0.05 * torch.ones_like(end[2]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
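A minimal sketch of driving `one_batch` by hand (it mirrors the hidden test cell below; in normal use `fit` does all of this setup for you):
###Code
#A minimal illustrative sketch (not exported)
learn = synth_learner(lr=1e-2)
learn.cbs = learn.cbs[1:]                               #remove TrainEvalCallback: it expects `begin_fit`/`all_batches` to have set its counters
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)  #`fit` would normally do this through `create_opt`
learn.model.train(); learn.training = True
b = learn.dbunch.one_batch()
learn.one_batch(0, b)                                   #forward, loss, backward, step, zero the gradients
learn.loss
###Output
_____no_output_____
###Markdown
The same call in validation mode (`learn.training = False`) would stop right after the loss computation, which is what `_do_epoch_validate` relies on.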
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved.
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss
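As a minimal sketch (hypothetical callback, not part of the library), here is a callback that reads a few of those attributes to report progress at the end of each training epoch's last batch:
###Code
#A minimal illustrative sketch (not exported)
class ProgressPrintCallback(Callback):
    "Sketch: read `iter`, `n_iter`, `epoch`, `n_epoch`, `pct_train` and `loss` from the `Learner`"
    run_after = TrainEvalCallback   #so that `pct_train` is already updated for this batch
    def after_batch(self):
        if self.training and self.iter == self.n_iter-1:
            print(f"epoch {self.epoch+1}/{self.n_epoch}: {self.pct_train:.0%} of training done, "
                  f"last batch loss {self.loss.item():.4f}")
learn = synth_learner(cbs=ProgressPrintCallback())
learn.fit(2)
###Output
_____no_output_____
###Markdown
Since the callback only reads those attributes, plain `self.iter`, `self.loss`, etc. (routed to the `Learner` through `GetAttr`) are all that is needed.
Control flow testing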
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
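As a minimal sketch (hypothetical metric, not exported), here is a `Metric` that cannot be expressed as a plain average: the worst absolute error seen over the whole validation set.
###Code
#A minimal illustrative sketch (not exported)
class MaxAbsError(Metric):
    "Sketch: track the worst absolute error instead of an average"
    def reset(self): self.max_err = tensor(0.)
    def accumulate(self, learn):
        #detach and move to the CPU, as recommended in the note above
        err = (learn.pred - learn.yb[0]).abs().max().detach().cpu()
        self.max_err = torch.max(self.max_err, err)
    @property
    def value(self): return self.max_err
tst = MaxAbsError()
tst.reset()
learn = synth_learner()
learn.pred,learn.yb = tensor([1.,2.,3.]),(tensor([1.,1.,1.]),)
tst.accumulate(learn)
test_eq(tst.value, tensor(2.))
###Output
_____no_output_____
###Markdown
Passed to a `Learner` via `metrics=MaxAbsError()`, it would be reset and accumulated by the `Recorder` defined a bit further down.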
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
def _maybe_reduce(val):
if num_distrib()>1:
val = val.clone()
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= num_distrib()
return val
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.iters,self.losses,self.values = [],[],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets[1:].map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
self.iters.append(self.smooth_loss.count)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.smooth_loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5, with_valid=True):
plt.plot(self.losses[skip_start:], label='train')
if with_valid:
plt.plot(self.iters, L(self.values).itemgot(1), label='valid')
plt.legend()
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
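For instance (a minimal sketch with a hypothetical `mae` function), turning `train_metrics` on adds a `train_*`/`valid_*` pair of columns for each metric:
###Code
#A minimal illustrative sketch (not exported)
def mae(out, targ): return (out-targ).abs().mean()   #hypothetical metric used only for this example
learn = synth_learner(n_train=5, metrics=mae)
learn.recorder.train_metrics = True                  #also compute `mae` on the training set
learn.fit(1)
learn.recorder.metric_names
###Output
_____no_output_____
###Markdown
The hidden cells below test this behavior (along with `add_time=False`) using `tst_metric`.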
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
if not self.training: test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
mean = tensor(self.losses).mean()
self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,15.826353073120117,20.148929595947266,20.14892864227295,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds work with ds not evenly dividble by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method of the loss function- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
self.opt.clear_state()
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, wd=0.)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,13.555200576782227,11.764554977416992,00:00]
(#4) [0,10.777898788452148,9.38945484161377,00:00]
(#4) [0,8.562957763671875,7.495195388793945,00:00]
###Markdown
Exporting a `Learner`
###Code
#export
@patch
def export(self:Learner, fname='export.pkl'):
"Export the content of `self` without the items and the optimizer state for inference"
if rank_distrib(): return # don't export if slave proc
old_dbunch = self.dbunch
self.dbunch = self.dbunch.new_empty()
state = self.opt.state_dict()
self.opt = None
with warnings.catch_warnings():
#To avoid the warning that come from PyTorch about model not being checked
warnings.simplefilter("ignore")
torch.save(self, self.path/fname)
self.create_opt()
self.opt.load_state_dict(state)
self.dbunch = old_dbunch
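#Added sketch (not run here): a typical round trip. `export` pickles the whole Learner
#with `torch.save`, so the exported file can be read back with `torch.load`:
#  learn.export('export.pkl')
#  learn_inf = torch.load(learn.path/'export.pkl')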
###Output
_____no_output_____
###Markdown
TTA
###Code
#export
@patch
def tta(self:Learner, ds_idx=1, dl=None, n=4, item_tfms=None, batch_tfms=None, beta=0.25):
"Return predictions on the `ds_idx` dataset or `dl` using Test Time Augmentation"
if dl is None: dl = self.dbunch.dls[ds_idx]
if item_tfms is not None or batch_tfms is not None: dl = dl.new(after_item=item_tfms, after_batch=batch_tfms)
with dl.dataset.set_split_idx(0), self.no_mbar():
if hasattr(self,'progress'): self.progress.mbar = master_bar(list(range(n)))
aug_preds = []
for i in self.progress.mbar if hasattr(self,'progress') else range(n):
self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch
# aug_preds.append(self.get_preds(dl=dl)[0][None])
aug_preds.append(self.get_preds(ds_idx)[0][None])
aug_preds = torch.cat(aug_preds).mean(0)
self.epoch = n
with dl.dataset.set_split_idx(1): preds,targs = self.get_preds(ds_idx)
preds = (aug_preds,preds) if beta is None else torch.lerp(aug_preds, preds, beta)
return preds,targs
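#Added illustration of the blend used above: torch.lerp(aug_preds, preds, beta) equals
#(1-beta)*aug_preds + beta*preds. Typical use (not run here): preds,targs = learn.tta(n=4, beta=0.25)
_a,_p = tensor([1.,2.]),tensor([3.,4.])
test_close(torch.lerp(_a, _p, 0.25), 0.75*_a + 0.25*_p)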
###Output
_____no_output_____
###Markdown
In practice, we get the predictions `n` times with the transforms of the training set and average those. The final predictions are `(1-beta)` multiplied by this average + `beta` multiplied by the predictions obtained with the transforms of the dataset. Set `beta` to `None` to get a tuple of the predictions and tta results.

Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 09b_vision_utils.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
Converted xse_resnext.ipynb.
###Markdown
Learner
> Basic class for handling the training loop

We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export core
_camel_re1 = re.compile('(.)([A-Z][a-z]+)')
_camel_re2 = re.compile('([a-z0-9])([A-Z])')
def camel2snake(name):
s1 = re.sub(_camel_re1, r'\1_\2', name)
return re.sub(_camel_re2, r'\1_\2', s1).lower()
test_eq(camel2snake('ClassAreCamel'), 'class_are_camel')
#export
def class2attr(self, cls_name):
return camel2snake(re.sub(rf'{cls_name}$', '', self.__class__.__name__) or cls_name.lower())
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default='learn'
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions: looping through the data, we
- compute the output of the model from the input
- calculate a loss between this output and the desired target
- compute the gradients of this loss with respect to all the model parameters
- update the parameters accordingly
- zero all the gradients

Any tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:
- `begin_fit`: called before doing anything, ideal for initial setup.
- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.
- `begin_train`: called at the beginning of the training part of an epoch.
- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes into the model (changing the input with techniques like mixup, for instance).
- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.
- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).
- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to make any change to the gradients before said update (gradient clipping, for instance).
- `after_step`: called after the step and before the gradients are zeroed.
- `after_batch`: called at the end of a batch, for any clean-up before the next one.
- `after_train`: called at the end of the training phase of an epoch.
- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.
- `after_validate`: called at the end of the validation part of an epoch.
- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.
- `after_fit`: called at the end of training, for final clean-up.
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
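#Added sketch: a minimal Callback implementing two of the events listed above; it could
#be passed to a Learner with `cbs=PrintLossCallback()`. The name is made up for illustration.
class PrintLossCallback(Callback):
    def begin_epoch(self): print(f"starting epoch {self.epoch}")
    def after_batch(self):
        if self.training: print(f"iter {self.iter}: loss {self.loss.item():.4f}")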
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that it only works to get the value of the attribute; if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False): store_attr(self, "with_input,with_loss")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs=[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow

It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads, for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop.

This is made possible by raising specific exceptions the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
CancelFitException="Skip the rest of this batch and go to `after_batch`",
CancelEpochException="Skip the rest of the training part of the epoch and go to `after_train`",
CancelTrainException="Skip the rest of the validation part of the epoch and go to `after_validate`",
CancelValidException="Skip the rest of this epoch and go to `after_epoch`",
CancelBatchException="Interrupts training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
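#Added sketch: a Callback using one of these exceptions. Raising CancelBatchException
#from `after_loss` makes the training loop skip backward/step for that batch and jump
#to `after_batch`. The class name is made up for illustration.
class SkipNonFiniteLossCallback(Callback):
    def after_loss(self):
        if not torch.isfinite(self.loss): raise CancelBatchException()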
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:
- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`
- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_inference = [event.begin_fit, event.begin_epoch, event.begin_validate]
_after_inference = [event.after_validate, event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
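#Added illustration: `replacing_yield` is a generator, so it is wrapped with
#`contextmanager` where it is used (see `Learner.no_logging` below).
class _TstObj: pass
_o = _TstObj(); _o.logger = print
with contextmanager(replacing_yield)(_o, 'logger', noop): test_eq(_o.logger, noop)
test_eq(_o.logger, print)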
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
state = torch.load(file)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
elif with_opt: warn("Saved filed doesn't contain an optimizer state.")
x = [(tensor([1]),),(tensor([2]),),(tensor([3]),)]
y = [(tensor([1]),tensor([1])),(tensor([2]),tensor([2])),(tensor([3]),tensor([3]))]
#export
def detuplify(x):
"If `x` is a tuple with one thing, extract it"
return x[0] if len(x)==1 else x
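#Added quick check of `detuplify` on one- and two-element tuples
test_eq(detuplify((1,)), 1)
test_eq(detuplify((1,2)), (1,2))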
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=SGD, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn")
self.training,self.logger,self.opt,self.cbs = False,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.metrics = L(metrics).map(mk_metric)
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(True ): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(False): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self):
try:
self.dl = self.dbunch.valid_dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
self.dl = self.dbunch.dls[ds_idx] if dl is None else dl
with self.added_cbs(cbs), self.no_logging():
self(_before_inference)
self.all_batches()
self(_after_inference)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, decoded=False, act=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
self.dl = self.dbunch.dls[ds_idx] if dl is None else dl
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(_before_inference)
self.all_batches()
self(_after_inference)
if act is None: act = getattr(self.loss_func, 'activation', noop)
preds = act(torch.cat(cb.preds))
        if decoded: preds = getattr(self.loss_func, 'decodes', noop)(preds)
res = (preds, detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_input: res = (tuple(torch.cat(o) for o in zip(*cb.inputs)),) + res
if with_loss: res = res + (torch.cat(cb.losses),)
return res
def predict(self, item):
dl = test_dl(self.dbunch, [item])
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
#TODO: if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
remove_cb="Add `cb` from the list of `Callback` and deregister `self` as their learner",
added_cbs="Context manage that temporarily adds `cbs`",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
get_preds="Get the predictions and targets on the `ds_idx`-th dbunchset, optionally `with_input` and `with_loss`",
predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
no_logging="Context manager to temporarily remove `logger`",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.

`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (with camel case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`.

`metrics` is an optional list of metrics that can be either functions or `Metric`s (see below).

Training loop
###Code
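#Added sketch (hypothetical helper): a custom `splitter` for `RegModel` returning two
#parameter groups, which is the shape `Learner` expects from the `splitter` argument
#(e.g. Learner(data, RegModel(), loss_func=MSELossFlat(), splitter=_ab_splitter)).
def _ab_splitter(m): return [[m.a], [m.b]]
test_eq(len(_ab_splitter(RegModel())), 2)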
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, true_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved.
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:
- `model`: the model used for training/validation
- `data`: the underlying `DataBunch`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current `DataLoader` used for iteration
- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.
- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)

The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:
- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0. to 1., the percentage of training iterations completed
- `training`: flag to indicate if we're in training mode or not

The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:
- `smooth_loss`: an exponentially-averaged version of the training loss

Control flow testing
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`; otherwise you'll need to implement the following methods.

> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
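#Added sketch (not part of the library): a Metric with global state that can't be
#expressed as a per-batch average, keeping its state on the CPU via `to_detach`.
class MaxAbsErrorMetric(Metric):
    def reset(self): self.max_err = tensor(0.)
    def accumulate(self, learn):
        self.max_err = torch.max(self.max_err, to_detach((learn.pred - learn.y).abs().max()))
    @property
    def value(self): return self.max_err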
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, *learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss.mean())*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean()), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder --
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.losses,self.values = [],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = L(self.smooth_loss) + (self._train_mets if self.training else self._valid_mets)
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets.map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5): plt.plot(self.losses[skip_start:])
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
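#Added illustration: the Recorder above stored one row of values for the single epoch,
#and one (smoothed loss, lr) pair per training iteration.
test_eq(len(learn.recorder.values), 1)
test_eq(len(learn.recorder.losses), len(learn.recorder.lrs))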
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
res = tensor(self.losses).mean()
self.log += [res, res] if self.train_metrics else [res]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,2.317863941192627,1.5089753866195679,1.5089753866195679,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds works with ds not evenly divisible by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, ())
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method from it- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: assert torch.allclose(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): assert torch.allclose(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: assert torch.allclose(end[i],init[i])
#bn was trained
for i in [2,3]: assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): assert torch.allclose(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
###Output
(#4) [0,13.395319938659668,11.900413513183594,00:00]
(#4) [0,11.211679458618164,9.968362808227539,00:00]
(#4) [0,9.423310279846191,8.347665786743164,00:00]
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 02_script.ipynb.
Converted 03_dataloader.ipynb.
Converted 04_transform.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 22_vision_learner.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_data(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(bs*n)
return TensorDataset(x, a*x + b + 0.1*torch.randn(bs*n))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export core
_camel_re1 = re.compile('(.)([A-Z][a-z]+)')
_camel_re2 = re.compile('([a-z0-9])([A-Z])')
def camel2snake(name):
s1 = re.sub(_camel_re1, r'\1_\2', name)
return re.sub(_camel_re2, r'\1_\2', s1).lower()
test_eq(camel2snake('ClassAreCamel'), 'class_are_camel')
#export
def class2attr(self, cls_name):
return camel2snake(re.sub(rf'{cls_name}$', '', self.__class__.__name__) or cls_name.lower())
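#A quick illustrative check of `class2attr`: it strips the given suffix from the
#class name and snake-cases what is left (the class below is a throwaway example).
class TstFooMetric(): pass
test_eq(class2attr(TstFooMetric(), 'Metric'), 'tst_foo')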
#export
@docs
class Callback():
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
def __call__(self, event_name): getattr(self, event_name, noop)()
def __repr__(self): return self.__class__.__name__
def __getattr__(self, k): return getattr(self.learn, k)
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
_docs=dict(__call__="Call `self.{event_name}` if it's defined",
__getattr__="Passthrough to get the attributes of `self.learn`")
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
###Code
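#A minimal sketch of a custom callback hooking into two of the events listed
#above; the class name and the printed messages are illustrative only.
class PrintEpochCallback(Callback):
    "Toy callback: announce the start and the end of every epoch"
    def begin_epoch(self): print(f"starting epoch {self.epoch}")
    def after_epoch(self): print(f"finished epoch {self.epoch}")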
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that this only works for reading the value of the attribute; if you want to change it, you have to access it manually with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0"
self.learn.train_iter,self.learn.pct_train = 0,0.
def begin_batch(self):
"On the first batch, put the model on the right device"
if self.learn.train_iter == 0: self.model.to(find_device(self.xb))
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.begin_batch)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_loss=False): self.with_loss = with_loss
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions that the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
CancelFitException="Interrupts training and go to `after_fit`",
CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
CancelBatchException="Skip the rest of this batch and go to `after_batch`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
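#A sketch of how these exceptions are meant to be used: raising one from a
#callback event short-circuits the corresponding part of the loop. The callback
#below, which skips validation entirely (as an LR finder would), is illustrative only.
class SkipValidationCallback(Callback):
    "Toy callback: cancel the validation phase of every epoch"
    def begin_validate(self): raise CancelValidException()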
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
###Code
# export
_events = 'begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit'.split()
mk_class('event', **{o:o for o in _events},
doc="All possible events as attributes to get tab-completion and typo-proofing")
show_doc(event, name='event', title_level=3)
event.after_backward
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Helper functions to grab certain parameters
###Code
# export core
def trainable_params(m):
"Return all trainable parameters of `m`"
return [p for p in m.parameters() if p.requires_grad]
m = nn.Linear(4,5)
test_eq(trainable_params(m), [m.weight, m.bias])
m.weight.requires_grad_(False)
test_eq(trainable_params(m), [m.bias])
#export core
def bn_bias_params(m):
"Return all bias and BatchNorm parameters"
if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
return list(m.parameters())
res = sum([bn_bias_params(c) for c in m.children()], [])
if hasattr(m, 'bias'): res.append(m.bias)
return res
model = nn.Sequential(nn.Linear(10,20), nn.BatchNorm1d(20), nn.Conv1d(3,4, 3))
test_eq(bn_bias_params(model), [model[0].bias, model[1].weight, model[1].bias, model[2].bias])
model = SequentialEx(nn.Linear(10,20), nn.Sequential(nn.BatchNorm1d(20), nn.Conv1d(3,4, 3)))
test_eq(bn_bias_params(model), [model[0].bias, model[1][0].weight, model[1][0].bias, model[1][1].bias])
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.callbacks = [TrainEvalCallback]
class Learner():
"Group together a `model`, some `data` and a `loss_func` to handle training"
def __init__(self, model, data, loss_func, opt_func=SGD, lr=1e-2, splitter=trainable_params,
cbs=None, cb_funcs=None, metrics=None, path=None, wd_bn_bias=False):
self.model,self.data,self.loss_func = model,data,loss_func
self.opt_func,self.lr,self.splitter,self.wd_bn_bias = opt_func,lr,splitter,wd_bn_bias
self.path = path if path is not None else getattr(data, 'path', Path('.'))
self.metrics = [m if isinstance(m, Metric) else AvgMetric(m) for m in L(metrics)]
self.training,self.logger,self.opt = False,print,None
self.cbs = L([])
self.add_cbs(cbf() for cbf in L(defaults.callbacks))
self.add_cbs(cbs)
self.add_cbs(cbf() for cbf in L(cb_funcs))
def add_cbs(self, cbs):
"Add `cbs` to the list of `Callback` and register `self` as their learner"
for cb in L(cbs): self.add_cb(cb)
def add_cb(self, cb):
"Add `cb` to the list of `Callback` and register `self` as their learner"
if getattr(self, cb.name, None):
error = f"There is another object registered in self.{cb.name}, pick a new name."
assert isinstance(getattr(self, cb.name), cb.__class__), error
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
def remove_cbs(self, cbs):
"Remove `cbs` from the list of `Callback` and deregister `self` as their learner"
for cb in L(cbs): self.remove_cb(cb)
def remove_cb(self, cb):
"Remove `cb` from the list of `Callback` and deregister `self` as its learner"
cb.learn = None
setattr(self, cb.name, None)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def create_opt(self, lr=None):
opt = self.opt_func(self.splitter(self.model), lr=self.lr if lr is None else lr)
if not self.wd_bn_bias:
for p in bn_bias_params(self.model):
p_state = opt.state.get(p, {})
p_state['do_wd'] = False
opt.state[p] = p_state
return opt
def one_batch(self, xb, yb, i=None):
"Train or evaluate `self.model` on batch `(xb,yb)`"
try:
if i is not None: self.iter = i
self.xb,self.yb = xb,yb; self('begin_batch')
self.pred = self.model(self.xb); self('after_pred')
self.loss = self.loss_func(self.pred, self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def all_batches(self):
"Train or evaluate `self.model` on all batches of `self.dl`"
self.n_iter = len(self.dl)
for i,(xb,yb) in enumerate(self.dl): self.one_batch(xb, yb, i)
def _do_begin_fit(self, n_epoch):
"Prepare everything for training `n_epoch` epochs"
self.n_epoch,self.loss = n_epoch,tensor(0.)
self('begin_fit')
def _do_epoch_train(self):
"Execute the training part of the `epoch`-th epoch"
self.dl = self.data.train_dl
try:
self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self):
"Execute the validation part of an epoch"
try:
self.dl = self.data.valid_dl
self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`."
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.opt = self.create_opt(lr=lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, dl=None, cbs=None):
"Validate on `dl` with potential new `cbs`."
self.dl = dl or self.data.valid_dl
with self.added_cbs(cbs), self.no_logging():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, with_loss=False):
"Get the predictions and targets on the `ds_idx`-th dataset, optionally `with_loss`"
self.dl = self.data.dls[ds_idx]
cb = GatherPredsCallback(with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
if with_loss: return (torch.cat(cb.preds),torch.cat(cb.targets),torch.cat(cb.losses))
res = (torch.cat(cb.preds),torch.cat(cb.targets))
return res
def __call__(self, event_name):
"Call `event_name` (one or a list) for all callbacks"
for e in L(event_name): self._call_one(e)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
@contextmanager
def no_logging(self):
"Context manager to temporarily remove `logger`"
old_logger = self.logger
self.logger = noop
yield
self.logger = old_logger
@contextmanager
def loss_not_reduced(self):
"A context manager to evaluate `loss_func` with reduction set to none."
if hasattr(self.loss_func, 'reduction'):
self.old_red = self.loss_func.reduction
self.loss_func.reduction = 'none'
yield
self.loss_func.reduction = self.old_red
else:
old_loss_func = self.loss_func
self.loss_func = partial(self.loss_func, reduction='none')
yield
self.loss_func = old_loss_func
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model. `cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (with its name in snake_case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`. `metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below).
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, **kwargs):
return Learner(RegModel(), synth_data(n_train=n_train,n_valid=n_valid, cuda=cuda), MSELossFlat(), **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
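#A small sketch of creating a `Learner` directly with a custom `splitter` that
#returns two parameter groups; `two_group_splitter` is an illustrative helper,
#not part of the library.
def two_group_splitter(m): return [[m.a], [m.b]]
learn_tst = Learner(RegModel(), synth_data(), MSELossFlat(), splitter=two_group_splitter, lr=0.1)
test_eq(len(learn_tst.splitter(learn_tst.model)), 2)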
###Output
_____no_output_____
###Markdown
Training loop
###Code
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback)
xb,yb = learn.data.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
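#A sketch of a one-off callback passed just for this call to `fit`, together with
#`reset_opt=True` to re-create the optimizer at a new learning rate; the callback
#class, the lr and the epoch count below are illustrative only.
class PrintFitCallback(Callback):
    def begin_fit(self): print(f"fitting for {self.n_epoch} epoch(s)")
learn.fit(1, lr=1e-3, cbs=PrintFitCallback(), reset_opt=True)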
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, true_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, self.xb)
test_eq(self.save_yb, self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.xb + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.xb * (self.pred.data - self.yb)).mean()
self.grad_b = 2 * (self.pred.data - self.yb).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
xb,yb = learn.data.one_batch()
learn = synth_learner(cbs=TestOneBatch(xb, yb, 42))
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.data.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.data.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert learn.test_train_eval is None
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `data`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `xb`: last input drawn from `self.dl` (potentially modified by callbacks)- `yb`: last target drawn from `self.dl` (potentially modified by callbacks)- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`) The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or not. The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = {'reset': "Reset inner state to prepare for new computation",
'name': "Name of the `Metric`, camel-cased and with Metric removed",
'accumulate': "Use `learn` to update the state with new results",
'value': "The value of the metric"}
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
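#A sketch of a metric that cannot be expressed as a simple average over batches
#and therefore implements the `Metric` interface directly; the class below
#(largest absolute error seen over an epoch) is illustrative only.
class MaxAbsErrorMetric(Metric):
    "Toy metric: maximum absolute error over all batches of an epoch"
    def reset(self): self.max_err = 0.
    def accumulate(self, learn): self.max_err = max(self.max_err, to_detach((learn.pred - learn.yb).abs().max()).item())
    @property
    def value(self): return self.max_err
test_eq(MaxAbsErrorMetric().name, 'max_abs_error')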
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],u[i:i+25]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],u[splits[i]:splits[i+1]]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss)*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder -
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
return t.item() if t.numel()==1 else t
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.losses,self.values = [],[],[]
names = [m.name for m in self._valid_mets]
if self.train_metrics: names = [f'train_{n}' for n in names] + [f'valid_{n}' for n in names]
else: names = ['train_loss', 'valid_loss'] + names[1:]
if self.add_time: names.append('time')
self.metric_names = ['epoch']+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and record lr and smooth loss in training"
mets = [self.smooth_loss] + self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = [getattr(self, 'epoch', 0)]
def begin_train (self): [m.reset() for m in self._train_mets]
def after_train (self): self.log += [_maybe_item(m.value) for m in self._train_mets]
def begin_validate(self): [m.reset() for m in self._valid_mets]
def after_validate(self): self.log += [_maybe_item(m.value) for m in self._valid_mets]
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return []
return [self.loss] + (self.metrics if self.train_metrics else [])
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return []
return [self.loss] + self.metrics
def plot_loss(self): plt.plot(self.losses)
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
res = tensor(self.losses).mean()
self.log += [res, res] if self.train_metrics else [res]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
###Output
_____no_output_____
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss()
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.data.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(learn.data.train_dl)
test_eq(res[0], res[1])
x,y = learn.data.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.data.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
> Warning: If your dataset is unlabelled, the targets will all be 0s.> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.data.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
learn.data.dls += (dl,)
preds,targs = learn.get_preds(ds_idx=2)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(ds_idx=2, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_dataloader.ipynb.
Converted 01a_script.ipynb.
Converted 02_transforms.ipynb.
Converted 03_pipeline.ipynb.
Converted 04_data_external.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_test_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block.ipynb.
Converted 60_vision_models_xresnet.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_synth_learner.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(int(bs*n))
return TensorDataset(x, a*x + b + 0.1*torch.randn(int(bs*n)))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn = 'learn',None
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
getattr(self, event_name, noop)()
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that this shortcut only works for reading an attribute; if you want to change it, you have to access it explicitly with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dbunch.device)
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False): store_attr(self, "with_input,with_loss")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs=[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss:
bs = find_bs(self.yb)
loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
self.losses.append(to_detach(loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow Sometimes we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads, for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions that the training loop will look for (and properly catch); a short sketch follows the definitions below.
###Code
#export
_ex_docs = dict(
    CancelBatchException="Skip the rest of this batch and go to `after_batch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelFitException="Interrupt training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
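###Markdown
For example, a callback that skips the validation phase entirely (much like an LR finder needs to do) is a one-line sketch. `SkipValidationCallback` is illustrative, not part of the library:
###Code
class SkipValidationCallback(Callback):
    "Toy callback: cancel the validation phase of every epoch"
    def begin_validate(self): raise CancelValidException()
###Output
_____no_output_____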
###Markdown
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_train`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_validate`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
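As a sketch of how an exception pairs with its `after_cancel_*` event (the class name and threshold here are made up; fastai's real early-stopping callback lives in the tracker callbacks and is more careful):
###Code
class ToyEarlyStoppingCallback(Callback):
    "Toy callback: interrupt training when the last loss falls below an arbitrary threshold"
    def __init__(self, thresh=0.1): self.thresh = thresh
    def after_epoch(self):
        if self.loss < self.thresh: raise CancelFitException()
    def after_cancel_fit(self): print("training interrupted early")
###Output
_____no_output_____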
###Code
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
_before_epoch = [event.begin_fit, event.begin_epoch]
_after_epoch = [event.after_epoch, event.after_fit]
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.lr = slice(3e-3)
defaults.wd = 1e-2
defaults.callbacks = [TrainEvalCallback]
# export
def replacing_yield(o, attr, val):
"Context manager to temporarily replace an attribute"
old = getattr(o,attr)
try: yield setattr(o,attr,val)
finally: setattr(o,attr,old)
#export
def mk_metric(m):
"Convert `m` to an `AvgMetric`, unless it's already a `Metric`"
return m if isinstance(m, Metric) else AvgMetric(m)
#export
def save_model(file, model, opt, with_opt=True):
"Save `model` to `file` along with `opt` (if available, and if `with_opt`)"
if opt is None: with_opt=False
state = get_model(model).state_dict()
if with_opt: state = {'model': state, 'opt':opt.state_dict()}
torch.save(state, file)
# export
def load_model(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
if isinstance(device, int): device = torch.device('cuda', device)
elif device is None: device = 'cpu'
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(model_state, strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
    elif with_opt: warn("Saved file doesn't contain an optimizer state.")
# export
def _try_concat(o):
try:
return torch.cat(o)
except:
return sum([L(o_[i,:] for i in range_of(o_)) for o_ in o], L())
# export
class Learner():
def __init__(self, dbunch, model, loss_func=None, opt_func=Adam, lr=defaults.lr, splitter=trainable_params, cbs=None,
cb_funcs=None, metrics=None, path=None, model_dir='models', wd_bn_bias=False, train_bn=True):
store_attr(self, "dbunch,model,opt_func,lr,splitter,model_dir,wd_bn_bias,train_bn,metrics")
self.training,self.logger,self.opt,self.cbs = False,print,None,L()
#TODO: infer loss_func from data
if loss_func is None:
loss_func = getattr(dbunch.train_ds, 'loss_func', None)
assert loss_func is not None, "Could not infer loss function from the data, please pass a loss function."
self.loss_func = loss_func
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.add_cbs(cbf() for cbf in L(defaults.callbacks)+L(cb_funcs))
self.add_cbs(cbs)
self.model.to(self.dbunch.device)
@property
def metrics(self): return self._metrics
@metrics.setter
def metrics(self,v): self._metrics = L(v).map(mk_metric)
def add_cbs(self, cbs): L(cbs).map(self.add_cb)
def remove_cbs(self, cbs): L(cbs).map(self.remove_cb)
def add_cb(self, cb):
old = getattr(self, cb.name, None)
assert not old or isinstance(old, type(cb)), f"self.{cb.name} already registered"
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
return self
def remove_cb(self, cb):
cb.learn = None
if hasattr(self, cb.name): delattr(self, cb.name)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def ordered_cbs(self, cb_func:str): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
def __call__(self, event_name): L(event_name).map(self._call_one)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)
def create_opt(self):
self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
if not self.wd_bn_bias:
for p in self._bn_bias_state(False): p['do_wd'] = False
if self.train_bn:
for p in self._bn_bias_state(True ): p['force_train'] = True
def _split(self, b):
i = getattr(self.dbunch, 'n_inp', 1 if len(b)==1 else len(b)-1)
self.xb,self.yb = b[:i],b[i:]
def all_batches(self):
self.n_iter = len(self.dl)
for o in enumerate(self.dl): self.one_batch(*o)
def one_batch(self, i, b):
self.iter = i
try:
self._split(b); self('begin_batch')
self.pred = self.model(*self.xb); self('after_pred')
if len(self.yb) == 0: return
self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def _do_begin_fit(self, n_epoch):
self.n_epoch,self.loss = n_epoch,tensor(0.); self('begin_fit')
def _do_epoch_train(self):
try:
self.dl = self.dbunch.train_dl; self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self, ds_idx=1, dl=None):
if dl is None: dl = self.dbunch.dls[ds_idx]
try:
self.dl = dl; self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, wd=defaults.wd, cbs=None, reset_opt=False):
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.create_opt()
self.opt.set_hypers(wd=wd, lr=self.lr if lr is None else lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, ds_idx=1, dl=None, cbs=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
if dl is None: dl = self.dbunch.dls[ds_idx]
with self.added_cbs(cbs), self.no_logging():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, dl=None, with_input=False, with_loss=False, with_decoded=False, act=None):
self.epoch,self.n_epoch,self.loss = 0,1,tensor(0.)
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(_before_epoch)
self._do_epoch_validate(ds_idx, dl)
self(_after_epoch)
if act is None: act = getattr(self.loss_func, 'activation', noop)
preds = act(torch.cat(cb.preds))
res = (preds, detuplify(tuple(torch.cat(o) for o in zip(*cb.targets))))
if with_decoded: res = res + (getattr(self.loss_func, 'decodes', noop)(preds),)
if with_input: res = (tuple(_try_concat(o) for o in zip(*cb.inputs)),) + res
if with_loss: res = res + (torch.cat(cb.losses),)
return res
def predict(self, item, rm_type_tfms=0):
dl = test_dl(self.dbunch, [item], rm_type_tfms=rm_type_tfms)
inp,preds,_ = self.get_preds(dl=dl, with_input=True)
dec_preds = getattr(self.loss_func, 'decodes', noop)(preds)
i = getattr(self.dbunch, 'n_inp', -1)
full_dec = self.dbunch.decode_batch((*inp,dec_preds))[0][i:]
return detuplify(full_dec),dec_preds[0],preds[0]
def show_results(self, ds_idx=0, dl=None, max_n=10, **kwargs):
if dl is None: dl = self.dbunch.dls[ds_idx]
b = dl.one_batch()
_,_,preds = self.get_preds(dl=[b], with_decoded=True)
self.dbunch.show_results(b, preds, max_n=max_n, **kwargs)
def show_training_loop(self):
loop = ['Start Fit', 'begin_fit', 'Start Epoch Loop', 'begin_epoch', 'Start Train', 'begin_train',
'Start Batch Loop', 'begin_batch', 'after_pred', 'after_loss', 'after_backward',
'after_step', 'after_cancel_batch', 'after_batch','End Batch Loop','End Train',
'after_cancel_train', 'after_train', 'Start Valid', 'begin_validate','Start Batch Loop',
'**CBs same as train batch**', 'End Batch Loop', 'End Valid', 'after_cancel_validate',
'after_validate', 'End Epoch Loop', 'after_cancel_epoch', 'after_epoch', 'End Fit',
'after_cancel_fit', 'after_fit']
indent = 0
for s in loop:
if s.startswith('Start'): print(f'{" "*indent}{s}'); indent += 2
elif s.startswith('End'): indent -= 2; print(f'{" "*indent}{s}')
else: print(f'{" "*indent} - {s:15}:', self.ordered_cbs(s))
@contextmanager
def no_logging(self): return replacing_yield(self, 'logger', noop)
@contextmanager
def loss_not_reduced(self):
if hasattr(self.loss_func, 'reduction'): return replacing_yield(self.loss_func, 'reduction', 'none')
else: return replacing_yield(self, 'loss_func', partial(self.loss_func, reduction='none'))
def save(self, file, with_opt=True):
if rank_distrib(): return # don't save if slave proc
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
save_model(file, self.model, getattr(self,'opt',None), with_opt)
def load(self, file, with_opt=None, device=None, strict=True):
if device is None: device = self.dbunch.device
if self.opt is None: self.create_opt()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model(file, self.model, self.opt, with_opt=with_opt, device=device, strict=strict)
return self
Learner.x,Learner.y = add_props(lambda i,x: detuplify((x.xb,x.yb)[i]))
#export
add_docs(Learner, "Group together a `model`, some `dbunch` and a `loss_func` to handle training",
add_cbs="Add `cbs` to the list of `Callback` and register `self` as their learner",
add_cb="Add `cb` to the list of `Callback` and register `self` as their learner",
remove_cbs="Remove `cbs` from the list of `Callback` and deregister `self` as their learner",
         remove_cb="Remove `cb` from the list of `Callback` and deregister `self` as their learner",
         added_cbs="Context manager that temporarily adds `cbs`",
ordered_cbs="Return a list of `Callback` for one step `cb_func` in the training loop",
create_opt="Create an optimizer with `lr`",
one_batch="Train or evaluate `self.model` on batch `(xb,yb)`",
all_batches="Train or evaluate `self.model` on all batches of `self.dl`",
fit="Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`.",
validate="Validate on `dl` with potential new `cbs`.",
         get_preds="Get the predictions and targets of the `ds_idx`-th dataloader of the `DataBunch`, or of `dl`, optionally `with_input` and `with_loss`",
         predict="Return the prediction on `item`, fully decoded, loss function decoded and probabilities",
         show_results="Show some predictions on the `ds_idx`-th dataloader of the `DataBunch` or on `dl`",
show_training_loop="Show each step in the training loop",
no_logging="Context manager to temporarily remove `logger`",
loss_not_reduced="A context manager to evaluate `loss_func` with reduction set to none.",
save="Save model and optimizer state (if `with_opt`) to `self.path/self.model_dir/file`",
load="Load model and optimizer state (if `with_opt`) from `self.path/self.model_dir/file` using `device`"
)
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function that takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner`, under its snake_cased class name with the `Callback` suffix removed. At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated with the `Learner`.`metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below). Training loop
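Before the training-loop tests, here is a quick sketch exercising the constructor arguments just described (the `mae` function and all values are arbitrary, picked only for illustration):
###Code
def mae(out, targ): return (out - targ).abs().mean()     # a plain function becomes an `AvgMetric`

dbunch = synth_dbunch()
learn = Learner(dbunch, RegModel(),
                loss_func=MSELossFlat(),        # explicit loss (otherwise inferred from the data)
                opt_func=partial(SGD, mom=0.9), # used to build the optimizer when `fit` is called
                lr=1e-2,
                metrics=mae,                    # a function or `Metric` (or a list of them)
                cb_funcs=TstCallback)           # called at init to build and register a `Callback`
assert isinstance(learn.tst, TstCallback)       # callbacks are registered under snake_cased names
test_eq(learn.metrics[0].name, 'mae')
###Output
_____no_output_____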
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, lr=defaults.lr, **kwargs):
data = synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda)
return Learner(data, RegModel(), loss_func=MSELossFlat(), lr=lr, **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback, lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, decouple_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1, lr=1e-2)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute the predictions, loss and gradients, update the model parameters and zero the gradients). In validation mode, it stops after the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, *self.xb)
test_eq(self.save_yb, *self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.x + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.x * (self.pred.data - self.y)).mean()
self.grad_b = 2 * (self.pred.data - self.y).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
b = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(*b, 42), lr=1e-2)
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(42, b), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Serializing
###Code
show_doc(Learner.save)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer.
###Code
show_doc(Learner.load)
###Output
_____no_output_____
###Markdown
`file` can be a `Path`, a `string` or a buffer. Use `device` to load the model/optimizer state on a device different from the one it was saved on.
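As a sketch of the buffer case, the lower-level `save_model`/`load_model` helpers defined above accept any file-like object (`Learner.save`/`Learner.load` add the path and extension handling on top of them):
###Code
import io
learn = synth_learner()
learn.fit(1)
buf = io.BytesIO()
save_model(buf, learn.model, learn.opt)    # torch.save writes model + optimizer state into the buffer
buf.seek(0)
learn1 = synth_learner()
learn1.create_opt()
load_model(buf, learn1.model, learn1.opt)  # torch.load reads it back
test_eq(learn.model.a, learn1.model.a)
###Output
_____no_output_____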
###Code
learn = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(1)
learn.save('tmp')
assert (Path.cwd()/'models/tmp.pth').exists()
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_eq(learn.opt.state_dict(), learn1.opt.state_dict())
learn.save('tmp1', with_opt=False)
learn1 = synth_learner(cb_funcs=TstCallback, opt_func=partial(SGD, mom=0.9))
learn1 = learn1.load('tmp1')
test_eq(learn.model.a, learn1.model.a)
test_eq(learn.model.b, learn1.model.b)
test_ne(learn.opt.state_dict(), learn1.opt.state_dict())
shutil.rmtree('models')
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert not getattr(learn,'test_train_eval',None)
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing
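Before the control-flow tests, a small illustration (the `InspectBatchCallback` name is made up, not part of the library) of a callback reading a few of these attributes:
###Code
class InspectBatchCallback(Callback):
    "Toy callback: print a few of the `Learner` attributes listed above"
    def after_batch(self):
        if not self.training: return
        print(f"epoch {self.epoch}/{self.n_epoch} iter {self.iter}/{self.n_iter} "
              f"pct_train {self.pct_train:.2f} loss {self.loss.item():.4f}")

learn = synth_learner(n_train=2)
learn.fit(1, cbs=InspectBatchCallback())
###Output
_____no_output_____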
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = dict(
reset="Reset inner state to prepare for new computation",
name="Name of the `Metric`, camel-cased and with Metric removed",
accumulate="Use `learn` to update the state with new results",
value="The value of the metric")
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
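To make this concrete, here is a toy metric (illustration only; the methods it overrides are documented just below) that cannot be written as a plain average, since it tracks the largest error seen over the whole validation set:
###Code
class MaxErrMetric(Metric):
    "Toy metric: largest absolute error over the whole validation set"
    def reset(self): self.max_err = tensor(0.)
    def accumulate(self, learn):
        err = (learn.pred - learn.yb[0]).abs().max()
        self.max_err = torch.max(self.max_err, to_detach(err))  # keep the state on the CPU (cf. the note above)
    @property
    def value(self): return self.max_err

learn = synth_learner()
m = MaxErrMetric(); m.reset()
t,u = torch.randn(100),torch.randn(100)
for i in range(0,100,25):
    learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
    m.accumulate(learn)
test_close(m.value, (t-u).abs().max())
###Output
_____no_output_____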
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
def _maybe_reduce(val):
if num_distrib()>1:
val = val.clone()
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= num_distrib()
return val
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(_maybe_reduce(self.func(learn.pred, *learn.yb)))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],(u[i:i+25],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],(u[splits[i]:splits[i+1]],)
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(_maybe_reduce(learn.loss.mean()))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss.mean()), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder -
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
t = t.value
return t.item() if isinstance(t, Tensor) and t.numel()==1 else t
#export
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.iters,self.losses,self.values = [],[],[],[]
names = self._valid_mets.attrgot('name')
if self.train_metrics: names = names.map('train_{}') + names.map('valid_{}')
else: names = L('train_loss', 'valid_loss') + names[1:]
if self.add_time: names.append('time')
self.metric_names = 'epoch'+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
if len(self.yb) == 0: return
mets = self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = L(getattr(self, 'epoch', 0))
def begin_train (self): self._train_mets[1:].map(Self.reset())
def begin_validate(self): self._valid_mets.map(Self.reset())
def after_train (self): self.log += self._train_mets.map(_maybe_item)
def after_validate(self): self.log += self._valid_mets.map(_maybe_item)
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
self.iters.append(self.smooth_loss.count)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return L()
return L(self.smooth_loss) + (self.metrics if self.train_metrics else L())
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return L()
return L(self.loss) + self.metrics
def plot_loss(self, skip_start=5, with_valid=True):
plt.plot(self.losses[skip_start:], label='train')
if with_valid:
plt.plot(self.iters, L(self.values).itemgot(1), label='valid')
plt.legend()
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
after_train = "Log loss and metric values on the training set (if `self.training_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses from `skip_start` and onward")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
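A small sketch of what the `Recorder` exposes after training, for a learner with no extra metrics (so the logged columns are just the losses and the time):
###Code
learn = synth_learner(n_train=5)
with learn.no_logging(): learn.fit(2)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'time'])
test_eq(len(learn.recorder.values), 2)       # one row of logged values per epoch
test_eq(len(learn.recorder.losses), 2*5)     # one smoothed training loss per batch
###Output
_____no_output_____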
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
if not self.training: test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
mean = tensor(self.losses).mean()
self.log += [self.smooth_loss, mean] if self.train_metrics else [self.smooth_loss]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
#hide
#Test numpy metric
def tst_metric_np(out, targ): return F.mse_loss(out, targ).numpy()
learn = synth_learner(n_train=5, metrics=tst_metric_np)
learn.fit(1)
###Output
(#5) [0,10.249631881713867,9.148826599121094,9.148827075958252,00:00]
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss(skip_start=1)
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(dl=learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
Depending on the `loss_func` attribute of `Learner`, an activation function will be picked automatically so that the predictions make sense. For instance if the loss is a case of cross-entropy, a softmax will be applied, or if the loss is binary cross entropy with logits, a sigmoid will be applied. If you want to make sure a certain activation function is applied, you can pass it with `act`. > Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
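A sketch of that note (`ToyL1Loss` is made up for the example, not a library class): a custom loss only needs a `reduction` attribute that `Learner.loss_not_reduced` can temporarily set to `'none'`, so that `with_loss=True` returns per-item losses.
###Code
class ToyL1Loss(Module):
    "Toy loss with a `reduction` attribute so that `get_preds(with_loss=True)` works"
    reduction = 'mean'
    def forward(self, out, targ): return F.l1_loss(out, targ, reduction=self.reduction)

learn = synth_learner(n_train=5)
learn.loss_func = ToyL1Loss()
preds,targs,losses = learn.get_preds(with_loss=True)
test_eq(losses.shape, targs.shape)           # one loss per item thanks to reduction='none'
###Output
_____no_output_____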
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
preds,targs = learn.get_preds(act = torch.sigmoid)
test_eq(targs, y)
test_close(preds, torch.sigmoid(learn.model(x)))
#Test get_preds work with ds not evenly dividble by bs
learn = synth_learner(n_train=2.5, metrics=tst_metric)
preds,targs = learn.get_preds(ds_idx=0)
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(dl=dl, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
#Test with inputs
inps,preds,targs = learn.get_preds(dl=dl, with_input=True)
test_eq(*inps,x)
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test with no target
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
dl = TfmdDL(TensorDataset(x), bs=16)
preds,targs = learn.get_preds(dl=dl)
assert targs is None
#hide
#Test with targets that are tuples
def _fake_loss(x,y,z,reduction=None): return F.mse_loss(x,y)
learn = synth_learner(n_train=5)
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.dbunch.n_inp=1
learn.loss_func = _fake_loss
dl = TfmdDL(TensorDataset(x, y, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_eq(targs, [y,y])
#hide
#Test with inputs that are tuples
class _TupleModel(Module):
def __init__(self, model): self.model=model
def forward(self, x1, x2): return self.model(x1)
learn = synth_learner(n_train=5)
#learn.dbunch.n_inp=2
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
learn.model = _TupleModel(learn.model)
learn.dbunch = DataBunch(TfmdDL(TensorDataset(x, x, y), bs=16),TfmdDL(TensorDataset(x, x, y), bs=16))
inps,preds,targs = learn.get_preds(ds_idx=0, with_input=True)
test_eq(inps, [x,x])
#hide
#Test auto activation function is picked
learn = synth_learner(n_train=5)
learn.loss_func = BCEWithLogitsLossFlat()
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
preds,targs = learn.get_preds(dl=dl)
test_close(preds, torch.sigmoid(learn.model(x)))
show_doc(Learner.predict)
###Output
_____no_output_____
###Markdown
It returns a tuple of three elements with, in reverse order,- the prediction from the model, potentially passed through the activation of the loss function (if it has one)- the decoded prediction, using the potential `decodes` method of the loss function- the fully decoded prediction, using the transforms used to build the `DataSource`/`DataBunch`
###Code
class _FakeLossFunc(Module):
reduction = 'none'
def forward(self, x, y): return F.mse_loss(x,y)
def activation(self, x): return x+1
def decodes(self, x): return 2*x
class _Add1(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
learn = synth_learner(n_train=5)
dl = TfmdDL(DataSource(torch.arange(50), tfms = [L(), [_Add1()]]))
learn.dbunch = DataBunch(dl, dl)
learn.loss_func = _FakeLossFunc()
inp = tensor([2.])
out = learn.model(inp).detach()+1 #applying model + activation
dec = 2*out #decodes from loss function
full_dec = dec-1 #decodes from _Add1
test_eq(learn.predict(tensor([2.])), [full_dec, dec, out])
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
#export
@patch
def freeze_to(self:Learner, n):
if self.opt is None: self.create_opt()
self.opt.freeze_to(n)
@patch
def freeze(self:Learner): self.freeze_to(-1)
@patch
def unfreeze(self:Learner): self.freeze_to(0)
add_docs(Learner,
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model")
#hide
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
if p.requires_grad: p.grad = torch.ones_like(p.data)
def _splitter(m): return [list(m.tst[0].parameters()), list(m.tst[1].parameters()), [m.a,m.b]]
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained even frozen since `train_bn=True` by default
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
#hide
learn = synth_learner(n_train=5, opt_func = partial(SGD), cb_funcs=_PutGrad, splitter=_splitter, train_bn=False, lr=1e-2)
learn.model = _TstModel()
learn.freeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were not trained
for i in range(4): test_close(end[i],init[i])
learn.freeze_to(-2)
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear was not trained
for i in [0,1]: test_close(end[i],init[i])
#bn was trained
for i in [2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
learn.unfreeze()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
#linear and bn were trained
for i in range(4): test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]), 1e-3)
###Output
(#4) [0,30.35057258605957,27.175193786621094,00:00]
(#4) [0,23.77756690979004,21.27766227722168,00:00]
(#4) [0,18.555871963500977,16.66706085205078,00:00]
###Markdown
Exporting a `Learner`
###Code
#export
@patch
def export(self:Learner, fname='export.pkl'):
"Export the content of `self` without the items and the optimizer state for inference"
if rank_distrib(): return # don't export if slave proc
old_dbunch = self.dbunch
    self.dbunch = self.dbunch.new_empty()
state = self.opt.state_dict()
self.opt = None
with warnings.catch_warnings():
        #To avoid the warning that comes from PyTorch about the model not being checked
warnings.simplefilter("ignore")
torch.save(self, open(self.path/fname, 'wb'))
self.create_opt()
self.opt.load_state_dict(state)
self.dbunch = old_dbunch
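# Usage sketch, kept as comments: `DataBunch.new_empty` must be supported by the underlying data,
# which may not hold for the synthetic TensorDataset-based data used in this notebook.
# learn.export('export.pkl')                     # pickles the Learner, minus the data and opt state
# learn1 = torch.load(learn.path/'export.pkl')   # an exported Learner is a plain torch pickle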
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Learner> Basic class for handling the training loop We'll use the following for testing purposes (a basic linear regression problem):
###Code
from torch.utils.data import TensorDataset
def synth_dbunch(a=2, b=3, bs=16, n_train=10, n_valid=2, cuda=False):
def get_data(n):
x = torch.randn(bs*n)
return TensorDataset(x, a*x + b + 0.1*torch.randn(bs*n))
train_ds = get_data(n_train)
valid_ds = get_data(n_valid)
tfms = Cuda() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, after_batch=tfms, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, after_batch=tfms, num_workers=0)
return DataBunch(train_dl, valid_dl)
class RegModel(Module):
def __init__(self): self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
def forward(self, x): return x*self.a + self.b
###Output
_____no_output_____
###Markdown
Callback -
###Code
#export core
_camel_re1 = re.compile('(.)([A-Z][a-z]+)')
_camel_re2 = re.compile('([a-z0-9])([A-Z])')
def camel2snake(name):
s1 = re.sub(_camel_re1, r'\1_\2', name)
return re.sub(_camel_re2, r'\1_\2', s1).lower()
test_eq(camel2snake('ClassAreCamel'), 'class_are_camel')
#export
def class2attr(self, cls_name):
return camel2snake(re.sub(rf'{cls_name}$', '', self.__class__.__name__) or cls_name.lower())
#export
@docs
class Callback():
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
def __call__(self, event_name): getattr(self, event_name, noop)()
def __repr__(self): return self.__class__.__name__
def __getattr__(self, k):
if k=='learn': raise AttributeError
if not hasattr(self,'learn'): raise AttributeError
return getattr(self.learn, k)
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
_docs=dict(__call__="Call `self.{event_name}` if it's defined",
__getattr__="Passthrough to get the attributes of `self.learn`")
###Output
_____no_output_____
###Markdown
The training loop is defined in `Learner` a bit below and consists in a minimal set of instructions: looping through the data we:- compute the output of the model from the input- calculate a loss between this output and the desired target- compute the gradients of this loss with respect to all the model parameters- update the parameters accordingly- zero all the gradientsAny tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:- `begin_fit`: called before doing anything, ideal for initial setup.- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.- `begin_train`: called at the beginning of the training part of an epoch.- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).- `after_step`: called after the step and before the gradients are zeroed.- `after_batch`: called at the end of a batch, for any clean-up before the next one.- `after_train`: called at the end of the training phase of an epoch.- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.- `after_validate`: called at the end of the validation part of an epoch.- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.- `after_fit`: called at the end of training, for final clean-up.
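As a quick illustration (a minimal sketch, not part of the original notebook), a callback only needs to define methods named after the events it wants to hook; for example, timing each epoch:
###Code
#A sketch only: time each epoch by hooking two of the events listed above
import time
class EpochTimerCallback(Callback):
    def begin_epoch(self): self.epoch_start = time.time()
    def after_epoch(self): print(f'epoch {self.epoch} took {time.time()-self.epoch_start:.2f}s')
###Output
_____no_output_____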
###Code
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
###Output
_____no_output_____
###Markdown
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
###Code
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
###Output
_____no_output_____
###Markdown
Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2:
###Code
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
###Output
_____no_output_____
###Markdown
A proper version needs to write `self.learn.a = self.a + 1`:
###Code
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
###Output
_____no_output_____
###Markdown
TrainEvalCallback -
###Code
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
def begin_fit(self):
"Set the iter and epoch counters to 0"
self.learn.train_iter,self.learn.pct_train = 0,0.
def begin_batch(self):
"On the first batch, put the model on the right device"
if self.learn.train_iter == 0: self.model.to(find_device(self.xb))
def after_batch(self):
"Update the iter counter (in training mode)"
if not self.training: return
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
###Output
_____no_output_____
###Markdown
This `Callback` is automatically added in every `Learner` at initialization.
###Code
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.begin_batch)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
###Output
_____no_output_____
###Markdown
GatherPredsCallback -
###Code
#export
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_loss=False): self.with_loss = with_loss
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
self.preds.append(to_detach(self.pred))
self.targets.append(to_detach(self.yb))
if self.with_loss: self.losses.append(to_detach(self.loss))
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
###Output
_____no_output_____
###Markdown
Callbacks control flow It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with an early-stopping strategy, we want to be able to completely interrupt the training loop. This is made possible by raising specific exceptions that the training loop will look for (and properly catch).
###Code
#export
_ex_docs = dict(
    CancelBatchException="Skip the rest of this batch and go to `after_batch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelFitException="Interrupts training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
###Output
_____no_output_____
###Markdown
You can detect that one of those exceptions occurred and add code that executes right after with the following events:- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`- `after_cancel_validate`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
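For instance, here is a minimal sketch (not the library's own gradient-accumulation callback) of how one of these exceptions could be used: raising `CancelBatchException` after the backward pass skips the optimizer step and the zeroing of the gradients, so gradients keep accumulating across several batches.
###Code
#A sketch only: accumulate gradients over `n_acc` batches by skipping most optimizer steps
class GradAccumSketch(Callback):
    def __init__(self, n_acc=2): self.n_acc = n_acc
    def after_backward(self):
        #Skip `after_step`/`zero_grad` on all but every `n_acc`-th training iteration;
        #`after_cancel_batch` and `after_batch` are still called by the training loop.
        if (self.train_iter+1) % self.n_acc != 0: raise CancelBatchException()
###Output
_____no_output_____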
###Code
# export
_events = 'begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit'.split()
mk_class('event', **{o:o for o in _events},
doc="All possible events as attributes to get tab-completion and typo-proofing")
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
event.after_backward
###Output
_____no_output_____
###Markdown
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
###Code
#hide
#Full test of the control flow below, after the Learner class
###Output
_____no_output_____
###Markdown
Learner -
###Code
# export
defaults.callbacks = [TrainEvalCallback]
class Learner():
"Group together a `model`, some `dbunch` and a `loss_func` to handle training"
def __init__(self, model, dbunch, loss_func, opt_func=SGD, lr=1e-2, splitter=trainable_params,
cbs=None, cb_funcs=None, metrics=None, path=None, wd_bn_bias=False):
self.model,self.dbunch,self.loss_func = model,dbunch,loss_func
self.opt_func,self.lr,self.splitter,self.wd_bn_bias = opt_func,lr,splitter,wd_bn_bias
self.path = path if path is not None else getattr(dbunch, 'path', Path('.'))
self.metrics = [m if isinstance(m, Metric) else AvgMetric(m) for m in L(metrics)]
self.training,self.logger,self.opt = False,print,None
self.cbs = L([])
self.add_cbs(cbf() for cbf in L(defaults.callbacks))
self.add_cbs(cbs)
self.add_cbs(cbf() for cbf in L(cb_funcs))
def add_cbs(self, cbs):
"Add `cbs` to the list of `Callback` and register `self` as their learner"
for cb in L(cbs): self.add_cb(cb)
def add_cb(self, cb):
"Add `cb` to the list of `Callback` and register `self` as their learner"
if getattr(self, cb.name, None):
error = f"There is another object registered in self.{cb.name}, pick a new name."
assert isinstance(getattr(self, cb.name), cb.__class__), error
cb.learn = self
setattr(self, cb.name, cb)
self.cbs.append(cb)
def remove_cbs(self, cbs):
"Remove `cbs` from the list of `Callback` and deregister `self` as their learner"
for cb in L(cbs): self.remove_cb(cb)
    def remove_cb(self, cb):
        "Remove `cb` from the list of `Callback` and deregister `self` as their learner"
cb.learn = None
setattr(self, cb.name, None)
if cb in self.cbs: self.cbs.remove(cb)
@contextmanager
def added_cbs(self, cbs):
self.add_cbs(cbs)
yield
self.remove_cbs(cbs)
def create_opt(self, lr=None):
opt = self.opt_func(self.splitter(self.model), lr=self.lr if lr is None else lr)
if not self.wd_bn_bias:
for p in bn_bias_params(self.model):
p_state = opt.state.get(p, {})
p_state['do_wd'] = False
opt.state[p] = p_state
return opt
def one_batch(self, xb, yb, i=None):
"Train or evaluate `self.model` on batch `(xb,yb)`"
try:
if i is not None: self.iter = i
self.xb,self.yb = xb,yb; self('begin_batch')
self.pred = self.model(self.xb); self('after_pred')
self.loss = self.loss_func(self.pred, self.yb); self('after_loss')
if not self.training: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def all_batches(self):
"Train or evaluate `self.model` on all batches of `self.dl`"
self.n_iter = len(self.dl)
for i,(xb,yb) in enumerate(self.dl): self.one_batch(xb, yb, i)
    def _do_begin_fit(self, n_epoch):
        "Prepare everything for training `n_epoch` epochs"
self.n_epoch,self.loss = n_epoch,tensor(0.)
self('begin_fit')
def _do_epoch_train(self):
"Execute the training part of the `epoch`-th epoch"
self.dl = self.dbunch.train_dl
try:
self('begin_train')
self.all_batches()
except CancelTrainException: self('after_cancel_train')
finally: self('after_train')
def _do_epoch_validate(self):
"Execute the validation part of an epoch"
try:
self.dl = self.dbunch.valid_dl
self('begin_validate')
with torch.no_grad(): self.all_batches()
except CancelValidException: self('after_cancel_validate')
finally: self('after_validate')
def fit(self, n_epoch, lr=None, cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using `cbs`. Optionally `reset_opt`."
with self.added_cbs(cbs):
if reset_opt or not self.opt: self.opt = self.create_opt(lr=lr)
try:
self._do_begin_fit(n_epoch)
for epoch in range(n_epoch):
try:
self.epoch=epoch; self('begin_epoch')
self._do_epoch_train()
self._do_epoch_validate()
except CancelEpochException: self('after_cancel_epoch')
finally: self('after_epoch')
except CancelFitException: self('after_cancel_fit')
finally: self('after_fit')
def validate(self, dl=None, cbs=None):
"Validate on `dl` with potential new `cbs`."
self.dl = dl or self.dbunch.valid_dl
with self.added_cbs(cbs), self.no_logging():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
return self.recorder.values[-1]
def get_preds(self, ds_idx=1, with_loss=False):
"Get the predictions and targets on the `ds_idx`-th dbunchset, optionally `with_loss`"
self.dl = self.dbunch.dls[ds_idx]
cb = GatherPredsCallback(with_loss=with_loss)
with self.no_logging(), self.added_cbs(cb), self.loss_not_reduced():
self(['begin_fit', 'begin_epoch', 'begin_validate'])
self.all_batches()
self(['after_validate', 'after_epoch', 'after_fit'])
if with_loss: return (torch.cat(cb.preds),torch.cat(cb.targets),torch.cat(cb.losses))
res = (torch.cat(cb.preds),torch.cat(cb.targets))
return res
def __call__(self, event_name):
"Call `event_name` (one or a list) for all callbacks"
for e in L(event_name): self._call_one(e)
def _call_one(self, event_name):
assert hasattr(event, event_name)
[cb(event_name) for cb in sort_by_run(self.cbs)]
@contextmanager
def no_logging(self):
"Context manager to temporarily remove `logger`"
old_logger = self.logger
self.logger = noop
yield
self.logger = old_logger
@contextmanager
def loss_not_reduced(self):
"A context manager to evaluate `loss_func` with reduction set to none."
if hasattr(self.loss_func, 'reduction'):
self.old_red = self.loss_func.reduction
self.loss_func.reduction = 'none'
yield
self.loss_func.reduction = self.old_red
else:
old_loss_func = self.loss_func
self.loss_func = partial(self.loss_func, reduction='none')
yield
self.loss_func = old_loss_func
###Output
_____no_output_____
###Markdown
`opt_func` will be used to create an optimizer when `Learner.fit` is called, with `lr` as a learning rate. `splitter` is a function taht takes `self.model` and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). The default is `trainable_params`, which returns all trainable parameters of the model.`cbs` is one or a list of `Callback`s to pass to the `Learner`, and `cb_funcs` is one or a list of functions returning a `Callback` that will be called at init. Each `Callback` is registered as an attribute of `Learner` (with camel case). At creation, all the callbacks in `defaults.callbacks` (`TrainEvalCallback` and `Recorder`) are associated to the `Learner`.`metrics` is an optional list of metrics, that can be either functions or `Metric`s (see below).
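As an example of a custom `splitter` (a sketch, not part of the original notebook), the function below splits `RegModel` into two parameter groups so they could later receive different hyper-parameters:
###Code
#A sketch only: a custom splitter returning two parameter groups
def ab_splitter(model): return [[model.a], [model.b]]
learn_split = Learner(RegModel(), synth_dbunch(), MSELossFlat(), splitter=ab_splitter)
###Output
_____no_output_____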
###Code
#Test init with callbacks
def synth_learner(n_train=10, n_valid=2, cuda=False, **kwargs):
return Learner(RegModel(), synth_dbunch(n_train=n_train,n_valid=n_valid, cuda=cuda), MSELossFlat(), **kwargs)
tst_learn = synth_learner()
test_eq(len(tst_learn.cbs), 1)
assert isinstance(tst_learn.cbs[0], TrainEvalCallback)
assert hasattr(tst_learn, ('train_eval'))
tst_learn = synth_learner(cbs=TstCallback())
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
tst_learn = synth_learner(cb_funcs=TstCallback)
test_eq(len(tst_learn.cbs), 2)
assert isinstance(tst_learn.cbs[1], TstCallback)
assert hasattr(tst_learn, ('tst'))
#A name that becomes an existing attribute of the Learner will throw an exception (here add_cb)
class AddCbCallback(Callback): pass
test_fail(lambda: synth_learner(cbs=AddCbCallback()))
###Output
_____no_output_____
###Markdown
Training loop
###Code
show_doc(Learner.fit)
#Training a few epochs should make the model better
learn = synth_learner(cb_funcs=TstCallback)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit(2)
assert learn.loss < init_loss
#hide
#Test of TrainEvalCallback
class TestTrainEvalCallback(Callback):
run_after=TrainEvalCallback
def begin_fit(self):
test_eq([self.pct_train,self.train_iter], [0., 0])
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_batch(self): test_eq(next(self.model.parameters()).device, find_device(self.xb))
def after_batch(self):
if self.training:
test_eq(self.pct_train , self.old_pct_train+1/(self.n_iter*self.n_epoch))
test_eq(self.train_iter, self.old_train_iter+1)
self.old_pct_train,self.old_train_iter = self.pct_train,self.train_iter
def begin_train(self):
assert self.training and self.model.training
test_eq(self.pct_train, self.epoch/self.n_epoch)
self.old_pct_train = self.pct_train
def begin_validate(self):
assert not self.training and not self.model.training
learn = synth_learner(cb_funcs=TestTrainEvalCallback)
learn.fit(1)
#Check order is properly taken into account
learn.cbs = L(reversed(learn.cbs))
#hide
#cuda
#Check model is put on the GPU if needed
learn = synth_learner(cb_funcs=TestTrainEvalCallback, cuda=True)
learn.fit(1)
#hide
#Check wd is not applied on bn/bias when option wd_bn_bias=False
class _TstModel(nn.Module):
def __init__(self):
super().__init__()
self.a,self.b = nn.Parameter(torch.randn(1)),nn.Parameter(torch.randn(1))
self.tst = nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(3))
self.tst[0].bias.data,self.tst[1].bias.data = torch.randn(5),torch.randn(3)
def forward(self, x): return x * self.a + self.b
class _PutGrad(Callback):
def after_backward(self):
for p in self.learn.model.tst.parameters():
p.grad = torch.ones_like(p.data)
learn = synth_learner(n_train=5, opt_func = partial(SGD, wd=1, true_wd=True), cb_funcs=_PutGrad)
learn.model = _TstModel()
init = [p.clone() for p in learn.model.tst.parameters()]
learn.fit(1)
end = list(learn.model.tst.parameters())
assert not torch.allclose(end[0]-init[0], -0.05 * torch.ones_like(end[0]))
for i in [1,2,3]: test_close(end[i]-init[i], -0.05 * torch.ones_like(end[i]))
show_doc(Learner.one_batch)
###Output
_____no_output_____
###Markdown
This is an internal method called by `Learner.fit`. If passed, `i` is the index of this iteration in the epoch. In training mode, this does a full training step on the batch (compute predictions, loss, gradients, update the model parameters and zero the gradients). In validation mode, it stops at the loss computation.
###Code
# export
class VerboseCallback(Callback):
"Callback that prints the name of each event called"
def __call__(self, event_name):
print(event_name)
super().__call__(event_name)
#hide
class TestOneBatch(VerboseCallback):
def __init__(self, xb, yb, i):
self.save_xb,self.save_yb,self.i = xb,yb,i
self.old_pred,self.old_loss = None,tensor(0.)
def begin_batch(self):
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_eq(self.iter, self.i)
test_eq(self.save_xb, self.xb)
test_eq(self.save_yb, self.yb)
if hasattr(self.learn, 'pred'): test_eq(self.pred, self.old_pred)
def after_pred(self):
self.old_pred = self.pred
test_eq(self.pred, self.model.a.data * self.xb + self.model.b.data)
test_eq(self.loss, self.old_loss)
def after_loss(self):
self.old_loss = self.loss
test_eq(self.loss, self.loss_func(self.old_pred, self.save_yb))
for p in self.model.parameters():
if not hasattr(p, 'grad') or p.grad is not None: test_eq(p.grad, tensor([0.]))
def after_backward(self):
self.grad_a = (2 * self.xb * (self.pred.data - self.yb)).mean()
self.grad_b = 2 * (self.pred.data - self.yb).mean()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
test_eq(self.model.a.data, self.old_a)
test_eq(self.model.b.data, self.old_b)
def after_step(self):
test_close(self.model.a.data, self.old_a - self.lr * self.grad_a)
test_close(self.model.b.data, self.old_b - self.lr * self.grad_b)
self.old_a,self.old_b = self.model.a.data.clone(),self.model.b.data.clone()
test_close(self.model.a.grad.data, self.grad_a)
test_close(self.model.b.grad.data, self.grad_b)
def after_batch(self):
for p in self.model.parameters(): test_eq(p.grad, tensor([0.]))
#hide
learn = synth_learner()
xb,yb = learn.dbunch.one_batch()
learn = synth_learner(cbs=TestOneBatch(xb, yb, 42))
#Remove train/eval
learn.cbs = learn.cbs[1:]
#Setup
learn.loss,learn.training = tensor(0.),True
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.model.train()
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events))
test_stdout(lambda: learn.one_batch(xb, yb, 42), '\n'.join(batch_events)) #Check it works for a second batch
show_doc(Learner.all_batches)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
with redirect_stdout(io.StringIO()):
learn._do_begin_fit(1)
learn.epoch,learn.dl = 0,learn.dbunch.train_dl
learn('begin_epoch')
learn('begin_train')
test_stdout(learn.all_batches, '\n'.join(batch_events * 5))
test_eq(learn.train_iter, 5)
valid_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
with redirect_stdout(io.StringIO()):
learn.dl = learn.dbunch.valid_dl
learn('begin_validate')
test_stdout(learn.all_batches, '\n'.join(valid_events * 2))
test_eq(learn.train_iter, 5)
#hide
learn = synth_learner(n_train=5, cbs=VerboseCallback())
test_stdout(lambda: learn._do_begin_fit(42), 'begin_fit')
test_eq(learn.n_epoch, 42)
test_eq(learn.loss, tensor(0.))
#hide
learn.opt = SGD(learn.model.parameters(), lr=learn.lr)
learn.epoch = 0
test_stdout(lambda: learn._do_epoch_train(), '\n'.join(['begin_train'] + batch_events * 5 + ['after_train']))
#hide
test_stdout(learn._do_epoch_validate, '\n'.join(['begin_validate'] + valid_events * 2+ ['after_validate']))
###Output
_____no_output_____
###Markdown
Callback handling
###Code
show_doc(Learner.__call__)
show_doc(Learner.add_cb)
learn = synth_learner()
learn.add_cb(TestTrainEvalCallback())
test_eq(len(learn.cbs), 2)
assert isinstance(learn.cbs[1], TestTrainEvalCallback)
test_eq(learn.train_eval.learn, learn)
show_doc(Learner.add_cbs)
learn.add_cbs([TestTrainEvalCallback(), TestTrainEvalCallback()])
test_eq(len(learn.cbs), 4)
show_doc(Learner.remove_cb)
cb = learn.cbs[1]
learn.remove_cb(learn.cbs[1])
test_eq(len(learn.cbs), 3)
assert cb.learn is None
assert learn.test_train_eval is None
show_doc(Learner.remove_cbs)
cb = learn.cbs[1]
learn.remove_cbs(learn.cbs[1:])
test_eq(len(learn.cbs), 1)
###Output
_____no_output_____
###Markdown
When writing a callback, the following attributes of `Learner` are available:- `model`: the model used for training/validation- `dbunch`: the underlying `DataBunch`- `loss_func`: the loss function used- `opt`: the optimizer used to update the model parameters- `opt_func`: the function used to create the optimizer- `cbs`: the list containing all `Callback`s- `dl`: current `DataLoader` used for iteration- `xb`: last input drawn from `self.dl` (potentially modified by callbacks)- `yb`: last target drawn from `self.dl` (potentially modified by callbacks)- `pred`: last predictions from `self.model` (potentially modified by callbacks)- `loss`: last computed loss (potentially modified by callbacks)- `n_epoch`: the number of epochs in this training- `n_iter`: the number of iterations in the current `self.dl`- `epoch`: the current epoch index (from 0 to `n_epoch-1`)- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:- `train_iter`: the number of training iterations done since the beginning of this training- `pct_train`: from 0. to 1., the percentage of training iterations completed- `training`: flag to indicate if we're in training mode or notThe following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:- `smooth_loss`: an exponentially-averaged version of the training loss Control flow testing
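To make the attribute list above concrete, here is a minimal sketch (not part of the original notebook) of a callback reading a few of those attributes at the end of each epoch:
###Code
#A sketch only: read some of the `Learner` attributes listed above
class ProgressPrintCallback(Callback):
    def after_epoch(self):
        print(f'epoch {self.epoch+1}/{self.n_epoch}: '
              f'{self.pct_train:.0%} of the iterations done, last loss {float(self.loss):.4f}')
###Output
_____no_output_____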
###Code
#hide
batch_events = ['begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step', 'after_batch']
batchv_events = ['begin_batch', 'after_pred', 'after_loss', 'after_batch']
train_events = ['begin_train'] + batch_events + ['after_train']
valid_events = ['begin_validate'] + batchv_events + ['after_validate']
epoch_events = ['begin_epoch'] + train_events + valid_events + ['after_epoch']
cycle_events = ['begin_fit'] + epoch_events + ['after_fit']
#hide
learn = synth_learner(n_train=1, n_valid=1)
test_stdout(lambda: learn.fit(1, cbs=VerboseCallback()), '\n'.join(cycle_events))
#hide
class TestCancelCallback(VerboseCallback):
def __init__(self, cancel_at=event.begin_batch, exception=CancelBatchException, train=None):
def _interrupt():
if train is None or train == self.training: raise exception()
setattr(self, cancel_at, _interrupt)
#hide
#test cancel batch
for i,e in enumerate(batch_events[:-1]):
be = batch_events[:i+1] + ['after_cancel_batch', 'after_batch']
bev = be if i <3 else batchv_events
cycle = cycle_events[:3] + be + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(cancel_at=e)), '\n'.join(cycle))
#CancelBatchException not caught if thrown in any other event
for e in cycle_events:
if e not in batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(cancel_at=e)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i < len(batch_events) else [])
be += ['after_cancel_train', 'after_train']
cycle = cycle_events[:3] + be + ['begin_validate'] + batchv_events + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelTrainException, True)), '\n'.join(cycle))
#CancelTrainException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_train'] + batch_events[:-1]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelTrainException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i < len(batchv_events) else []) + ['after_cancel_validate']
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev + cycle_events[-3:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelValidException, False)), '\n'.join(cycle))
#CancelValidException not caught if thrown in any other event
for e in cycle_events:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelValidException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel epoch
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_cancel_epoch'] + cycle_events[-2:]
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelEpochException, False)), '\n'.join(cycle))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelEpochException, False)),
'\n'.join(cycle_events[:2] + ['after_cancel_epoch'] + cycle_events[-2:]))
#CancelEpochException not caught if thrown in any other event
for e in ['begin_fit', 'after_epoch', 'after_fit']:
if e not in ['begin_validate'] + batch_events[:3]:
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback(e, CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
#hide
#test cancel fit
#In begin fit
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_fit', CancelFitException)),
'\n'.join(['begin_fit', 'after_cancel_fit', 'after_fit']))
#In begin epoch
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback('begin_epoch', CancelFitException, False)),
'\n'.join(cycle_events[:2] + ['after_epoch', 'after_cancel_fit', 'after_fit']))
#In train
for i,e in enumerate(['begin_train'] + batch_events):
be = batch_events[:i] + (['after_batch'] if i >=1 and i<len(batch_events) else [])
cycle = cycle_events[:3] + be + ['after_train', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, True)), '\n'.join(cycle))
#In valid
for i,e in enumerate(['begin_validate'] + batchv_events):
bev = batchv_events[:i] + (['after_batch'] if i >=1 and i<len(batchv_events) else [])
cycle = cycle_events[:3] + batch_events + ['after_train', 'begin_validate'] + bev
cycle += ['after_validate', 'after_epoch', 'after_cancel_fit', 'after_fit']
test_stdout(lambda: learn.fit(1, cbs=TestCancelCallback(e, CancelFitException, False)), '\n'.join(cycle))
#CancelEpochException not caught if thrown in any other event
with redirect_stdout(io.StringIO()):
cb = TestCancelCallback('after_fit', CancelEpochException)
test_fail(lambda: learn.fit(1, cbs=cb))
learn.remove_cb(cb) #Have to remove it manually
###Output
_____no_output_____
###Markdown
Metrics -
###Code
#export
@docs
class Metric():
"Blueprint for defining a metric"
def reset(self): pass
def accumulate(self, learn): pass
@property
def value(self): raise NotImplementedError
@property
def name(self): return class2attr(self, 'Metric')
_docs = {'reset': "Reset inner state to prepare for new computation",
'name': "Name of the `Metric`, camel-cased and with Metric removed",
'accumulate': "Use `learn` to update the state with new results",
'value': "The value of the metric"}
show_doc(Metric, title_level=3)
###Output
_____no_output_____
###Markdown
Metrics can be simple averages (like accuracy) but sometimes their computation is a little bit more complex and can't be averaged over batches (like precision or recall), which is why we need a special class for them. For simple functions that can be computed as averages over batches, we can use the class `AvgMetric`, otherwise you'll need to implement the following methods.> Note: If your `Metric` has state depending on tensors, don't forget to store it on the CPU to avoid any potential memory leaks.
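As an illustration (a sketch, not part of the original notebook), a metric that cannot be expressed as a per-batch average, such as the maximum absolute error over an epoch, only needs `reset`, `accumulate` and `value`:
###Code
#A sketch only: a Metric tracking the maximum absolute error seen during an epoch
class MaxAbsError(Metric):
    def reset(self): self.max_err = 0.
    def accumulate(self, learn):
        self.max_err = max(self.max_err, to_detach((learn.pred-learn.yb).abs().max()).item())
    @property
    def value(self): return self.max_err
###Output
_____no_output_____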
###Code
show_doc(Metric.reset)
show_doc(Metric.accumulate)
show_doc(Metric.value, name='Metric.value')
show_doc(Metric.name, name='Metric.name')
#export
class AvgMetric(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(self.func(learn.pred, learn.yb))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.__name__
show_doc(AvgMetric, title_level=3)
learn = synth_learner()
tst = AvgMetric(lambda x,y: (x-y).abs().mean())
t,u = torch.randn(100),torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.pred,learn.yb = t[i:i+25],u[i:i+25]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.pred,learn.yb = t[splits[i]:splits[i+1]],u[splits[i]:splits[i+1]]
tst.accumulate(learn)
test_close(tst.value, (t-u).abs().mean())
#export
class AvgLoss(Metric):
"Average the losses taking into account potential different batch sizes"
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = find_bs(learn.yb)
self.total += to_detach(learn.loss)*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return "loss"
show_doc(AvgLoss, title_level=3)
tst = AvgLoss()
t = torch.randn(100)
tst.reset()
for i in range(0,100,25):
learn.yb,learn.loss = t[i:i+25],t[i:i+25].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#hide
#With varying batch size
tst.reset()
splits = [0, 30, 50, 60, 100]
for i in range(len(splits )-1):
learn.yb,learn.loss = t[splits[i]:splits[i+1]],t[splits[i]:splits[i+1]].mean()
tst.accumulate(learn)
test_close(tst.value, t.mean())
#export
class AvgSmoothLoss(Metric):
"Smooth average of the losses (exponentially weighted with `beta`)"
def __init__(self, beta=0.98): self.beta = beta
def reset(self): self.count,self.val = 0,tensor(0.)
def accumulate(self, learn):
self.count += 1
self.val = torch.lerp(to_detach(learn.loss), self.val, self.beta)
@property
def value(self): return self.val/(1-self.beta**self.count)
show_doc(AvgSmoothLoss, title_level=3)
tst = AvgSmoothLoss()
t = torch.randn(100)
tst.reset()
val = tensor(0.)
for i in range(4):
learn.loss = t[i*25:(i+1)*25].mean()
tst.accumulate(learn)
val = val*0.98 + t[i*25:(i+1)*25].mean()*(1-0.98)
test_close(val/(1-0.98**(i+1)), tst.value)
###Output
_____no_output_____
###Markdown
Recorder -
###Code
#export
from fastprogress.fastprogress import format_time
def _maybe_item(t):
return t.item() if t.numel()==1 else t
class Recorder(Callback):
"Callback that registers statistics (lr, loss and metrics) during training"
run_after = TrainEvalCallback
def __init__(self, add_time=True, train_metrics=False, beta=0.98):
self.add_time,self.train_metrics = add_time,train_metrics
self.loss,self.smooth_loss = AvgLoss(),AvgSmoothLoss(beta=beta)
def begin_fit(self):
"Prepare state for training"
self.lrs,self.losses,self.values = [],[],[]
names = [m.name for m in self._valid_mets]
if self.train_metrics: names = [f'train_{n}' for n in names] + [f'valid_{n}' for n in names]
else: names = ['train_loss', 'valid_loss'] + names[1:]
if self.add_time: names.append('time')
self.metric_names = ['epoch']+names
self.smooth_loss.reset()
def after_batch(self):
"Update all metrics and records lr and smooth loss in training"
mets = [self.smooth_loss] + self._train_mets if self.training else self._valid_mets
for met in mets: met.accumulate(self.learn)
if not self.training: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.smooth_loss.value)
self.learn.smooth_loss = self.smooth_loss.value
def begin_epoch(self):
"Set timer if `self.add_time=True`"
self.cancel_train,self.cancel_valid = False,False
if self.add_time: self.start_epoch = time.time()
self.log = [getattr(self, 'epoch', 0)]
def begin_train (self): [m.reset() for m in self._train_mets]
def after_train (self): self.log += [_maybe_item(m.value) for m in self._train_mets]
def begin_validate(self): [m.reset() for m in self._valid_mets]
def after_validate(self): self.log += [_maybe_item(m.value) for m in self._valid_mets]
def after_cancel_train(self): self.cancel_train = True
def after_cancel_validate(self): self.cancel_valid = True
def after_epoch(self):
"Store and log the loss/metric values"
self.values.append(self.log[1:].copy())
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
self.logger(self.log)
@property
def _train_mets(self):
if getattr(self, 'cancel_train', False): return []
return [self.loss] + (self.metrics if self.train_metrics else [])
@property
def _valid_mets(self):
if getattr(self, 'cancel_valid', False): return []
return [self.loss] + self.metrics
def plot_loss(self): plt.plot(self.losses)
#export
add_docs(Recorder,
begin_train = "Reset loss and metrics state",
         after_train = "Log loss and metric values on the training set (if `self.train_metrics=True`)",
begin_validate = "Reset loss and metrics state",
after_validate = "Log loss and metric values on the validation set",
after_cancel_train = "Ignore training metrics for this epoch",
after_cancel_validate = "Ignore validation metrics for this epoch",
plot_loss = "Plot the losses")
defaults.callbacks = [TrainEvalCallback, Recorder]
###Output
_____no_output_____
###Markdown
By default, metrics are computed on the validation set only, although that can be changed with `train_metrics=True`. `beta` is the weight used to compute the exponentially weighted average of the losses (which gives the `smooth_loss` attribute to `Learner`).
###Code
#Test printed output
def tst_metric(out, targ): return F.mse_loss(out, targ)
learn = synth_learner(n_train=5, metrics=tst_metric)
pat = r"[tensor\(\d.\d*\), tensor\(\d.\d*\), tensor\(\d.\d*\), 'dd:dd']"
test_stdout(lambda: learn.fit(1), pat, regex=True)
#hide
class TestRecorderCallback(Callback):
run_after=Recorder
def begin_fit(self):
self.train_metrics,self.add_time = self.recorder.train_metrics,self.recorder.add_time
self.beta = self.recorder.smooth_loss.beta
for m in self.metrics: assert isinstance(m, Metric)
test_eq(self.recorder.smooth_loss.val, 0.)
#To test what the recorder logs, we use a custom logger function.
self.learn.logger = self.test_log
self.old_smooth,self.count = tensor(0.),0
def after_batch(self):
if self.training:
self.count += 1
test_eq(len(self.recorder.lrs), self.count)
test_eq(self.recorder.lrs[-1], self.opt.hypers[-1]['lr'])
test_eq(len(self.recorder.losses), self.count)
smooth = (1 - self.beta**(self.count-1)) * self.old_smooth * self.beta + self.loss * (1-self.beta)
smooth /= 1 - self.beta**self.count
test_close(self.recorder.losses[-1], smooth, eps=1e-4)
test_close(self.smooth_loss, smooth, eps=1e-4)
self.old_smooth = self.smooth_loss
self.bs += find_bs(self.yb)
test_eq(self.recorder.loss.count, self.bs)
if self.train_metrics or not self.training:
for m in self.metrics: test_eq(m.count, self.bs)
self.losses.append(self.loss.detach().cpu())
def begin_epoch(self):
if self.add_time: self.start_epoch = time.time()
self.log = [self.epoch]
def begin_train(self):
self.bs = 0
self.losses = []
for m in self.recorder._train_mets: test_eq(m.count, self.bs)
def after_train(self):
res = tensor(self.losses).mean()
self.log += [res, res] if self.train_metrics else [res]
test_eq(self.log, self.recorder.log)
self.losses = []
def begin_validate(self):
self.bs = 0
self.losses = []
for m in [self.recorder.loss] + self.metrics: test_eq(m.count, self.bs)
def test_log(self, log):
res = tensor(self.losses).mean()
self.log += [res, res]
if self.add_time: self.log.append(format_time(time.time() - self.start_epoch))
test_eq(log, self.log)
#hide
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.train_metrics=True
learn.fit(1)
test_eq(learn.recorder.metric_names,
['epoch', 'train_loss', 'train_tst_metric', 'valid_loss', 'valid_tst_metric', 'time'])
learn = synth_learner(n_train=5, metrics = tst_metric, cb_funcs = TestRecorderCallback)
learn.recorder.add_time=False
learn.fit(1)
test_eq(learn.recorder.metric_names, ['epoch', 'train_loss', 'valid_loss', 'tst_metric'])
###Output
_____no_output_____
###Markdown
Callback internals
###Code
show_doc(Recorder.begin_fit)
show_doc(Recorder.begin_epoch)
show_doc(Recorder.begin_validate)
show_doc(Recorder.after_batch)
show_doc(Recorder.after_epoch)
###Output
_____no_output_____
###Markdown
Plotting tools
###Code
show_doc(Recorder.plot_loss)
#hide
learn.recorder.plot_loss()
###Output
_____no_output_____
###Markdown
Inference functions
###Code
show_doc(Learner.no_logging)
learn = synth_learner(n_train=5, metrics=tst_metric)
with learn.no_logging():
test_stdout(lambda: learn.fit(1), '')
test_eq(learn.logger, print)
show_doc(Learner.validate)
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
res = learn.validate()
test_eq(res[0], res[1])
x,y = learn.dbunch.valid_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#hide
#Test other dl
res = learn.validate(learn.dbunch.train_dl)
test_eq(res[0], res[1])
x,y = learn.dbunch.train_ds.tensors
test_close(res[0], F.mse_loss(learn.model(x), y))
#Test additional callback is executed.
cycle = cycle_events[:2] + ['begin_validate'] + batchv_events * 2 + cycle_events[-3:]
test_stdout(lambda: learn.validate(cbs=VerboseCallback()), '\n'.join(cycle))
show_doc(Learner.loss_not_reduced)
#hide
test_eq(learn.loss_func.reduction, 'mean')
with learn.loss_not_reduced():
test_eq(learn.loss_func.reduction, 'none')
x,y = learn.dbunch.one_batch()
p = learn.model(x)
losses = learn.loss_func(p, y)
test_eq(losses.shape, y.shape)
test_eq(losses, F.mse_loss(p,y, reduction='none'))
test_eq(learn.loss_func.reduction, 'mean')
show_doc(Learner.get_preds)
###Output
_____no_output_____
###Markdown
> Warning: If your dataset is unlabelled, the targets will all be 0s.> Note: If you want to use the option `with_loss=True` on a custom loss function, make sure you have implemented a `reduction` attribute that supports 'none'
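A minimal sketch (not part of the original notebook) of such a custom loss function, exposing a `reduction` attribute that `loss_not_reduced` can switch to 'none':
###Code
#A sketch only: a custom loss with a `reduction` attribute compatible with `with_loss=True`
class MyL1Loss:
    def __init__(self, reduction='mean'): self.reduction = reduction
    def __call__(self, pred, targ):
        loss = (pred-targ).abs()
        return loss.mean() if self.reduction=='mean' else loss
###Output
_____no_output_____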
###Code
#Test result
learn = synth_learner(n_train=5, metrics=tst_metric)
preds,targs = learn.get_preds()
x,y = learn.dbunch.valid_ds.tensors
test_eq(targs, y)
test_close(preds, learn.model(x))
#hide
#Test other dataset
x = torch.randn(16*5)
y = 2*x + 3 + 0.1*torch.randn(16*5)
dl = TfmdDL(TensorDataset(x, y), bs=16)
learn.dbunch.dls += (dl,)
preds,targs = learn.get_preds(ds_idx=2)
test_eq(targs, y)
test_close(preds, learn.model(x))
#Test with loss
preds,targs,losses = learn.get_preds(ds_idx=2, with_loss=True)
test_eq(targs, y)
test_close(preds, learn.model(x))
test_close(losses, F.mse_loss(preds, targs, reduction='none'))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_script.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_test_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_synth_learner.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
|
.ipynb_checkpoints/Samlet CSV bygger-checkpoint.ipynb | ###Markdown
###Code
#Imports required by the cells below (the aliases `ureq` and `soup` are assumed from the call
#sites later in this notebook: urllib's urlopen and BeautifulSoup respectively)
import re
import uuid
import requests
import pandas as pd
from urllib.request import urlopen as ureq
from bs4 import BeautifulSoup as soup
from geopy.distance import great_circle
df_urls = pd.read_csv("indvidual_urls.csv")
ratings_df = pd.DataFrame()
#empty lists
count = 0
loc_list = []
reviewCount_list = []
distance_list = []
unique_list = []
price_class_list = []
main_rating_list = []
ranking_list = []
price_class_value_list = []
###Output
_____no_output_____
###Markdown
###Code
#giant for loop
for url in df_urls["Restaurant_links"][:10]:
#Sofies ratings loop
count = count+1
print(count)
trip = ureq(url)
trip_html = trip.read()
trip.close()
trip_soup = soup(trip_html, "lxml")
test = trip_soup.findAll(True, {"class":["restaurants-detail-overview-cards-RatingsOverviewCard__ratingText--1P1Lq", "restaurants-detail-overview-cards-RatingsOverviewCard__ratingBubbles--1kQYC"]})
name = trip_soup.findAll(True, {"class":["ui_header h1"]})
name = str(name)
name = name[26:]
name = name.replace("</h1>]", '')
elements = []
for x in test:
elements.append(str(x))
keys = elements[0::2]
values = elements[1::2]
keys.append("Name")
values.append(str(name))
keys[:] = [s.replace('<span class="restaurants-detail-overview-cards-RatingsOverviewCard__ratingText--1P1Lq">', '') for s in keys]
keys[:] = [s.replace('</span>', '') for s in keys]
values[:] = [s.replace('<span class="restaurants-detail-overview-cards-RatingsOverviewCard__ratingBubbles--1kQYC"><span class="ui_bubble_rating bubble_', '') for s in values]
values[:] = [s.replace('"></span></span>', '') for s in values]
ratings_dict = {}
for i in range(len(keys)):
ratings_dict[keys[i]] = values[i]
#append
ratings_df = ratings_df.append([ratings_dict], ignore_index=True)
#Najas location loop
p = re.compile(r'"coords":"(.*?)"')
r = requests.get(url)
coords = p.findall(r.text)[1]
loc_list.append(coords)
#Naja review count
reviewCount = str(trip_soup.find(class_="reviewCount"))
reviewCount = reviewCount.split(">")[1].split("<")[0]
reviewCount_list.append(reviewCount)
#out of loop
ratings_df["Location"] = loc_list
ratings_df["Number of reviews"] = reviewCount_list
for url in df_urls["Restaurant_links"][:10]:
url = ureq(url)
url_html = url.read()
url.close()
url_soup = soup(url_html,'lxml')
    #Extracting price_class_number $$$
price_class_number = str(url_soup.find('div', class_="header_links"))
price_class = re.sub('[^$-]', '', price_class_number)
price_class_list.append(price_class)
    #Extracting number of bubbles
bubbles = str(url_soup.find(class_="restaurants-detail-overview-cards-RatingsOverviewCard__overallRating--nohTl"))
main_rating = re.sub('[^0-9,.]', '', bubbles) #stripping all other than the ranking numbers
main_rating_list.append(main_rating)
    #Extracting list_ranking
list_ranking =str(url_soup.find(class_="restaurants-detail-overview-cards-RatingsOverviewCard__ranking--17CmN").find('span', class_=""))
ranking = re.sub('[^0-9,]', '', list_ranking)
ranking_list.append(ranking)
    #Extracting price_class_value
price_class_value = str(url_soup.find(class_="restaurants-detail-overview-cards-DetailsSectionOverviewCard__tagText--1OH6h"))
price_class_value = re.sub('[^0-9,.]', '', price_class_value).split('.')[-1]
price_class_value_list.append(price_class_value)
ratings_df["Price class"] = price_class_list
ratings_df["Main rating"] = main_rating_list
ratings_df["Ranking on list"] = ranking_list
ratings_df["Max price value"] = price_class_value_list
###Output
_____no_output_____
###Markdown
###Code
Kgs_Nytorv = '55.679977,12.5841893' #latitude and longitude for Kongens Nytorv
#calculating distance from nytorv to the coordinates in the list
def distance(x):
Start = ratings_df["Location"][x]
Stop = Kgs_Nytorv
distance_list.append(great_circle(Start, Stop).meters)
for x in ratings_df.index:
distance(x)
#appending to df
ratings_df["Distance from Kgs. Nytorv"] = distance_list
###Output
_____no_output_____
###Markdown
###Code
#creates a unique id
for x in ratings_df.index:
unique_list.append(uuid.uuid1())
ratings_df["Unique ID"] = unique_list
###Output
_____no_output_____
###Markdown
###Code
#sorting by name of column
ratings_df = ratings_df.reindex(sorted(ratings_df.columns), axis=1)
ratings_df
###Output
_____no_output_____
###Markdown
###Code
#send to csv
#ratings_df.to_csv("SHORT output_deailed_ratings.csv")
###Output
_____no_output_____ |
[1-1]_Convert_SciGlass_GComp_file_to_Pandas_DataFrame_(mol%).ipynb | ###Markdown
Convert the GComp.csv file to a composition table, as is, for use as a Pandas DataFrame (mol%) Some notes here: (1) SciGlass mdb files are published by EPAM under the ODC Open Database License (ODbL). Use the information and code below in accordance with EPAM's license and at your own risk; there is no guarantee against any problems that may occur. (2) The following code is written to work with csv files. Please convert the Access 2.0 mdb files published by EPAM to csv files first and use those. (3) File names and contents (presumed) - GComp: composition data - SciGK: property data together with typical compositions (wt% and mol%) - Reference: source data such as journals or patents - Kod2Ref: connection keys from "kod" to "Refer_ID" to cite source data in the "Reference" file (4) Only the "GComp.csv" file is needed in this notebook.
###Code
# Import libralies
import pandas as pd
import numpy as np
# Load "GComp.csv" file
df = pd.read_csv('data_SciGlass/GComp.csv')
print(df.shape)
df.head()
# Check the data in the 2nd row and the 3rd column as an example
comp_example_1 = df.iloc[1, 2]
comp_example_1
###Output
_____no_output_____
###Markdown
The example above shows that element, molecular mass, wt%, and mol% are lined up as one set between occurrences of the separator "\x7f". Split the string by this separator.
###Code
# split by "\x7f"
comp_0 = df['Composition'].str.split('\x7f')
comp_0
# Check the data in the 2nd row and the 3rd column as an example
comp_ex = list(comp_0[1])
print(comp_ex)
print(len(comp_ex))
# Extract composition and mol% from each repeating group of four entries
print('Composition, mol%')
for i in range(len(comp_ex)//4):
print(comp_ex[i * 4 + 1], comp_ex[i * 4 + 4])
###Output
Composition, mol%
SiO2 45.35
P2O5 11.35
ZrO2 9.31
Na2O 33.99
###Markdown
Now element, molecular mass, wt%, and mol% can be extracted.
###Code
# Check the number of entries in the longest row.
max_compostion_num = 0
line_length = []
for j in list(comp_0):
if len(j) > max_compostion_num:
max_compostion_num = len(j)
line_length.append(len(j))
print(max_compostion_num)
###Output
162
###Markdown
The longest row in "GComp" contains 40 compositions (162 divided by 4), more than the number of composition columns in "SciGK" (only 17). "GComp" should therefore be used as the composition data, so build a matrix in which each composition forms a column and each glass forms a row, together with "GlasNo" (glass ID) and "Kod" (Reference_ID).
###Code
# Get the number of the lomgest row
max_idx = np.argmax(line_length)
max_idx
# Check the composition and mol% in the longest row and their total
comp_longest = comp_0[max_idx]
comp_total = 0
print('Composition, mol%\n')
for i in range(len(comp_longest)//4):
print(comp_longest[i * 4 + 1], comp_longest[i * 4 + 4])
comp_total = comp_total + float(comp_longest[i * 4 + 4])
print('\nTotal mol% = ', comp_total)
###Output
Composition, mol%
SiO2 51.33
B2O3 13.01
Al2O3 1.57
Li2O 5.94
Na2O 14.49
MgO 2.92
CaO 5.01
TiO2 0.77
Se 0.013
Rb2O 0.018
SrO 0.058
Y2O3 0.035
ZrO2 0.32
MoO3 0.38
RuO2 0.24
Rh2O3 0.031
PdO 0.13
Ag2O 0.006
CdO 0.008
SnO 0.006
Sb2O3 0.
TeO2 0.049
Cs2O 0.13
BaO 0.18
La2O3 0.08
CeO2 0.12
Pr2O3 0.055
Nd2O3 0.24
Sm2O3 0.044
Eu2O3 0.005
Gd2O3 0.011
UO3 0.27
Cr2O3 0.2
MnO 0.3
FeO 1.59
NiO 0.24
CuO 0.002
ZnO 0.001
K2O 0.03
P2O5 0.16
Total mol% = 99.99199999999998
###Markdown
Make a list of all elements contained in "GComp", without omissions or duplicates
###Code
composition_items = []
for k in range(len(df)):
composition_k = list(comp_0[k])
for l in range(len(composition_k)//4):
composition_item_kl = composition_k[l * 4 + 1]
composition_items.append(composition_item_kl) # Add composition name to the list
    composition_items_update = set(composition_items) # Remove duplicates
    composition_items = list(composition_items_update) # Convert back to a list
    if k%50000 == 0: # Show progress
print('row number in transaction : ', k)
#composition_items_update
print('\ncomplete!')
# Check the element names in the list made above
composition_items_sorted = sorted(composition_items)
print('Number of elements (all) = ', len(composition_items_sorted), '\n')
print(*composition_items_sorted)
###Output
Number of elements (all) = 726
(NH4)2SO4 (NH4)3PO4 Ag Ag2CO3 Ag2MoO4 Ag2O Ag2S Ag2SO4 Ag2Se Ag2Se5 Ag2Te Ag4SSe AgAsS2 AgBr AgCl AgF AgGaS2 AgI AgNO3 AgPO3 Al Al(PO3)3 Al2(SO4)3 Al2N3 Al2O3 Al2O3+Fe2O3 Al2S3 Al2Y2O6 AlCl3 AlF3 AlN AlPO4 Am Am2O3 AmO2 Ar As As2O3 As2O5 As2S3 As2S5 As2Se3 As2Se5 As2Te As2Te3 As2Te5 AsBr3 AsF5 AsI3 AsS AsS2 AsSBr AsSI AsSe AsSe2 AsSeI AsTe AsTe3 Au Au2O Au2O3 AuCl AuCl3 B B2O3 B2S3 B2Se3 BF3 BN BOF BPO4 BS2 Ba Ba(H2PO4)2 Ba(PO3)2 Ba3N2 BaB2O4 BaBr2 BaCl2 BaF2 BaGeO3 BaHPO4 BaI2 BaO BaO2 BaPO3F BaS BaSO4 BaSe Be BeF2 BeO BeSO4 Bi Bi2O3 Bi2O5 Bi2S3 Bi2Se3 Bi2Te3 BiBr3 BiCl3 BiF3 BiI3 BiNbO4 BiOBr BiOCl BiOF BiPO4 BiTe Br C C2H5OH C6H12O6 CO2 Ca Ca(NO3)2 Ca3N2 CaBr2 CaC2 CaCO3 CaCl2 CaF2 CaI2 CaO CaO+MgO CaS CaSO4 Cd Cd(NO3)2 CdAs2 CdBr2 CdCl2 CdF2 CdGeO3 CdI2 CdO CdS CdSO4 CdSe CdTe Ce Ce2O3 Ce2S3 CeCl3 CeF3 CeF4 CeO CeO2 CeSe Cl Cl2 Co Co2O3 Co3O4 CoBr2 CoCl2 CoF2 CoO CoS CoSO4 Cr Cr2O3 Cr2Se3 Cr3O4 CrCl3 CrF3 CrO CrO3 Cs Cs2O Cs2S Cs2SO4 CsBr CsCl CsF CsHSO4 CsI Cu Cu2MoO4 Cu2O Cu2S Cu2Se Cu2Te Cu2WO4 Cu3PO4 CuBr CuCl CuCl2 CuF2 CuI CuNbOF5 CuO CuPO3 CuS CuSO4 Dy Dy2O3 Dy2S3 Dy2Se3 DyCl3 DyF3 Er Er2(SO4)3 Er2O3 Er2S3 ErCl3 ErF3 ErI3 ErO2 ErPO4 Eu Eu2O3 EuCl2 EuCl3 EuF2 EuF3 EuO EuS F F2 Fe Fe2(SO4)3 Fe2O3 Fe3O4 FeBr2 FeCl2 FeCl3 FeF2 FeF3 FeO FeO+Fe2O3 FeS FeS2 FemOn Ga Ga2O3 Ga2S3 Ga2Se3 Ga2Te3 GaBr3 GaF3 GaI3 GaS2 GaSe GaSe2 GaTe GaTe3 Gd Gd2(SO4)3 Gd2O3 Gd2S3 GdCl3 GdF3 Ge Ge2S3 Ge2Se2Te Ge2Se3 GeAs GeBr4 GeF4 GeI2 GeI4 GeO GeO2 GeS GeS2 GeS3 GeSBr2 GeSb GeSb2 GeSe GeSe2 GeSe3 GeSe4 GeSeBr2 GeTe GeTe2 GeTe4 GeTe4.3 H H2 H2O H2S H2SO4 H3BO3 HF+H2O Hf HfF4 HfO2 Hg Hg2O HgBr2 HgCl2 HgF HgI2 HgO HgS HgSe HgTe Ho Ho2(SO4)3 Ho2O3 Ho2S3 Ho2Se3 HoAs HoCl3 HoF3 HoSe I In In2O3 In2S3 In2Se3 In2Te6 InF3 InI3 InSe InTe Ir K K2CO3 K2NbOF5 K2O K2S K2S2O8 K2SO4 K2SiF6 K3N KAsO3 KBF4 KBO2 KBr KCl KClO4 KF KHC4H4O6 KHF2 KHSO4 KI KNO3 KOH KPO3 La La2(SO4)3 La2O3 La2S3 La2Se3 LaBr3 LaCl3 LaF3 Li Li2B4O7 Li2CO3 Li2MoO4 Li2O Li2O+Na2O+K2O Li2S Li2SO3 Li2SO4 Li2Se Li2WO4 Li3N Li3PO4 LiBr LiCl LiF LiHSO4 LiI LiNO3 LiSiON Lu Lu2O3 LuF3 Mg Mg3N2 MgBr2 MgCO3 MgCl2 MgF2 MgI2 MgO MgS MgSO4 Mn Mn2O3 Mn2O7 Mn3O4 MnBr2 MnCl2 MnF2 MnNbOF5 MnO MnO2 MnS MnSO4 MnSe Mo Mo2O3 Mo2O5 MoF5 MoO MoO2 MoO3 MoO3+WO3 N N2 N2O5 NH4Cl NH4F NH4H2PO4 NH4HF2 NH4NO3 NH4PF6 NO2 NO3 Na Na2B4O7 Na2CO3 Na2HPO4 Na2MoO4 Na2O Na2O+K2O Na2S Na2SO4 Na2Se Na2WO4 Na3AlF6 Na3N Na3PO4 Na4As2O7 NaBF4 NaBH4 NaBr NaCN NaCl NaClO3 NaF NaHSO4 NaI NaPO3 Nb Nb2O3 Nb2O5 NbF5 NbO2F NbPO5 Nd Nd2(SO4)3 Nd2O3 Nd2S3 NdCl3 NdF3 Ni Ni2O3 NiBr2 NiCl2 NiF2 NiI2 NiO NiSO4 NiTe Np2O3 NpO2 O OH P P2O3 P2O5 P2S3 P2S5 P2Se3 P2Se5 P3N5 P4S3 P4S7 P4Se10 P4Se3 P4Se4 PCl5 PF3 PF5 PF6 PON Pb Pb(PO3)2 Pb3(PO4)2 Pb3O4 PbBr2 PbCl2 PbF2 PbF3 PbI2 PbO PbO2 PbS PbSO4 PbSe PbTe Pd PdO Pr Pr2(SO4)3 Pr2O3 Pr2S3 Pr2Se3 Pr6O11 PrCl3 PrF3 PrO2 Pt PtCl2 PtO PtO2 Pu2O3 PuO2 R2O R2O3 RO Ra Rb Rb2O Rb2S Rb2SO4 RbBr RbCl RbF RbI RbNO3 RbV2O5 Re2O7 ReO3 Rh Rh2O3 RhO2 RmOn Ru RuO2 S SO2 SO3 SO4 Sb Sb(PO3)3 Sb2O3 Sb2O5 Sb2S3 Sb2Se3 Sb2Te2 Sb2Te3 Sb2Te4 SbBr3 SbCl3 SbF3 SbI3 SbO SbO2 SbS2 SbSI Sc Sc(PO3)3 Sc2O3 ScF3 Se Se4I SeO2 SeO3 SeS2 Si Si2N4 Si3N4 SiC SiCl4 SiF4 SiO SiO2 SiS2 SiSe2 Sm Sm2(SO4)3 Sm2O3 Sm2S3 SmCl3 SmF3 Sn Sn2O3 Sn2S3 SnCl2 SnF2 SnF4 SnI2 SnI4 SnO SnO2 SnS SnS2 SnSe SnSe2 SnTe Sr SrB2O4 SrBr2 SrCl2 SrF2 SrO SrS SrSO4 SrSe Ta Ta2O3 Ta2O5 TaF5 TaO2F TaPO5 TaS2 Tb Tb2O3 Tb2S3 Tb2Se3 Tb3O7 Tb4O7 TbCl3 TbF3 TbO2 TcO2 Te TeCl4 TeF4 TeI TeO TeO2 TeO3 Th Th(SO4)2 ThCl4 ThF4 ThO2 Ti Ti2O3 Ti2S3 Ti3B4 TiC TiF3 TiF4 TiO TiO2 TiS2 Tl Tl2O Tl2O3 Tl2S Tl2S3 Tl2Se Tl2Se3 Tl2Te Tl2Te3 Tl2TiS3 
TlAsS2 TlAsSe2 TlAsTe2 TlBr TlCl TlF TlI TlS TlS2 TlSe TlTe Tm Tm2O3 Tm2S3 TmCl3 TmF3 U U2O5 U3O8 UF2 UF4 UO2 UO2F2 UO3 V V2O3 V2O5 V2S3 VCl3 VF3 VF5 VN VO2 VO6 VOSO4 W WCl6 WO3 Y Y2(SO4)3 Y2O3 Y2S3 YF3 Yb Yb2O3 Yb2S3 YbCl3 YbF3 YbO2 YbSe Zn Zn(PO3)2 Zn3(PO4)2 Zn3As2 ZnBr2 ZnCl2 ZnF2 ZnI2 ZnO ZnS ZnSO4 ZnSe ZnTe Zr ZrF4 ZrO2 ZrS2 ZrSe2 ZrSiO4
###Markdown
726 elements are contained, but the list includes entries with symbols such as "+" and generic components like R2O.
###Code
# Check elements that include the symbol "+"
[s for s in composition_items_sorted if '+' in s]
# Check elements that include the symbol "R"
[s for s in composition_items_sorted if 'R' in s]
###Output
_____no_output_____
###Markdown
The column list covering every composition entry is now ready, so build the matrix together with the "GlasNo" and "Kod" columns.
###Code
# Extract Kod and GlasNo columns from the original dataframe
df_rebuild_0 = df.iloc[:, :2]
df_rebuild_0 = df_rebuild_0.astype(np.int32)
df_rebuild_0
###Output
_____no_output_____
###Markdown
Create a zero-filled matrix (422879 x 726) with one row per record in "GComp" and one column per composition entry. (The zeroth column was originally blank and is now zeros; it can be ignored because it has no impact.)
###Code
df_composition_mol = pd.DataFrame(np.zeros(len(df) * len(composition_items_sorted)).reshape(len(df), len(composition_items_sorted)),
columns = composition_items_sorted)
print(df_composition_mol.shape)
df_composition_mol.head()
###Output
(422879, 726)
###Markdown
Place the mol% data in the appropriate places in the table (it takes some time)
###Code
import time
t1 = time.time()
# Place the numbers
for i in range (len(df)):
for j in range(len(comp_0[i])//4):
composition = comp_0[i][j * 4 + 1]
value = comp_0[i][j * 4 + 4] # value of mol%
df_composition_mol.at[i, composition] = float(value) # Place them to the right places
t2 = time.time()
print('Elapsed time = ', t2 - t1, '\n')
print(df_composition_mol.shape)
df_composition_mol.head()
###Output
Elapsed time = 24.025533199310303
(422879, 726)
###Markdown
Combine the composition data horizontally with the "Kod" and "GlasNo" tables created earlier.
###Code
df_SciGlass_mol = df_rebuild_0.join(df_composition_mol)
print(df_SciGlass_mol.shape)
df_SciGlass_mol
###Output
(422879, 728)
###Markdown
Check if the process was done correctly
###Code
# Check duplicates -> No duplicate
df_overlap_all = df_SciGlass_mol[df_SciGlass_mol.duplicated(keep = False)]
print(df_overlap_all.shape)
df_overlap_all.head(40)
# Compare the unique number of "GlasNo" and the number of rows
print("Number of rows in total = ", len(df_SciGlass_mol))
print("Unique number of 'GlasNo' = ", df_SciGlass_mol['GlasNo'].nunique())
print("The gap = ", len(df_SciGlass_mol) - df_SciGlass_mol['GlasNo'].nunique())
# Check the place where "GlasNo" are overlapped
df_overlap = df_SciGlass_mol[df_SciGlass_mol.duplicated(subset = ['GlasNo'], keep = False)]
print(df_overlap.shape)
df_overlap.head()
###Output
(4225, 728)
###Markdown
This alone is not enough to understand the duplicates, so check the overlapping rows in the original table as well (sorted by "GlasNo").
###Code
df_overlap_s = df_overlap.sort_values('GlasNo')
overlap_idx_list = df_overlap_s.index
df_overlap = df.loc[overlap_idx_list]
pd.set_option("display.max_colwidth", 300)
print(df_overlap.shape)
df_overlap[0:40]
###Output
(4225, 3)
###Markdown
The rows are distinguished by the combination of "Kod" and "GlasNo", so there is no problem with the allocation. Next, check whether the numbers were applied correctly.
###Code
# Typical compositions found in the "SciGK" file
main_element = ['Kod', 'GlasNo','SiO2', 'Al2O3', 'B2O3', 'CaO', 'K2O', 'Na2O', 'PbO', 'Li2O', 'MgO', 'SrO', 'BaO', 'ZnO']
# The same compositions from the dataframe made from the "GComp" file
df_SciGlass_mol_major = df_SciGlass_mol[main_element]
print(df_SciGlass_mol_major.shape)
df_SciGlass_mol_major.head()
# Load the "SciGK" file
df_scigk = pd.read_csv('data_SciGlass/SciGK.csv', encoding = 'latin1')
print(df_scigk.shape)
df_scigk.head()
# Determine the column number to extract the same compositions from the "SciGK" file
slice_1 = df_scigk.columns.get_loc('SIO2')
slice_2 = df_scigk.columns.get_loc('ZNO')
print(slice_1, slice_2)
# Extract the composition range from the "SciGK" file determined above and change the column to the same name as the new table
scigk_columns_to_check = ['KOD', 'GLASNO']
for i in range(slice_1, slice_2 + 1):
scigk_columns_to_check.append(df_scigk.columns[i])
print(scigk_columns_to_check)
df_scigk_check = df_scigk[scigk_columns_to_check]
df_scigk_check.columns = main_element
df_scigk_check.iloc[:, :2] = df_scigk_check.iloc[:, :2].astype(np.int32)
df_scigk_check.iloc[:, 2:] = df_scigk_check.iloc[:, 2:].astype(np.float64)
df_scigk_check.head()
# Determine the row number to compare
rows_to_see = 500
# 5 rows in the "GComp"
print('5 examples from Gcomp mol%')
df_SciGlass_mol_major.loc[int(rows_to_see):int(rows_to_see)+4]
# "5 rows in the SciGK"
print('5 examples from SciGK mol%')
df_scigk_check.loc[int(rows_to_see):int(rows_to_see)+4]
###Output
5 examples from SciGK mol%
###Markdown
Some disparities appear in the second decimal place, but the composition data look correctly retrieved from the "GComp" file, so save the result (it takes some time)
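Before saving, a quick spot check can make the agreement above concrete. This is a minimal sketch using names defined in the cells above (`df_SciGlass_mol_major`, `df_scigk_check`, `rows_to_see`); restricting the check to the SiO2 column over five rows is an arbitrary choice for illustration, not part of the original workflow.

```python
import numpy as np

# Compare one major oxide column over the same five rows inspected above.
a = df_SciGlass_mol_major.loc[rows_to_see:rows_to_see + 4, 'SiO2'].astype(float).to_numpy()
b = df_scigk_check.loc[rows_to_see:rows_to_see + 4, 'SiO2'].astype(float).to_numpy()

# Largest absolute difference between the rebuilt "GComp" values and the "SciGK" values.
print('max |GComp - SciGK| for SiO2:', np.nanmax(np.abs(a - b)))
```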
###Code
df_SciGlass_mol.to_csv('data_SciGlass/SciGlass_comp_mol.csv', index = None)
###Output
_____no_output_____ |
assets/code/Astropy_spherical_offset.ipynb | ###Markdown
Math for astropy spherical_offsets_to: set up a new spherical coordinate system centered on the reference point
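For reference, the cell below works entirely with the unit-vector form of an (RA, Dec) direction; the remaining steps are cross and dot products of those vectors (this restates the code rather than adding new math):

$$\hat{r}(\alpha,\delta) = \big(\cos\delta\cos\alpha,\ \cos\delta\sin\alpha,\ \sin\delta\big),\qquad \vec{n}_{12} = \hat{r}_1 \times \hat{r}_2,\qquad \phi_b = 90^\circ - \arccos\big(\hat{r}_p\cdot\hat{r}_b\big)$$

where $\vec{n}_{12}$ is the normal of the great-circle plane through two directions and $\hat{r}_p$ is the new pole; the new longitude is the angle between the two plane normals and $\phi_b$ is the new latitude.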
###Code
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

# ra_a, dec_a, ra_b, dec_b, ra_p, dec_p are assumed to be defined elsewhere as angle Quantities
## cartesian coordinates
# pivot point a
x_a = np.cos(dec_a) * np.cos(ra_a)
y_a = np.sin(ra_a) * np.cos(dec_a)
z_a = np.sin(dec_a)
# point b
x_b = np.cos(dec_b) * np.cos(ra_b)
y_b = np.sin(ra_b) * np.cos(dec_b)
z_b = np.sin(dec_b)
# new north pole
x_p = np.cos(dec_p) * np.cos(ra_p)
y_p = np.sin(ra_p) * np.cos(dec_p)
z_p = np.sin(dec_p)
# normal vector of the plane 1 where point a, origin and new north pole reside
A1 = (y_a * z_p) - (z_a * y_p)
B1 = x_a * z_p - z_a * x_p
C1 = y_a * x_p - x_a * y_p
# normal vector of the plane 2 where point b, origin and new north pole reside
A2 = (y_b * z_p) - (z_b * y_p)
B2 = x_b * z_p - z_b * x_p
C2 = y_b * x_p - x_b * y_p
# the intersection angle of plane 1 and plane 2 -> new ra
labc = (np.sqrt(A1**2 + B1**2 + C1**2)
        * np.sqrt(A2**2 + B2**2 + C2**2))
cos_theta = (A1*A2 + B1*B2 + C1*C2)/labc
theta = np.arccos(cos_theta).to(u.deg) * np.sign(ra_b-ra_a)
# the angle between the new north pole and the vector origin-b -> new dec
cos_phi = x_p*x_b + y_p*y_b + z_p*z_b
phi = 90*u.deg - np.arccos(cos_phi).to(u.deg)
print('my',theta.to('rad'),phi.to('rad'))
a = SkyCoord(ra_a, dec_a , frame='icrs')
b = SkyCoord(ra_b , dec_b , frame='icrs')
dra, ddec = a.spherical_offsets_to(b)
print('astropy',dra.to('rad'),ddec.to('rad'))
###Output
my -0.25069188491905714 rad -0.5099574742224875 rad
astropy -0.250692rad -0.509957rad
###Markdown
Calculate a precise midpoint and PA on the projected plane centered on the midpoint
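In symbols, the cell below projects both endpoints onto the plane tangent to the sphere at the renormalized midpoint $\vec m$ and measures the position angle there (a restatement of the code that follows): a point $\vec p$ is projected to $\vec c = t\,\vec p$ with $t = (\vec m\cdot\vec m)/(\vec p\cdot\vec m)$, which guarantees $(\vec c - \vec m)\cdot\vec m = 0$, and the PA is the angle between the projected separation $\vec c_b - \vec c_a$ and the local north direction on that plane.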
###Code
x_a = np.cos(ra_a)*np.cos(dec_a)
x_b = np.cos(ra_b)*np.cos(dec_b)
pseudo_x_mid = (x_a + x_b)/2
y_a = np.sin(ra_a)*np.cos(dec_a)
y_b = np.sin(ra_b)*np.cos(dec_b)
pseudo_y_mid = (y_a + y_b)/2
z_a = np.sin(dec_a)
z_b = np.sin(dec_b)
pseudo_z_mid = (z_a + z_b)/2
scale = pseudo_x_mid**2 + pseudo_y_mid**2 + pseudo_z_mid**2
scale = 1 / np.sqrt(scale)
x_mid = scale * pseudo_x_mid
y_mid = scale * pseudo_y_mid
z_mid = scale * pseudo_z_mid
ra_mid = np.arctan2(y_mid,x_mid).to(u.deg)
dec_mid = np.arctan( z_mid/np.sqrt(x_mid**2 + y_mid**2) ).to(u.deg)
print('a',ra_a.to(u.deg).value, dec_a.value)
print('b',ra_b.to(u.deg).value, dec_b.to(u.deg).value)
print('mid',ra_mid, dec_mid)
p_a = np.array([x_a,y_a,z_a])
p_b = np.array([x_b,y_b,z_b])
p_m = np.array([x_mid,y_mid,z_mid])
# projected plane centered on the midpoint
# (x-x_mid)x_mid+(y-y_mid)y_mid+(z-z_mid)z_mid=0
lm = np.dot(p_m,p_m)
t_a = lm/np.dot(p_a,p_m)
t_b = lm/np.dot(p_b, p_m)
# projected point for a and b and projected vector
cp_a = t_a * p_a
cp_b = t_b * p_b
vec_proj = cp_b - cp_a
# vector on the projected plane towards North
e = np.array([-np.cos(90*u.deg-dec_mid)*np.cos(ra_mid),
np.cos(90*u.deg-dec_mid)*np.sin(ra_mid),np.sin(90*u.deg-dec_mid)])
ll = np.sqrt(np.dot(vec_proj,vec_proj))
cos_theta = np.dot(e,vec_proj)/ll
theta = np.arccos(cos_theta)*u.radian
print('PA (intersection angle only; No direction):',theta.to(u.deg))
###Output
a 180.00416666666663 30.0
b 167.49999999999997 0.0
mid 173.3014231030648 deg 15.085223349097275 deg
PA (intersection angle only; No direction): 155.21469941876074 deg
###Markdown
Math for Keck: https://www2.keck.hawaii.edu/inst/common/offset.php (approximation valid only over a very small FoV)
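In equation form, the small-field approximation implemented below is simply

$$\Delta E \approx (\alpha_b - \alpha_a)\cos\bar\delta,\qquad \Delta N \approx \delta_b - \delta_a,\qquad \mathrm{PA} \approx \arctan\!\left(\frac{\Delta E}{\Delta N}\right),\qquad \bar\delta = \tfrac{1}{2}(\delta_a + \delta_b),$$

which drops the projection terms handled explicitly in the tangent-plane calculation above, so it is only reliable for very small separations.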
###Code
ra_kmid = (ra_a + ra_b)/2
dec_kmid = (dec_a + dec_b)/2
east_offset = (ra_b-ra_a) * np.cos(dec_kmid)
north_offset = dec_b-dec_a
pa_keck = np.arctan(east_offset/north_offset).to(u.deg)
print('E and N offset:', east_offset.to(u.arcsec),north_offset.to(u.arcsec))
print('PA (E of N):', pa_keck)
###Output
E and N offset: -43481.2arcsec -108000arcsec
PA (E of N): 21.92987608386546 deg
|
CTA200H_Project/.ipynb_checkpoints/CTA200H_Project-checkpoint.ipynb | ###Markdown
Question 1: CO intensities at redshift 2.8
###Code
# Imports for this cell; MassFunction is assumed to come from the hmf package
from hmf import MassFunction
import numpy as np
import matplotlib.pyplot as plt
from astropy.cosmology import Planck13
from astropy import units as u
from astropy.constants import c, k_B

mf = MassFunction(Mmin = 10.05, Mmax = 12.95, z=2.8, cosmo_model=Planck13)
data_z2p8 = np.loadtxt("z2.8.txt")
h = Planck13.H0.value / 100
H = Planck13.H(2.8)
L_CO_10 = np.interp(mf.m / h, 10**data_z2p8[:,0], data_z2p8[:,2])*u.Lsun
L_CO_21 = np.interp(mf.m / h, 10**data_z2p8[:,0], data_z2p8[:,3])*u.Lsun
L_CO_32 = np.interp(mf.m / h, 10**data_z2p8[:,0], data_z2p8[:,4])*u.Lsun
L_CO_43 = np.interp(mf.m / h, 10**data_z2p8[:,0], data_z2p8[:,5])*u.Lsun
L_CO_54 = np.interp(mf.m / h, 10**data_z2p8[:,0], data_z2p8[:,6])*u.Lsun
f_duty = np.interp(mf.m / h, 10**data_z2p8[:,0], data_z2p8[:,47])
number_density = mf.dndm * h**4 / u.Msun / u.Mpc**3
halo_mass = mf.m * u.Msun / h
v_CO_10 = 115.3*u.GHz
v_CO_21 = 230.5*u.GHz
v_CO_32 = 345.8*u.GHz
v_CO_43 = 461.04*u.GHz
v_CO_54 = 576.27*u.GHz
# plt.scatter(mf.m, L_CO_10)
# plt.xlabel('halo mass (M$_\odot$)')
# plt.ylabel('CO 1-0 luminosity (L$_\odot$)')
#plt.scatter(mf.m, mf.dndm)
#plt.xscale('log')
#plt.yscale('log')
#plt.scatter(mf.m, mf.dndm*L_CO_01)
#plt.xscale('log')
#plt.yscale('log')
# plt.loglog(mf.m, mf.dndm*L_CO_10, '.')
# plt.xlabel('halo mass (M$_\odot$)')
# plt.ylabel('CO 1-0 luminosity (L$_\odot$) ')
Intensity_CO_10 = (c / ((4*np.pi)*v_CO_10*H)) * np.trapz(f_duty*number_density*L_CO_10, halo_mass)
Intensity_CO_21 = (c / ((4*np.pi)*v_CO_21*H)) * np.trapz(f_duty*number_density*L_CO_21, halo_mass)
Intensity_CO_32 = (c / ((4*np.pi)*v_CO_32*H)) * np.trapz(f_duty*number_density*L_CO_32, halo_mass)
Intensity_CO_43 = (c / ((4*np.pi)*v_CO_43*H)) * np.trapz(f_duty*number_density*L_CO_43, halo_mass)
Intensity_CO_54 = (c / ((4*np.pi)*v_CO_54*H)) * np.trapz(f_duty*number_density*L_CO_54, halo_mass)
print('CO 1-0 intensity is', Intensity_CO_10.to(u.Jy))
print('CO 2-1 intensity is', Intensity_CO_21.to(u.Jy))
print('CO 3-2 intensity is', Intensity_CO_32.to(u.Jy))
print('CO 4-3 intensity is', Intensity_CO_43.to(u.Jy))
print('CO 5-4 intensity is', Intensity_CO_54.to(u.Jy))
# T_CO_10 = ((c**3*(1+2.8)**2)/(8*np.pi*k_B*v_CO_10**3*H))*np.trapz(f_duty*number_density*L_CO_10, halo_mass)
def temp(Intensity_CO_J, v_CO_J):
return (Intensity_CO_J*c**2) / (2 * k_B * (v_CO_J**2 / (1+2.8)**2))
print('Temperature of CO 1-0 is', temp(Intensity_CO_10, v_CO_10).to(u.uK))
print('Temperature of CO 2-1 is', temp(Intensity_CO_21, v_CO_21).to(u.uK))
print('Temperature of CO 3-2 is', temp(Intensity_CO_32, v_CO_32).to(u.uK))
print('Temperature of CO 4-3 is', temp(Intensity_CO_43, v_CO_43).to(u.uK))
print('Temperature of CO 5-4 is', temp(Intensity_CO_54, v_CO_54).to(u.uK))
J = np.array([1, 2, 3, 4, 5])
T_CO_10 = temp(Intensity_CO_10, v_CO_10).to(u.uK)
T_CO_21 = temp(Intensity_CO_21, v_CO_21).to(u.uK)
T_CO_32 = temp(Intensity_CO_32, v_CO_32).to(u.uK)
T_CO_43 = temp(Intensity_CO_43, v_CO_43).to(u.uK)
T_CO_54 = temp(Intensity_CO_54, v_CO_54).to(u.uK)
ratio = np.array([1, T_CO_21/T_CO_10, T_CO_32/T_CO_10, T_CO_43/T_CO_10, T_CO_54/T_CO_10])
plt.plot(J, ratio, '.')
plt.xlabel('J')
plt.ylabel('ratio')
log_halo_mass = np.log10(mf.m / h)[::10]
# J from 2,3,4,5,6
def temp_and_Mmin(J, v_CO_J):
T_CO_result = []
for M_min in log_halo_mass:
mf = MassFunction(Mmin = M_min, Mmax = 13.10896251, z=2.8, cosmo_model=Planck13)
L_CO = np.interp(mf.m / h, 10**data_z2p8[:,0], data_z2p8[:,J])*u.Lsun
f_duty = np.interp(mf.m / h, 10**data_z2p8[:,0], data_z2p8[:,47])
number_density = mf.dndm * h**4 / u.Msun / u.Mpc**3
halo_mass = mf.m * u.Msun / h
Intensity_CO = ((c / ((4*np.pi)*v_CO_J*H)) * np.trapz(f_duty*number_density*L_CO, halo_mass)).to(u.Jy)
T_CO = temp(Intensity_CO, v_CO_J).to(u.uK)
T_CO_result.append(T_CO.value)
return T_CO_result
plt.plot(log_halo_mass, temp_and_Mmin(2, v_CO_10), label = 'CO 1-0')
plt.plot(log_halo_mass, temp_and_Mmin(3, v_CO_21), label = 'CO 2-1')
plt.plot(log_halo_mass, temp_and_Mmin(4, v_CO_32), label = 'CO 3-2')
plt.plot(log_halo_mass, temp_and_Mmin(5, v_CO_43), label = 'CO 4-3')
plt.plot(log_halo_mass, temp_and_Mmin(6, v_CO_54), label = 'CO 5-4')
plt.xlabel('halo mass (M$_\odot$)')
plt.ylabel('temperature CO lines ($\mu$K)')
plt.legend()
###Output
_____no_output_____ |
12_04_2022.ipynb | ###Markdown
###Code
l=1
u=100
for i in range(l,u+1):
if i>1:
for j in range(2,i):
if(i%j)==0:
break
else:
print(i)
n=int(input())
for i in range(1,11):
s=n*i
print(n,"*",i,"=",s)
n=int(input())
if n==0:
    print("0 is not a natural number")
sum=0
for i in range(1,n+1):
    sum=sum+i
print("sum of natural numbers of ",n,"is",sum)
l=[1,2,3,4,5,6,7,8.9]
print(l[:])
X = [[12,7,3],
[4 ,5,6],
[7 ,8,9]]
Y = [[5,8,1],
[6,7,3],
[4,5,9]]
result = [[0,0,0],
[0,0,0],
[0,0,0]]
# iterate through rows
for i in range(len(X)):
# iterate through columns
for j in range(len(X[0])):
result[i][j] = X[i][j] + Y[i][j]
for r in result:
print(r)
l=[1,2,2,34]
###Output
_____no_output_____
###Markdown
###Code
low=1
up=199
for i in range(low,up+1):
if i>1:
for j in range(2,i):
if(i%j)==0:
break
else:
print(i)
s=int(input())
for i in range(1,11):
z=s*i
print(s,"*",i,"=",z)
s=int(input())
if s==0:
    print("not natural number")
sum=0
for i in range(1,s+1):
    sum=sum+i
print(sum)
s=[1,2,3,4,5,6,7,88,99]
print(s[:3])
###Output
_____no_output_____ |
models/RandomForest.ipynb | ###Markdown
Load and clean data
###Code
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

#load data
csv_file = 'SEEM_bile_acid_data_patient.csv'
df_patient = pd.read_csv(csv_file, index_col=None)
csv_file2 = 'SEEM_bile_acid_data_control.csv'
df_control = pd.read_csv(csv_file2, index_col = None)
csv_file3 = 'temp.csv'
biopsy_data = pd.read_csv(csv_file3)
#remove 'Barcode','CMS ID','LC/MS code#
df_control = df_control.drop(['Barcode','CMS ID','LC/MS code#'], axis=1)
df_control = df_control.groupby('Patient ID',as_index=False).mean()
df_control['Target'] = 0
#
df_patient = df_patient.drop(['Barcode','CMS ID','LC/MS code#'], axis=1)
df_patient = df_patient.groupby('Patient ID',as_index=False).mean()
df_patient['Target'] = 0
###Output
_____no_output_____
###Markdown
Populate "Reponse" value
###Code
index = []
def find_index(biopsy_data,biomaker_data):
for i in range(0,len(biopsy_data)):
for j in range(0,len(biomaker_data)):
if biomaker_data['Patient ID'].loc[j] == biopsy_data['Patient ID'].loc[i]:
index.append(j)
###Output
_____no_output_____
###Markdown
Populdate "Response" value to patient set
###Code
find_index(biopsy_data,df_patient)
index = set(index)
for val in index:
df_patient['Target'].loc[val] = 1
###Output
C:\Users\lukek\Anaconda3\lib\site-packages\pandas\core\indexing.py:189: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._setitem_with_indexer(indexer, value)
###Markdown
Populdate "Response" value to control set
###Code
index = []
find_index(biopsy_data,df_control)
index = set(index)
for val in index:
df_control['Target'].loc[val] = 1
###Output
_____no_output_____
###Markdown
Random Forest: Random Forest applied to the patient set
###Code
df_patient = df_patient.dropna()
X = df_patient.iloc[:,1:-1]
X = X.reset_index(drop=True)
y = df_patient.iloc[:,-1]
y = y.reset_index(drop=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42)
## This line instantiates the model.
rf = RandomForestClassifier()
## Fit the model on your training data.
rf.fit(X_train, y_train)
## And score it on your testing data.
rf.score(X_test, y_test)
feature_importances = pd.DataFrame(rf.feature_importances_,
index = X_train.columns,
columns=['importance']).sort_values('importance',ascending=False)
feature_importances
###Output
_____no_output_____
###Markdown
Random Forest applied to the control set
###Code
df_control = df_control.dropna()
X = df_control.iloc[:,1:-1]
X = X.reset_index(drop=True)
y = df_control.iloc[:,-1]
y = y.reset_index(drop=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42)
## This line instantiates the model.
rf = RandomForestClassifier()
## Fit the model on your training data.
rf.fit(X_train, y_train)
## And score it on your testing data.
rf.score(X_test, y_test)
feature_importances = pd.DataFrame(rf.feature_importances_,
index = X_train.columns,
columns=['importance']).sort_values('importance',ascending=False)
feature_importances
###Output
_____no_output_____ |
explore/confounding/confounding.ipynb | ###Markdown
Create a logistic regression model to predict several mutations from covariates
###Code
import os
import itertools
import warnings
import collections
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import preprocessing, grid_search
from sklearn.linear_model import SGDClassifier
from sklearn.cross_validation import train_test_split
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from scipy.special import logit
%matplotlib inline
plt.style.use('seaborn-notebook')
###Output
_____no_output_____
###Markdown
Load Data
###Code
path = os.path.join('..', '..', 'download', 'mutation-matrix.tsv.bz2')
Y = pd.read_table(path, index_col=0)
# Read sample information and create a covariate TSV
url = 'https://github.com/cognoma/cancer-data/raw/54140cf6addc48260c9723213c40b628d7c861da/data/covariates.tsv'
covariate_df = pd.read_table(url, index_col=0)
covariate_df.head(2)
###Output
_____no_output_____
###Markdown
Specify the type of classifier
###Code
param_grid = {
'alpha': [10 ** x for x in range(-4, 2)],
'l1_ratio': [0, 0.05, 0.1, 0.2, 0.5, 0.8, 0.9, 0.95, 1],
}
clf = SGDClassifier(
random_state=0,
class_weight='balanced',
loss='log',
penalty='elasticnet'
)
# joblib is used to cross-validate in parallel by setting `n_jobs=-1` in GridSearchCV
# Suppress joblib warning. See https://github.com/scikit-learn/scikit-learn/issues/6370
warnings.filterwarnings('ignore', message='Changing the shape of non-C contiguous array')
clf_grid = grid_search.GridSearchCV(estimator=clf, param_grid=param_grid, n_jobs=-1, scoring='roc_auc')
pipeline = make_pipeline(
StandardScaler(),
clf_grid
)
###Output
_____no_output_____
###Markdown
Specify covariates and outcomes
###Code
def expand_grid(data_dict):
"""Create a dataframe from every combination of given values."""
rows = itertools.product(*data_dict.values())
return pd.DataFrame.from_records(rows, columns=data_dict.keys())
mutations = {
'7157': 'TP53', # tumor protein p53
'7428': 'VHL', # von Hippel-Lindau tumor suppressor
'29126': 'CD274', # CD274 molecule
'672': 'BRCA1', # BRCA1, DNA repair associated
'675': 'BRCA2', # BRCA2, DNA repair associated
'238': 'ALK', # anaplastic lymphoma receptor tyrosine kinase
'4221': 'MEN1', # menin 1
'5979': 'RET', # ret proto-oncogene
}
options = collections.OrderedDict()
options['mutation'] = list(mutations)
binary_options = [
'disease_covariate',
'organ_covariate',
'gender_covariate',
'mutation_covariate',
'survival_covariate'
]
for opt in binary_options:
options[opt] = [0, 1]
option_df = expand_grid(options)
option_df['symbol'] = option_df.mutation.map(mutations)
option_df.head(2)
covariate_to_columns = {
'gender': covariate_df.columns[covariate_df.columns.str.startswith('gender')].tolist(),
'disease': covariate_df.columns[covariate_df.columns.str.startswith('disease')].tolist(),
'organ': covariate_df.columns[covariate_df.columns.str.contains('organ')].tolist(),
'mutation': covariate_df.columns[covariate_df.columns.str.contains('n_mutations')].tolist(),
'survival': ['alive', 'dead'],
}
###Output
_____no_output_____
###Markdown
Compute performance
###Code
def get_aurocs(X, y, series):
"""
Fit the classifier specified by series and add the cv, training, and testing AUROCs.
    series is a row of option_df, which specifies which covariates and mutation
status to use in the classifier.
"""
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
series['positive_prevalence'] = np.mean(y)
pipeline.fit(X=X_train, y=y_train)
y_pred_train = pipeline.decision_function(X_train)
y_pred_test = pipeline.decision_function(X_test)
cv_score_df = grid_scores_to_df(clf_grid.grid_scores_)
series['mean_cv_auroc'] = cv_score_df.score.max()
series['training_auroc'] = roc_auc_score(y_train, y_pred_train)
series['testing_auroc'] = roc_auc_score(y_test, y_pred_test)
return series
def grid_scores_to_df(grid_scores):
"""
Convert a sklearn.grid_search.GridSearchCV.grid_scores_ attribute to
    a tidy pandas DataFrame where each row is a hyperparameter-fold combination.
"""
rows = list()
for grid_score in grid_scores:
for fold, score in enumerate(grid_score.cv_validation_scores):
row = grid_score.parameters.copy()
row['fold'] = fold
row['score'] = score
rows.append(row)
df = pd.DataFrame(rows)
return df
rows = list()
for i, series in option_df.iterrows():
columns = list()
for name, add_columns in covariate_to_columns.items():
if series[name + '_covariate']:
columns.extend(add_columns)
if not columns:
continue
X = covariate_df[columns]
y = Y[series.mutation]
rows.append(get_aurocs(X, y, series))
auroc_df = pd.DataFrame(rows)
auroc_df.sort_values(['symbol', 'testing_auroc'], ascending=[True, False], inplace=True)
auroc_df.head()
auroc_df.to_csv('auroc.tsv', index=False, sep='\t', float_format='%.5g')
###Output
_____no_output_____
###Markdown
Covariate performance by mutation
###Code
# Filter for models which include all covariates
plot_df = auroc_df[auroc_df[binary_options].all(axis='columns')]
plot_df = pd.melt(plot_df, id_vars='symbol', value_vars=['mean_cv_auroc', 'training_auroc', 'testing_auroc'], var_name='kind', value_name='auroc')
grid = sns.factorplot(y='symbol', x='auroc', hue='kind', data=plot_df, kind="bar")
xlimits = grid.ax.set_xlim(0.5, 1)
###Output
_____no_output_____ |
Week 08 Unsupervised Learning/Code Challenges/Day 4 Collaborative Filtering.ipynb | ###Markdown
**Coding Challenge 2** - Collaborative Filtering **Context:** With collaborative filtering, an application can find users with similar tastes, look at items they like, and combine them to create a ranked list of suggestions, which is known as user-based recommendation. It can also find items which are similar to each other and then suggest those items to users based on their past purchases, which is known as item-based recommendation. The first step in this technique is to find users with similar tastes or items which share similarity. There are various similarity models like **Cosine Similarity, Euclidean Distance Similarity and Pearson Correlation Similarity** which can be used to find similarity between users or items. In this coding challenge, you will go through the process of identifying users that are similar (i.e. User Similarity) and items that are similar (i.e. Item Similarity).**User Similarity:** **1a)** Compute "User Similarity" based on the cosine similarity coefficient (fyi, the other commonly used similarity coefficients are the Pearson Correlation Coefficient and Euclidean distance). **1b)** Based on the cosine similarity coefficient, identify 2 users who are similar and then discover common movie names that have been rated by the 2 users; examine how the similar users have rated the movies. **Item Similarity:** **2a)** Compute "Item Similarity" based on the Pearson Correlation Similarity Coefficient. **2b)** Pick 2 movies and find movies that are similar to the movies you have picked. **Challenges:** **3)** According to you, do you foresee any issue(s) associated with Collaborative Filtering? **Dataset:** For the purposes of this challenge, we will leverage the data set accessible via https://grouplens.org/datasets/movielens/ The data set is posted under the section ***recommended for education and development*** and we will stick to the small version of the data set with 100,000 ratings.
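As a warm-up for parts 1a and 2a, the similarity coefficients mentioned above are easy to compute on a toy pair of rating vectors. This is a minimal, self-contained sketch; the two vectors are made up for illustration and are unrelated to the MovieLens data.

```python
import numpy as np
from scipy.spatial.distance import cosine
from scipy.stats import pearsonr

# Ratings of the same five movies by two hypothetical users (0 = not rated).
user_a = np.array([5.0, 4.0, 0.0, 3.0, 1.0])
user_b = np.array([4.0, 5.0, 1.0, 3.0, 0.0])

cosine_similarity = 1 - cosine(user_a, user_b)    # scipy returns cosine *distance*
pearson_similarity, _ = pearsonr(user_a, user_b)  # linear correlation of the ratings

print('cosine similarity :', round(cosine_similarity, 3))
print('pearson similarity:', round(pearson_similarity, 3))
```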
###Code
import zipfile
import pandas as pd
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import pdist, squareform
! wget 'http://files.grouplens.org/datasets/movielens/ml-latest-small.zip'
folder = zipfile.ZipFile('ml-latest-small.zip')
folder.infolist()
ratings = pd.read_csv(folder.open('ml-latest-small/ratings.csv'))
movies = pd.read_csv(folder.open('ml-latest-small/movies.csv'))
display(ratings.head())
display(movies.head())
###Output
_____no_output_____
###Markdown
User Similarity
###Code
ratings_pivot = pd.pivot_table(ratings.drop('timestamp', axis=1),
index='userId', columns='movieId',
aggfunc=np.max).fillna(0)
print(ratings_pivot.shape)
ratings_pivot.head()
distances = pdist(ratings_pivot.as_matrix(), 'cosine')
squareform(distances)
###Output
_____no_output_____
###Markdown
Since pdist calculates $1 - \frac{u\cdot v}{|u||v|}$ instead of cosine similarity, I will have to subtract the result from 1.
###Code
similarities = squareform(1-distances)
print(similarities.shape)
similarities
ix = np.unravel_index(np.argmax(similarities), similarities.shape)
print(ix)
print(similarities[ix])
###Output
(150, 368)
0.8453008752801064
###Markdown
Users 151 and 369 appear to be similar, with a cosine similarity of 0.84
###Code
print('Common movies rated')
display(ratings_pivot.iloc[[150, 368], :].T[(ratings_pivot.iloc[150]>0)
& (ratings_pivot.iloc[368]>0)])
###Output
Common movies rated
###Markdown
Item Similarity
###Code
correlations = squareform(1-pdist(ratings_pivot.as_matrix().T, 'correlation'))
correlations
np.argsort(correlations[0])[::-1]
correlations[0][np.argsort(correlations[0])[::-1]]
movies.head()
###Output
_____no_output_____
###Markdown
I will see which movies correlate the most with "Toy Story" and "Jumanji."
###Code
np.argsort(correlations[1])[::-1][:5] + 1
def most_correlated_movies(movieId, corr_matrix, n=5):
ix = movieId - 1
return np.argsort(correlations[ix])[::-1][:n] + 1
toy_story_similar = most_correlated_movies(1, correlations)
movies[movies['movieId'].isin(toy_story_similar)]
jumanji_similar = most_correlated_movies(2, correlations)
movies[movies['movieId'].isin(jumanji_similar)]
###Output
_____no_output_____
###Markdown
It seems that there are fewer movies in the DataFrame matching IDs to titles, so not every movie ID found by the `most_correlated_movies` function corresponds to a named entry.
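A quick check, not part of the original challenge, makes this visible by comparing the similar-movie IDs found above against the IDs that actually appear in the `movies` table:

```python
# IDs returned by the correlation step that have no matching title row.
known_ids = set(movies['movieId'])
print('Toy Story matches without a title:', set(toy_story_similar) - known_ids)
print('Jumanji matches without a title:  ', set(jumanji_similar) - known_ids)
```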
###Code
movies.shape
###Output
_____no_output_____ |
Connecting Site and Geospatial data.ipynb | ###Markdown
USAID Sites and Geospatial Intelligence This notebook covers the creation of the geospatial data
###Code
import os
import pandas as pd
import numpy as np
#os.mkdir("geospatial_data")
%matplotlib inline
# Load the site data provided by USAID
site_data = pd.read_csv("final_data/service_delivery_site_data.csv")
site_data.head()
import geopandas as gpd
# Use the data provided by UN Geospatial Repsitory
gdf = gpd.read_file('civ_admbnda_adm3_cntig_ocha_itos_20180706/civ_admbnda_adm3_cntig_ocha_itos_20180706.shp')
gdf.head()
###Output
_____no_output_____
###Markdown
Let's start by trying to find a shared column to match on
###Code
districts = site_data['site_district'].unique()
print(len(districts))
districts.sort()
districts
site_data.head()
###Output
_____no_output_____
###Markdown
'ABOBO-EST' is a neighborhood in Abidjan. Match city to district, then aggregate at the district level. Some manual analysis is needed to figure this whole matching out. Data Processing: observe there is a missing "I", so you might think the two will match on ADM2_PCODE, but this is a mistake. **There is no relationship between the two.**
```
site_data['site_code'].head()
gdf['ADM2_PCODE'].head()
# Insert String
ins_char = lambda x: x[0:1]+"I"+x[1:]
site_data['ADM2_PCODE'] = site_data['site_code'].apply(ins_char)
```
Data Processing: the codes are not matching up between the two dataframes. On inspection we can see there is an "I" missing; let's try to add that and see if that fixes things.
###Code
print("Num sites: ", len(site_data))
print("Num boundary shapes at ADM3: ",len(gdf))
###Output
Num sites: 156
Num boundary shapes at ADM3: 510
###Markdown
Geospatial Data Regional Data: Wikipedia data; data from the UN Geospatial Data Repository. Notes: https://en.wikipedia.org/wiki/Subdivisions_of_Ivory_Coast https://www.youtube.com/watch?v=6pYorKr3XFQ&ab_channel=AlJazeeraEnglish https://www.youtube.com/watch?v=O1_wpzPX7C8&ab_channel=FRANCE24English https://fr.wikipedia.org/wiki/R%C3%A9gions_de_C%C3%B4te_d%27Ivoire
###Code
from io import StringIO
# Taken from this Wikipedia Page
# https://fr.wikipedia.org/wiki/R%C3%A9gions_de_C%C3%B4te_d%27Ivoire
wikipedia_table = """
District Chef-lieu de district Région Chef-lieu de région
Zanzan Bondoukou Bounkani Bouna
Zanzan Bondoukou Gontougo Bondoukou
Yamoussoukro (district autonome) — — —
Woroba Séguéla Béré Mankono
Woroba Séguéla Bafing Touba
Woroba Séguéla Worodougou Séguéla
Vallée du Bandama Bouaké Hambol Katiola
Vallée du Bandama Bouaké Gbêkê Bouaké
Savanes Korhogo Poro Korhogo
Savanes Korhogo Tchologo Ferkessédougou
Savanes Korhogo Bagoué Boundiali
Sassandra-Marahoué Daloa Haut-Sassandra Daloa
Sassandra-Marahoué Daloa Marahoué Bouaflé
Montagnes Man Tonkpi Man
Montagnes Man Cavally Guiglo
Montagnes Man Guémon Duékoué
Lagunes Dabou Agnéby-Tiassa Agboville
Lagunes Dabou Mé Adzopé
Lagunes Dabou Grands Ponts Dabou
Lacs Dimbokro N’Zi Dimbokro
Lacs Dimbokro Iffou Daoukro
Lacs Dimbokro Bélier Toumodi
Lacs Dimbokro Moronou Bongouanou
Gôh-Djiboua Gagnoa Gôh Gagnoa
Gôh-Djiboua Gagnoa Lôh-Djiboua Divo
Denguélé Odienné Folon Minignan
Denguélé Odienné Kabadougou Odienné
Comoé Abengourou Indénié-Djuablin Abengourou
Comoé Abengourou Sud-Comoé Aboisso
Bas-Sassandra San-Pédro Nawa Soubré
Bas-Sassandra San-Pédro San-Pédro San-Pédro
Bas-Sassandra San-Pédro Gbôklé Sassandra
Abidjan (district autonome) — — —"""
wiki_region_mappings = pd.read_csv(StringIO(wikipedia_table),sep="\t")
gdf.groupby(['ADM1_FR','ADM2_FR']).size().sort_values(ascending =False)
###Output
_____no_output_____
###Markdown
Site Data
###Code
site_data.head()
site_data.groupby(['site_region','site_district'])['site_code'].size().sort_values(ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Analysis of Site Data Boundary Structure: the data is organized by Region and then by Site. This is contrary to the way that Cote d'Ivoire organizes itself, which is by: 1. District 2. Region 3. Department 4. Village 5. Commune. The USAID dataset seems to be presented as: 1. Region: regions, or multiple regions combined under one administrative boundary 2. District: regions and departments. Fuzzy matching of regional names in the dataset: calculate the character-level lexical similarity of the strings to determine the best matches.
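To make the scoring concrete before running it over the full tables, here is a minimal example of the `fuzz.partial_ratio` scorer used below; the two region labels are illustrative strings of the kind found in these tables.

```python
from fuzzywuzzy import fuzz

# partial_ratio scores the best-matching substring alignment from 0 to 100,
# so a short official name embedded in a longer USAID label scores 100.
print(fuzz.partial_ratio('AGNEBY-TIASSA', 'AGNEBY-TIASSA-ME'))  # exact substring match
print(fuzz.partial_ratio('GBEKE', 'GBOKLE-NAWA-SAN PEDRO'))     # weaker match, lower score
```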
###Code
from fuzzywuzzy import fuzz
from fuzzywuzzy import process
def get_fuzzy_match_results(ref_array,custom_array):
# Create a dictionary to hold all the string matching calculations
custom_mapping = {}
# Lambda functions to extract the match name from the tuple
get_match_name = lambda x: match_dict[x][0]
# Lambda functions to extract the match score from the tuple
get_match_score = lambda x: match_dict[x][1]
# Create_Reference_table
reference_table = pd.DataFrame({'ref':ref_array})
# Iterate over every commune name in the reference table
for custom_label in custom_array:
# Skip values if not string
if type(custom_label) == str:
ffuzzy_match = lambda x : fuzz.partial_ratio(custom_label,x)
reference_table[custom_label] = reference_table['ref'].apply(ffuzzy_match)
#reference_table[custom_label] = fuzzy_ratios
ref_max_value = reference_table[custom_label].max()
# Identify the record that has the highest score with the provided custom_label
matching_recs = reference_table.loc[reference_table[custom_label]==ref_max_value]
# If there are two communes that have an equal score, select the first value
if len(matching_recs)>1:
#print("Multiple matches: ",custom_label)
match_site_name = matching_recs['ref'].values[0]
else:
match_site_name = matching_recs['ref'].values[0]
# Update the match_dict with a tuple to store the final string and its corresponding score
custom_mapping[custom_label] = {'est_site_name':match_site_name,'fratio':ref_max_value}
else:
custom_mapping[custom_label] = None
mapping_df = pd.DataFrame(custom_mapping).transpose()
ref_df = pd.DataFrame(reference_table).set_index('ref')
return mapping_df,ref_df
###Output
_____no_output_____
###Markdown
Create a series of vectors to perform fuzzy wuzzy matching to create mappings
###Code
import seaborn as sns; sns.set()
## USAID
# Isolate USAID Region values
usaid_civ_site_region = site_data['site_region'].str.upper().unique()
#print(len(usaid_civ_site_region))
#print(usaid_civ_site_region)
# Isolate USAID District values
usaid_civ_site_district = site_data['site_district'].str.upper().unique()
#print(len(usaid_civ_site_district))
#print(usaid_civ_site_district)
## Wikipedia
# Isolate Wikipedia Region labels
wiki_admn_region = wiki_region_mappings['Région'].str.upper().unique()
#print(len(wiki_admn_region))
#print(wiki_admn_region)
# Isolate Wikipedia Department labels
wiki_admn_dept = wiki_region_mappings['Chef-lieu de région'].str.upper().unique()
print(len(wiki_admn_dept))
#print(wiki_admn_dept)
## UN Geospatial
# Isolate UN ADM1_FR labels
geospatial_admn_1 = gdf['ADM1_FR'].str.upper().unique()
#print(len(geospatial_admn_1))
#print(geospatial_admn_1)
# Isolate UN ADM3_FR labels
geospatial_admn_2 = gdf['ADM2_FR'].str.upper().unique()
print(len(geospatial_admn_2))
#print(geospatial_admn_2)
# Isolate UN ADM4_FR labels
geospatial_admn_3 = gdf['ADM3_FR'].str.upper().unique()
print(len(geospatial_admn_3))
#print(geospatial_admn_3)
import matplotlib.pyplot as plt
def make_fuzzy_matching_evaluation(ref,labels,ref_name='x',label_name='y'):
mapping ,ref_table = get_fuzzy_match_results(ref,labels)
mapping['fratio'] = pd.to_numeric(mapping['fratio'])
plt.subplots(figsize=(8,6))
    sns.heatmap(ref_table)
plt.title(f"{ref_name} - {len(ref)} vs {label_name} - {len(labels)}",fontdict={'fontsize':20})
plt.show()
print(mapping.fratio.describe())
return mapping ,ref_table
wu_reg_reg_map, wu_reg_reg_tbl = make_fuzzy_matching_evaluation(wiki_admn_region,usaid_civ_site_region,'Wikipedia-Regions','USAID-Regions')
###Output
_____no_output_____
###Markdown
Wikipedia Regions vs USAID Regions: first we capitalize the values, and several more high fuzzy-ratio scores appear. Initially the results were very weak: 'Me' was the highest match for all of the initial values. *Let's try Wikipedia regions vs USAID districts...*
###Code
wu_reg_dis_map, wu_reg_dis_tbl = make_fuzzy_matching_evaluation(
wiki_admn_region,
usaid_civ_site_district,'Wikipedia-Regions','USAID-Districts')
###Output
_____no_output_____
###Markdown
Results: Wikipedia-Regions vs USAID-Districts: great results, starting to see many matches over 80. *Let's try Wikipedia departments with USAID districts...*
###Code
wu_dep_dis_map, wu_dep_dis_tbl = make_fuzzy_matching_evaluation(
wiki_admn_dept,
usaid_civ_site_district,'Wikipedia-Departments','USAID-Districts')
###Output
_____no_output_____
###Markdown
Matching with the geospatial data and making custom maps: ADM1_FR
###Code
# Compare ADM1_FR and USAID's Region Codes
ug_reg_dis_map, ug_reg_dis_tbl = make_fuzzy_matching_evaluation(
usaid_civ_site_region,
geospatial_admn_1,'USAID-Regions','UN Geospatial ADM1_FR')
# Using the matches from fuzzy model as a base with ADM1_FR and USAID Region
# Region vs Region
# This will drop one region from the reference index
# Drop Yamoussoukro
civ_strong_matches = ug_reg_dis_map[ug_reg_dis_map.fratio>50]
print(len(civ_strong_matches))
print(civ_strong_matches)
civ_region_adm1_mapping = {index:data['est_site_name'] for index, data in civ_strong_matches.iterrows()}
###Output
32
est_site_name fratio
INDENIE-DJUABLIN INDENIE-DJUABLIN 100
DISTRICT AUTONOME D'ABIDJAN ABIDJAN 2 88
N'ZI N'ZI-IFOU-MORONOU 100
SUD-COMOE SUD-COMOE 100
ME AGNEBY-TIASSA-ME 100
AGNEBY-TIASSA AGNEBY-TIASSA-ME 100
GRANDS PONTS ABIDJAN 1-GRANDS PONTS 100
IFFOU N'ZI-IFOU-MORONOU 80
GONTOUGO BOUNKANI-GONTOUGO 100
MORONOU N'ZI-IFOU-MORONOU 100
GBEKE GBEKE 100
BELIER BELIER 100
HAMBOL HAMBOL 100
GUEMON CAVALLY-GUEMON 100
PORO PORO-TCHOLOGO-BAGOUE 100
KABADOUGOU KABADOUGOU-BAFING-FOLON 100
CAVALLY CAVALLY-GUEMON 100
TONKPI TONKPI 100
BAGOUE PORO-TCHOLOGO-BAGOUE 100
GOH GOH 100
HAUT-SASSANDRA HAUT-SASSANDRA 100
MARAHOUE MARAHOUE 100
TCHOLOGO PORO-TCHOLOGO-BAGOUE 100
WORODOUGOU WORODOUGOU-BERE 100
BOUNKANI BOUNKANI-GONTOUGO 100
BAFING KABADOUGOU-BAFING-FOLON 100
BERE WORODOUGOU-BERE 100
NAWA GBOKLE-NAWA-SAN PEDRO 100
LOH-DJIBOUA LOH-DJIBOUA 100
GBOKLE GBOKLE-NAWA-SAN PEDRO 100
SAN PEDRO GBOKLE-NAWA-SAN PEDRO 100
FOLON KABADOUGOU-BAFING-FOLON 100
###Markdown
Let's compare ADM2_FR and the USAID District. In the past example we used 50 as our cut-off criterion; now that we have stronger matches we will increase it to 75. ADM2_FR
###Code
# Compare ADM2_FR and USAID's District codes
gu_dep_dis_map, gu_dep_dis_tbl = make_fuzzy_matching_evaluation(
    geospatial_admn_2,
    usaid_civ_site_district,'UN Geospatial ADM2_FR','USAID-Districts')
strong_matches = gu_dep_dis_map[gu_dep_dis_map.fratio>50]
print("Strong matches")
print(len(strong_matches))
print(strong_matches.sort_values(by='fratio',ascending=False).head(10))
weak_matches = gu_dep_dis_map[gu_dep_dis_map.fratio<70]
print("Weak matches")
print(len(weak_matches))
print(weak_matches)
# Drop values with a bad mapping
# Using the weak matches we can identify the worst performing matches
# We want to keep the last records though, it seems the accent is being interpretted poorly
drop_indices = weak_matches.index[:-1]
civ_dist_adm2_mapping = {index:data['est_site_name'] for index, data in gu_dep_dis_map.drop(drop_indices).iterrows()}
###Output
_____no_output_____
###Markdown
ADM3_FR: Let's compare ADM3_FR and the USAID District. I have already pruned the matches by evaluating weak matches.
###Code
geospatial_admn_3 = gdf['ADM3_FR'].str.upper().unique()
print(len(geospatial_admn_3))
#print(civ_admn_dept)
print(len(usaid_civ_site_district))
# Use the fuzzy-matching helper defined above (it returns the table already indexed by 'ref')
site_dist_geo_admn3_res, site_dist_geo_admn3_tbl = get_fuzzy_match_results(geospatial_admn_3, usaid_civ_site_district)
sns.heatmap(site_dist_geo_admn3_tbl)
# Compare ADM3_FR and USAID's District codes
gu_dep3_dis_map, gu_dep3_dis_tbl = make_fuzzy_matching_evaluation(
    geospatial_admn_3,
    usaid_civ_site_district,'UN Geospatial ADM3_FR','USAID-Districts')
strong_matches = gu_dep3_dis_map[gu_dep3_dis_map.fratio>50]
print("Strong matches")
print(len(strong_matches))
print(strong_matches.sort_values(by='fratio',ascending=False).head(10))
weak_matches = gu_dep3_dis_map[gu_dep3_dis_map.fratio<70]
print("Weak matches")
print(len(weak_matches))
print(weak_matches)
# Drop values with a bad mapping
# Using the weak matches we can identify the worst performing matches
drop_indices = weak_matches.index[:-1]
civ_dist_adm3_mapping = {index:data['est_site_name'] for index, data in gu_dep3_dis_map.drop(drop_indices).iterrows()}
###Output
506
81
###Markdown
Let's apply the mappings: starting from region, create a mapping to map each value in the USAID data set with our pruned mappings
###Code
site_data['adm3_fr'] = site_data['site_district'].map(civ_dist_adm3_mapping)
site_data['adm2_fr'] = site_data['site_district'].map(civ_dist_adm2_mapping)
###Output
_____no_output_____
###Markdown
Apply the mappings to the geospatial data- Because of how the USAID regions incorporated multiple existing geospatial boundaries, I applied the USAID regional mapping onto the existing ADM1_FR names in order to gain access to map projections with those groupings.
###Code
gdf['usaid_admin_region'] = gdf['ADM1_FR'].str.upper().map(civ_region_adm1_mapping)
gdf.to_file("geospatial_data/Custom_CIV.shp")
gdf['usaid_admin_region'].head()
###Output
_____no_output_____ |
notebooks/ShakespeareanText_Generator.ipynb | ###Markdown
###Code
import tensorflow as tf
from tensorflow import keras
import numpy as np
shakespeare_url = "https://homl.info/shakespeare"
filepath = keras.utils.get_file("shakespeare.txt", shakespeare_url)
with open(filepath) as f:
shakespeare_text = f.read()
###Output
_____no_output_____
###Markdown
Encoding using Tokenizer:
###Code
print(shakespeare_text[:148])
tokenizer = keras.preprocessing.text.Tokenizer(char_level=True)
tokenizer.fit_on_texts(shakespeare_text)
tokenizer.texts_to_sequences(["HIIII", "hiiii", "Hey there"])
tokenizer.sequences_to_texts([[20, 6, 9, 3, 4]])
max_id = len(tokenizer.word_index) # no.of distinct characters
dataset_size = tokenizer.document_count
print(max_id, dataset_size)
[encoded] = np.array(tokenizer.texts_to_sequences([shakespeare_text])) - 1
print(encoded)
###Output
[19 5 8 ... 20 26 10]
###Markdown
Splitting a Sequential Dataset:
###Code
train_size = dataset_size * 90 //100
dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size])
for item in dataset.take(10):
print(item)
n_steps = 100
window_length = n_steps + 1
dataset = dataset.window(window_length, shift=1, drop_remainder=True)
for item in dataset.take(1):
print(item)
dataset = dataset.flat_map(lambda window: window.batch(window_length))
for item in dataset.take(1):
print(item)
batch_size = 32
dataset = dataset.shuffle(10000).batch(batch_size)
for item in dataset.take(1):
print(item)
dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))
for item in dataset.take(1):
print(item)
dataset = dataset.map(lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))
for X, y in dataset.take(1):
print(X.shape, y.shape)
dataset = dataset.prefetch(1)
###Output
_____no_output_____
###Markdown
Building Model:
###Code
shakespearean_model = keras.models.Sequential([
keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id],
dropout=0.2),#, recurrent_dropout=0.2),
keras.layers.GRU(128, return_sequences=True,
dropout=0.2),#, recurrent_dropout=0.2),
keras.layers.TimeDistributed(keras.layers.Dense(max_id, activation="softmax")),
])
shakespearean_model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.RMSprop(4e-4),
metrics=["accuracy"])
history = shakespearean_model.fit(dataset, epochs=10)
###Output
Epoch 1/10
31368/31368 [==============================] - 358s 11ms/step - loss: 1.8901 - accuracy: 0.4375
Epoch 2/10
31368/31368 [==============================] - 355s 11ms/step - loss: 1.6282 - accuracy: 0.5018
Epoch 3/10
31368/31368 [==============================] - 355s 11ms/step - loss: 1.5891 - accuracy: 0.5119
Epoch 4/10
18094/31368 [================>.............] - ETA: 2:29 - loss: 1.5731 - accuracy: 0.5157
###Markdown
Predicting a Character:
###Code
def preprocess(texts):
X = np.array(tokenizer.texts_to_sequences(texts)) - 1
return tf.one_hot(X, max_id)
X_new = preprocess(["How are yo"])
Y_pred = shakespearean_model.predict_classes(X_new)
tokenizer.sequences_to_texts(Y_pred + 1)[0][-1]
###Output
_____no_output_____
###Markdown
Predicting multilpe characters:
###Code
def next_char(text, temperature=1):
X_new = preprocess([text])
y_proba = shakespearean_model.predict(X_new)[0, -1:, :]
rescaled_logits = tf.math.log(y_proba) / temperature
char_id = tf.random.categorical(rescaled_logits, num_samples=1) + 1
return tokenizer.sequences_to_texts(char_id.numpy())[0]
def complete_text(text, n_chars=50, temperature=1):
for _ in range(n_chars):
text += next_char(text, temperature)
return text
print(complete_text("t", temperature=0.2))
print(complete_text("a", temperature=0.5))
print(complete_text("s", temperature=1))
print(complete_text("r", temperature=2))
###Output
_____no_output_____
###Markdown
Stateful RNN: François Chollet gives this definition of STATEFULNESS: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. By default, Keras shuffles (permutes) the samples in X and the dependencies between Xi and Xi+1 are lost. Let's assume there's no shuffling in our explanation. If the model is stateless, the cell states are reset at each sequence. With the stateful model, all the states are propagated to the next batch. It means that the state of the sample located at index i, Xi, will be used in the computation of the sample Xi+bs in the next batch, where bs is the batch size (no shuffling).
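To see why the dataset below is built from `batch_size` contiguous parts, here is a tiny illustration of the bookkeeping that statefulness relies on (toy numbers, unrelated to the Shakespeare text): sample `i` of one batch must be the direct continuation of sample `i` of the previous batch.

```python
import numpy as np

toy_encoded = np.arange(12)                 # stand-in for the encoded character stream
toy_parts = np.array_split(toy_encoded, 3)  # batch_size = 3 contiguous sub-sequences

# Batch 0 holds the first window of each part, batch 1 the next window, and so on,
# so row i of batch 1 picks up exactly where row i of batch 0 stopped.
for step in range(2):
    batch = [part[step * 2:(step + 1) * 2].tolist() for part in toy_parts]
    print('batch', step, ':', batch)
```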
###Code
batch_size = 32
encoded_parts = np.array_split(encoded[:train_size], batch_size)
datasets = []
for encoded_part in encoded_parts:
dataset = tf.data.Dataset.from_tensor_slices(encoded_part)
dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_length))
datasets.append(dataset)
dataset = tf.data.Dataset.zip(tuple(datasets)).map(lambda *windows: tf.stack(windows))
dataset = dataset.repeat().map(lambda windows: (windows[:, :-1], windows[:, 1:]))
dataset = dataset.map(
lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))
dataset = dataset.prefetch(1)
stateful_model = keras.models.Sequential([
keras.layers.GRU(128, return_sequences=True, stateful=True,
dropout=0.2, recurrent_dropout=0.2,
batch_input_shape=[batch_size, None, max_id]),
keras.layers.GRU(128, return_sequences=True, stateful=True,
dropout=0.2, recurrent_dropout=0.2),
keras.layers.TimeDistributed(keras.layers.Dense(max_id,
activation="softmax"))
])
class ResetStatesCallback(keras.callbacks.Callback):
def on_epoch_begin(self, epoch, logs):
self.model.reset_states()
stateful_model.compile(loss="sparse_categorical_crossentropy",
optimizer="adam")
steps_per_epoch = train_size // batch_size // n_steps
stateful_model.fit(dataset, steps_per_epoch=steps_per_epoch,
epochs=50, callbacks=[ResetStatesCallback()])
###Output
_____no_output_____ |
notebooks/DistROOT.ipynb | ###Markdown
 **DistROOT: CMS Example Notebook** Get user credentials.
###Code
import getpass
import os, sys
krb5ccname = '/tmp/krb5cc_' + os.environ['USER']
print("Please enter your password")
ret = os.system("echo \"%s\" | kinit -c %s" % (getpass.getpass(), krb5ccname))
if ret == 0: print("Credentials created successfully")
else: sys.stderr.write('Error creating credentials, return code: %s\n' % ret)
###Output
Please enter your password
········
Credentials created successfully
###Markdown
Import Spark modules.
###Code
from pyspark import SparkConf, SparkContext
###Output
_____no_output_____
###Markdown
Create Spark configuration and context.
###Code
conf = SparkConf()
# Generic for SWAN-Spark prototype
conf.set('spark.driver.host', os.environ['SERVER_HOSTNAME'])
conf.set('spark.driver.port', os.environ['SPARK_PORT_1'])
conf.set('spark.fileserver.port', os.environ['SPARK_PORT_2'])
conf.set('spark.blockManager.port', os.environ['SPARK_PORT_3'])
conf.set('spark.ui.port', os.environ['SPARK_PORT_4'])
conf.set('spark.master', 'yarn')
# DistROOT specific
conf.setAppName("ROOT")
conf.set('spark.executor.extraLibraryPath', os.environ['LD_LIBRARY_PATH'])
conf.set('spark.submit.pyFiles', os.environ['HOME'] + '/.local/lib/python2.7/site-packages/DistROOT.py')
conf.set('spark.executorEnv.KRB5CCNAME', krb5ccname)
conf.set('spark.yarn.dist.files', krb5ccname + '#krbcache')
# Resource allocation
conf.set('spark.executor.instances', 4)
conf.set('spark.driver.memory', '2g')
sc = SparkContext(conf = conf)
###Output
_____no_output_____
###Markdown
Import DistROOT.
###Code
import ROOT
from DistROOT import DistTree
###Output
Welcome to JupyROOT 6.11/01
###Markdown
Define the mapper and reducer functions.
###Code
def fillCMS(reader):
import ROOT
ROOT.TH1.AddDirectory(False)
ROOT.gInterpreter.Declare('#include "file.h"')
myAnalyzer = ROOT.wmassAnalyzer(reader)
return myAnalyzer.GetHistosList()
def mergeCMS(l1, l2):
for i in xrange(l1.GetSize()):
l1.At(i).Add(l2.At(i))
return l1
###Output
_____no_output_____
###Markdown
Build the DistTree and trigger the parallel processing.
###Code
files = [ "data.root",
"data2.root" ]
dTree = DistTree(filelist = files,
treename = "random_test_tree",
npartitions = 8)
histList = dTree.ProcessAndMerge(fillCMS, mergeCMS)
###Output
_____no_output_____
###Markdown
Store resulting histograms in a file.
###Code
f = ROOT.TFile("output.root", "RECREATE")
for h in histList:
h.Write()
f.Close()
###Output
_____no_output_____
###Markdown
Draw one of the histograms we filled using Spark and ROOT.
###Code
c = ROOT.TCanvas()
histList[0].Draw()
c.Draw()
###Output
_____no_output_____ |
notebooks/Introduction_Tutorial.ipynb | ###Markdown
Introduction to ICESat-2 Surface Velocity Calculations This notebook is meant to introduce the processing flow for a simple along-track velocity calculation using repeat cycles of ICESat-2 elevation profiles. The notebook covers: 1. Setting up the IS2_velocity library 2. Loading elevation data from an hdf5 file using the built-in reader function. 3. Smoothing and differentiating the elevation profile. 4. Correlating the differentiated profile to calculate surface velocities.
###Code
# Import the basic libraries
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Library Setup In order to run the IS2_velocity scripts as a python library, you must first: 1. Download or clone the repository at https://github.com/ICESAT-2HackWeek/IS2_velocity.git 2. Install the dependencies including numpy, scipy, h5py, astropy, icepyx, and the ICESat-2 pointCollection library. 3. Go to the home directory in our repository and run 'python setup.py install'. If you successfully run the setup.py script, you should be able to run the cell below.
###Code
# As an example, import a function from the ICESat-2 surface velocity library
from IS2_velocity.correlation_processing import calculate_velocities
help(calculate_velocities)
###Output
_____no_output_____
###Markdown
Velocity calculation: Control correlations
###Code
# Import functions for the velocity calculation; correlate all the beams from one set of repeat ground tracks, rgt = 0848
from IS2_velocity.correlation_processing import calculate_velocities
### Select rgt for now
rgt = '0848'
### Control the correlation step:
segment_length = 2000 # meters, how wide is the window we are correlating in each step
search_width = 1000 # meters, how far in front of and behind the window to check for correlation
along_track_step = 100 # meters; how much to jump between each consecutive velocity determination
max_percent_nans = 10 # Maximum % of segment length that can be nans and still do the correlation step
### Which product
product = 'ATL06'
if product == 'ATL06':
dx = 20
### Select filter type and required arguments; Currently only this running mean is supported
filter_type = 'running_average'
running_avg_window = 100 # meters
###Output
_____no_output_____
###Markdown
Velocity calculation: Load Data / Import dictionaries
###Code
from IS2_velocity.readers import load_data_by_rgt
# atl06_to_dict is within the function load_data_by_rgt
# path to data, relative to folder /notebooks
data_dir = '../data/'
rgt = '0848'
# Load data; This step loads raw data, interpolates to constant spacing, filters if requested, and
# differentiates
filter_type = 'running_average'
running_avg_window = 100
x_atc, lats, lons, h_li_raw, h_li_raw_NoNans, h_li, h_li_diff, times, min_seg_ids, \
segment_ids, cycles_this_rgt, x_ps, y_ps = \
load_data_by_rgt(rgt = rgt, path_to_data = data_dir, product = 'ATL06', \
filter_type = filter_type, running_avg_window = running_avg_window, \
format = 'hdf5')
###Output
_____no_output_____
###Markdown
Visualize one of the beams
###Code
# Plot the landice elevation along the pass.
cycle1='03'
cycle2='04'
beam='gt1l'
plt.figure(figsize=(8,4))
plt.plot(x_atc[cycle1][beam]/1000.,h_li[cycle1][beam],c='indianred')
plt.plot(x_atc[cycle2][beam]/1000.,h_li[cycle2][beam],c='steelblue')
plt.ylabel('Elevation (m)')
plt.xlabel('Along-Track Distance (km)')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Velocity calculation: Calculate velocity between cycles 03 and 04
###Code
from IS2_velocity.correlation_processing import calculate_velocities
# Calculate velocity between cycles 3 and 4
cycle1 = '03'
cycle2 = '04'
beams = ['gt1l','gt1r','gt2l','gt2r','gt3l','gt3r']
saving = True
write_out_path = '.'
write_out_prefix = ''
spatial_extent = np.array([-65, -86, -55, -81])
map_data_root = '/Users/grace/Dropbox/Cornell/projects/003/FIS_data/'
velocities, correlations, lags, midpoints_x_atc, midpoints_xy, midpoints_lons, midpoints_lats = \
calculate_velocities(rgt, x_atc, h_li_raw, h_li_diff, lats, lons, segment_ids, times, beams, cycle1, cycle2, \
product, segment_length, search_width, along_track_step, max_percent_nans, dx, saving = True, \
write_out_path = write_out_path, prepend = write_out_prefix,spatial_extent = spatial_extent, \
map_data_root = map_data_root)
###Output
_____no_output_____
###Markdown
Velocity calculation: Visualize result for one beam
###Code
from matplotlib.gridspec import GridSpec
beam = 'gt1l'
x1 = x_atc['03'][beam]
x2 = x_atc['04'][beam]
h1 = h_li['03'][beam]
h2 = h_li['04'][beam]
dh1 = h_li_diff['03'][beam]
dh2 = h_li_diff['04'][beam]
vel_xs = midpoints_x_atc[rgt][beam]
velocs = velocities[rgt][beam]
plt.figure(figsize=(8,4))
gs = GridSpec(2,2)
# Plot the elevation profiles again
plt.subplot(gs[0,0])
plt.tick_params(bottom=False,labelbottom=False)
plt.plot(x1/1000.-29000,h1,'.',c='indianred')
plt.plot(x2/1000.-29000,h2,'.',c='steelblue',ms=3)
plt.ylabel('Elevation (m)')
plt.title('ATL06',fontweight='bold')
plt.xlim(80,580)
# Plot the slopes again
plt.subplot(gs[1,0])
plt.tick_params(bottom=False,labelbottom=False)
plt.plot(x1/1000.-29000,dh1,'.',c='indianred')
plt.plot(x2/1000.-29000,dh2,'.',c='steelblue',ms=3)
plt.ylim(-.05,.05)
plt.ylabel('Surface Slope (m/m)')
plt.xlim(80,580)
# Plot the calculated velocities along track
ax5 = plt.subplot(gs[0,1])
plt.plot(vel_xs/1000.-29000,velocs,'.',c='k',label='ATL06')
plt.ylabel('Velocity (m/yr)')
plt.xlabel('Along-Track Distance (km)')
plt.xlim(80,580)
plt.ylim(-500,1500)
plt.tight_layout()
from IS2_velocity.plotting import plot_measures_along_track_comparison
datapath = '/Users/grace/Dropbox/Cornell/projects/003/git_repo_old_Hackweek/surface_velocity/contributors/grace_barcheck/download/'
out_path = '/Users/grace/Dropbox/Cornell/projects/003/out_tmp/'
map_data_root = '/Users/grace/Dropbox/Cornell/projects/003/FIS_data/'
correlation_threshold = 0.65
plot_out_location = out_path
velocity_number = 0
spatial_extent = np.array([-65, -86, -55, -81])
plot_measures_along_track_comparison(rgt, beams, out_path, correlation_threshold, spatial_extent, plot_out_location, map_data_root, velocity_number)
###Output
_____no_output_____
###Markdown
Introduction to ICESat-2 Surface Velocity CalculationsThis notebook is meant to introduce the processing flow for a simple along-track velocity calculation using repeat cycles of ICESat-2 elevation profiles. The notebook covers:1. Setting up the IS2_velocity library2. Loading elevation data from an hdf5 file using the built-in reader function.3. Smoothing and differentiating the elevation profile.4. Correlating the differentiated profile to calculate surface velocities.
###Code
# Import the basic libraries
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Library SetupIn order to run the IS2_velocity scripts as a python library, you must first:1. Download or clone the repository at https://github.com/ICESAT-2HackWeek/IS2_velocity.git2. Install the dependencies including numpy, scipy, h5py, astropy, icepyx, and the ICESat-2 pointCollection library.3. Go the the home directory in our repository and run ‘python setup.py install’.If you successfully run the setup.py script, you should be able to run the cell below.
###Code
# As an example, import a function from the ICESat-2 surface velocity library
from IS2_velocity.correlation_processing import velocity
help(velocity)
###Output
_____no_output_____
###Markdown
Import ATL06 DictionariesTwo cycles for a repeat over Foundation Ice Stream are saved within the data directory. Here we load and plot them on top of one another.
###Code
# Import the reader script
from IS2_velocity.readers import atl06_to_dict
# read in dictionaries from two different cycles
data_dir = '../data/'
fn_1 = 'processed_ATL06_20190822153035_08480411_003_01.h5'
D1=atl06_to_dict(data_dir+fn_1,'/gt2l', index=None, epsg=3031)
fn_2 = 'processed_ATL06_20190523195046_08480311_003_01.h5'
D2=atl06_to_dict(data_dir+fn_2,'/gt2l', index=None, epsg=3031)
# Plot the landice elevation along the pass.
plt.figure(figsize=(8,4))
plt.plot(D1['x_atc']/1000.,D1['h_li'],c='indianred')
plt.plot(D2['x_atc']/1000.,D2['h_li'],c='steelblue')
plt.ylabel('Elevation (m)')
plt.xlabel('Along-Track Distance (km)')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Preprocessing - Smooth and Differentiate
###Code
# Import some signal processing functions
from IS2_velocity.correlation_processing import smooth_and_diff, fill_seg_ids
# Get segment ids from the loaded dictionaries
x1,h1 = fill_seg_ids(D1['x_atc'],D1['h_li'],D1['segment_id'])
x2,h2 = fill_seg_ids(D2['x_atc'],D2['h_li'],D2['segment_id'])
# Smooth and differentiate the elevation product (this is a preprocessing step)
h1_smooth,dh1 = smooth_and_diff(x1,h1,win=100)
h2_smooth,dh2 = smooth_and_diff(x2,h2,win=100)
# ------------------------------------------
plt.figure(figsize=(8,6))
# Plot smoothed surface elevation
plt.subplot(211)
plt.tick_params(labelbottom=False,bottom=False)
plt.plot(x1/1000.,h1,c='grey')
plt.plot(x1/1000.,h1_smooth,c='k')
plt.ylabel('Elevation (m)')
# Plot the surface Slope
plt.subplot(212)
plt.plot(x1/1000.,dh1,c='k')
plt.xlabel('Along-Track Distance (km)')
plt.ylabel('Surface Slope (m/m)')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Velocity CalculationMore work is yet to be done to select the ideal search_width and segment_length. Right now, I have:- search_width=1000- segment_length=5000- dx=20- corr_threshold=.65
###Code
# Import functions for the velocity calculation and a time differencing step
from IS2_velocity.correlation_processing import velocity, time_diff
# Calculate time offset
dt = time_diff(D1,D2)
# Where to calculate the velocities
vel_xs = np.linspace(np.min(x1)+1000,np.max(x1)-1000,1000)
# Do the velocity calculation
velocities,correlations = velocity(x1,dh1,dh2,dt,vel_xs,search_width=1000,segment_length=5000)
# ------------------------------------------
from matplotlib.gridspec import GridSpec
plt.figure(figsize=(8,4))
gs = GridSpec(2,2)
# Plot the elevation profiles again
plt.subplot(gs[0,0])
plt.tick_params(bottom=False,labelbottom=False)
plt.plot(x1/1000.-29000,h1,'.',c='indianred')
plt.plot(x2/1000.-29000,h2,'.',c='steelblue',ms=3)
plt.ylabel('Elevation (m)')
plt.title('ATL06',fontweight='bold')
plt.xlim(80,580)
# Plot the slopes again
plt.subplot(gs[1,0])
plt.tick_params(bottom=False,labelbottom=False)
plt.plot(x1/1000.-29000,dh1,'.',c='indianred')
plt.plot(x2/1000.-29000,dh2,'.',c='steelblue',ms=3)
plt.ylim(-.05,.05)
plt.ylabel('Surface Slope (m/m)')
plt.xlim(80,580)
# Plot the calculated velocities along track
ax5 = plt.subplot(gs[:,1])
plt.plot(vel_xs/1000.-29000,velocities,'.',c='k',label='ATL06')
plt.ylabel('Velocity (m/yr)')
plt.xlabel('Along-Track Distance (km)')
plt.xlim(80,580)
plt.ylim(-500,1500)
plt.tight_layout()
###Output
_____no_output_____ |
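###Markdown
The correlation values returned alongside the velocities can be used to screen out unreliable estimates. The cell below is a minimal sketch (not part of the original workflow) that masks velocity estimates whose correlation falls below the corr_threshold of 0.65 mentioned above, and re-plots only the remaining points.
###Code
# Mask out velocity estimates with weak correlation before plotting
corr_threshold = 0.65
velocities_arr = np.asarray(velocities)
correlations_arr = np.asarray(correlations)
good = correlations_arr > corr_threshold
plt.figure(figsize=(8,3))
plt.plot(vel_xs[good]/1000.-29000, velocities_arr[good], '.', c='k')
plt.ylabel('Velocity (m/yr)')
plt.xlabel('Along-Track Distance (km)')
plt.xlim(80,580)
plt.ylim(-500,1500)
plt.tight_layout()
plt.show()
###Output
_____no_output_____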
1_build/02-Merge_nodes_via_ID_xrefs-(MeSH-DrugCentral-DO_Slim).ipynb | ###Markdown
Merge NodesOne issue with SemmedDB (or the UMLS Metathesaurus in general) is that the CUIs are too granular in detail. Take for example Imatinib Mesylate. The following concepts are all found within SemmedDB:| UMLS CUI | Concept Name ||----------|-------------------|| C0939537 | Imatinib mesylate || C0385728 | CGP 57148 || C1097576 | ST 1571 || C0935987 | Gleevec || C0906802 | STI571 || C0935989 | imatinib |However, all of these concepts describe the same chemical structure. Luckily, all of these UMLS CUIs can be cross-referenced to just 1 MeSH Descriptor ID: `D000068877`. This will allow us to merge these concepts within the network.Diseases have similar issues; however, they are a little less straightforward. A similar, yet more complex approach will be used for their combination.
###Code
from tqdm import tqdm
from collections import defaultdict
from collections import Counter
from queue import Queue
from itertools import chain
import pandas as pd
import pickle
import sys
sys.path.append('../../hetnet-ml/src')
import graph_tools as gt
sys.path.append('../tools/')
import load_umls
###Output
_____no_output_____
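###Markdown
To make the idea concrete before loading any data, the toy cell below (an illustration only; the real UMLS-to-MeSH map is loaded from a pickle later in this notebook) shows how the six Imatinib-related CUIs from the table above collapse to a single MeSH Descriptor ID.
###Code
# Toy illustration: many UMLS CUIs -> one MeSH Descriptor (values taken from the table above)
imatinib_cuis = ['C0939537', 'C0385728', 'C1097576', 'C0935987', 'C0906802', 'C0935989']
toy_umls_to_mesh = {cui: 'D000068877' for cui in imatinib_cuis}
# Merging on the MeSH ID leaves a single concept where there were six
print(len(imatinib_cuis), '->', len(set(toy_umls_to_mesh.values())))
###Output
_____no_output_____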
###Markdown
1. Import the DrugCentral info for Gold Standard and Add Compound Names
###Code
rels = pd.read_csv('../data/drugcentral_rel_06212018.csv')
rels.head(2)
dc_ids = pd.read_csv('../data/drugcentral_ids_06212018.csv')
dc_ids.head(2)
syn = pd.read_csv('../data/drugcentral_syn_06212018.csv')
syn.rename(columns={'id': 'struct_id'}, inplace=True)
syn.head(2)
pref = syn.query('preferred_name == 1').reset_index(drop=True)
pref = pref.dropna(subset=['struct_id'])
pref['struct_id'] = pref['struct_id'].astype('int64')
pref.head(2)
struct_id_to_name = pref.set_index('struct_id')['name'].to_dict()
rels['c_name'] = rels['struct_id'].map(lambda i: struct_id_to_name.get(i, float('nan')))
rels.shape[0] == rels['c_name'].count()
###Output
_____no_output_____
###Markdown
2. Map the Compounds in Semmed DB to MeSHAlthough we will be mapping all UMLS CUIs (that can be mapped) to MeSH, after the initial map, we will start by taking a closer look at the compounds. Because there are multiple sources of X-refs for both Compounds and Diseases, these special Metanodes will be a bit more complicated than a simple direct map.Starting with a direct map from UMLS to MeSH will combine a lot of the Compound nodes, reducing the number of total unique compounds.
###Code
nodes = gt.remove_colons(pd.read_csv('../data/nodes_VER31_R.csv'))
umls_to_mesh = pickle.load(open('../data/UMLS-CUI_to_MeSH-Descripctor.pkl', 'rb'))
umls_to_mesh_1t1 = {k: v[0] for k, v in umls_to_mesh.items() if len(v) == 1}
nodes['mesh_id'] = nodes['id'].map(lambda c: umls_to_mesh_1t1.get(c, float('nan')))
drugs = nodes.query('label == "Chemicals & Drugs"').copy()
print('{:.3%} of Drug IDs mapped via MeSH:'.format(drugs['mesh_id'].count() / drugs.shape[0]))
print('{:,} of {:,} Mapped to {:,} Unique MSH ids'.format(drugs['mesh_id'].count(), drugs.shape[0], drugs['mesh_id'].nunique()))
num_drugs = drugs['id'].nunique()
msh_compress_drugs = drugs['mesh_id'].fillna(drugs['id']).nunique()
print('{:.3%} Reduction in Drugs by using MSH synonmyms {:,} --> {:,}'.format((num_drugs - msh_compress_drugs)/num_drugs, num_drugs, msh_compress_drugs))
###Output
86.783% of Drug IDs mapped via MeSH:
82,236 of 94,761 Mapped to 66,939 Unique MSH ids
16.143% Reduction in Drugs by using MSH synonmyms 94,761 --> 79,464
###Markdown
3. Use UMLS MeSH mappings and Mappings from DrugCentral to ensure Maximum overlapDrugCentral also has its own internal identifiers for compounds, as well as mappings from its internal ID to both UMLS and MeSH. If we treat these mappings all as edges in a network and use a subnet-finding algorithm, each subnet will essentially be a unique chemical structure, with the nodes of that subnet representing all of the different identifiers that map to that structure.
###Code
dc_maps = dc_ids.query('id_type in {}'.format(["MESH_DESCRIPTOR_UI", "MESH_SUPPLEMENTAL_RECORD_UI" , "UMLSCUI"]))
drug_adj_list = defaultdict(set)
for row in tqdm(dc_maps.itertuples(), total=len(dc_maps)):
drug_adj_list[row.struct_id].add(row.identifier)
drug_adj_list[row.identifier].add(row.struct_id)
umls_keys = list(chain(*[[k]*len(v) for k, v in umls_to_mesh.items()]))
mesh_vals = list(chain(*[v for v in umls_to_mesh.values()]))
umls_to_mesh_df = pd.DataFrame({'umls': umls_keys, 'mesh': mesh_vals})
drug_ids = drugs['id'].unique()
umls_to_mesh_drugs = umls_to_mesh_df.query('umls in @drug_ids')
umls_set = set(drugs['id']) | set(dc_maps.query('id_type == "UMLSCUI"'))
mesh_set = set(mesh_vals) | set(dc_maps.query('id_type in {}'.format(["MESH_DESCRIPTOR_UI", "MESH_SUPPLEMENTAL_RECORD_UI"]))['identifier'])
len(umls_set & mesh_set) == 0
for row in umls_to_mesh_drugs.itertuples():
drug_adj_list[row.umls].add(row.mesh)
drug_adj_list[row.mesh].add(row.umls)
# Ensure that all Struct IDs from DrugCentral make it into the subnets (even if no xrefs)
for struct_id in rels.query('relationship_name == "indication"')['struct_id'].unique():
drug_adj_list[struct_id].add(struct_id)
def get_subnets(adj_list):
all_identifiers = set(adj_list.keys())
subnets = defaultdict(set)
visited = set()
for cui in tqdm(all_identifiers):
if cui not in visited:
visited.add(cui)
q = Queue()
q.put(cui)
while not q.empty():
cur = q.get()
visited.add(cur)
for neighbour in adj_list[cur]:
subnets[cui].add(neighbour)
if neighbour not in visited:
q.put(neighbour)
visited.add(neighbour)
return subnets
subnets = get_subnets(drug_adj_list)
len(subnets)
###Output
_____no_output_____
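###Markdown
To see what `get_subnets` is doing on a small scale, the cell below runs it on a tiny made-up adjacency list (the identifiers are invented for illustration): any identifiers connected by a chain of cross-references end up in the same subnet, i.e. the same chemical structure.
###Code
# Toy example: two CUIs cross-referenced to the same MeSH ID form one subnet,
# while an unrelated pair forms a second subnet
toy_adj = defaultdict(set)
for a, b in [('CUI_A', 'MESH_1'), ('CUI_B', 'MESH_1'), ('CUI_C', 'MESH_2')]:
    toy_adj[a].add(b)
    toy_adj[b].add(a)
toy_subnets = get_subnets(toy_adj)
print(toy_subnets)  # expect two groups: {CUI_A, CUI_B, MESH_1} and {CUI_C, MESH_2}
###Output
_____no_output_____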
###Markdown
Find a label for each group. Will choose based on number of umls items that can be mapped to a single MeSH term (more == higher priority).
###Code
mesh_counts = umls_keys + mesh_vals + list(dc_maps['identifier']) + list(dc_maps['struct_id'].unique())
mesh_counts = Counter(mesh_counts)
rekeyed_subnets = dict()
for v in subnets.values():
sort_sub = sorted(list(v), key=lambda k: (mesh_counts[k], k in mesh_set, k in umls_set), reverse=True)
new_key = sort_sub[0]
rekeyed_subnets[new_key] = v
# Final map is just inverse of the subnets dict
final_drug_map = dict()
for k, v in rekeyed_subnets.items():
for val in v:
final_drug_map[val] = k
len(final_drug_map)
pickle.dump(final_drug_map, open('../data/drug_merge_map.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
4. Map all the compounds and checkDo a final mapping of the compound IDs for merging, and spot check a few results
###Code
# Some items won't necessarily be mappable, so use original ID
drugs['new_id'] = drugs['id'].map(lambda i: final_drug_map.get(i, i))
# Map the Gold Standard indications as well
rels['compound_new_id'] = rels['struct_id'].map(lambda i: final_drug_map.get(i, i))
drugs['id_source'] = drugs['new_id'].map(lambda x: 'MeSH' if x in mesh_set else 'UMLS')
drugs.head(2)
print('{:.3%} Reduction in Drugs {:,} --> {:,}'.format(
(drugs.shape[0] - drugs['new_id'].nunique())/drugs.shape[0], drugs.shape[0], drugs['new_id'].nunique()))
inds = rels.query('relationship_name == "indication"')
drug_ids_semmed = set(drugs['new_id'])
drugs_in_inds = set(inds['compound_new_id'].dropna())
num_ind_in_semmed = len(drugs_in_inds & drug_ids_semmed)
print('{:.3%} of Drugs in DC Indications mapped: {:,} out of {:,}'.format(
(num_ind_in_semmed / len(drugs_in_inds)), num_ind_in_semmed, len(drugs_in_inds)))
ind_semmed_comp = inds.query('compound_new_id in @drug_ids_semmed').shape[0]
print('{:.3%} of Indications have mappable Drug: {:,} out of {:,}'.format(
(ind_semmed_comp / len(inds)), ind_semmed_comp, len(inds)))
###Output
85.423% of Drugs in DC Indications mapped: 2,010 out of 2,353
95.209% of Indications have mappable Drug: 10,414 out of 10,938
###Markdown
Look at the MeSH IDs that mapped to the greatest number of CUIs and see if the mappings make sense...
###Code
mesh_counts.most_common(3)
to_q = mesh_counts.most_common(3)[0][0]
drugs.query('new_id == @to_q')
to_q = mesh_counts.most_common(3)[1][0]
drugs.query('new_id == @to_q')
to_q = mesh_counts.most_common(3)[2][0]
drugs.query('new_id == @to_q')
###Output
_____no_output_____
###Markdown
These all look pretty good. All of the names for `D000111` are listed under the aliases on the [Acetylcysteine MeSH page](https://meshb.nlm.nih.gov/record/ui?ui=D000111)Now let's look at a few compounds that may have a new MeSH ID distinct from their original, thanks to incorporating the DrugCentral X-refs
###Code
new_id_not_mesh = drugs.dropna(subset=['mesh_id']).query('new_id != mesh_id')
print(len(new_id_not_mesh))
new_id_not_mesh.head(10)
diff_new_ids = new_id_not_mesh.query('id_source == "MeSH"')['new_id'].values
diff_new_ids[:5]
inds.query('compound_new_id == "D002108"')
###Output
_____no_output_____
###Markdown
5. Diseases:With diseases we will do a little more in terms of the mapping. Because diseases appear in both UMLS and MeSH, we will incorporate some of the mappings from Disease Ontology Slim to try to get more general disease concepts. The workflow will be as follows:1. Map nodes to MeSH2. Map indication CUIs and/or SNOMED terms from DrugCentral to MeSH3. Incorporate DO Slim mappings4. Find overlap between these sources
###Code
diseases = nodes.query('label == "Disorders"').copy()
len(diseases)
dis_numbers = diseases.groupby('mesh_id').apply(len).sort_values(ascending=False)
param = dis_numbers[:10].index.tolist()
diseases.query('mesh_id in @param').sort_values('mesh_id')
conso = load_umls.open_mrconso()
conso.head(2)
snomed_xrefs = conso.query("SAB == 'SNOMEDCT_US'").dropna(subset=['CUI', 'SCUI'])
snomed_xrefs.head(2)
dis_adj_list = defaultdict(set)
disease_ids = set(diseases['id'].unique())
umls_to_mesh_dis = umls_to_mesh_df.query('umls in @disease_ids')
for row in umls_to_mesh_dis.itertuples():
dis_adj_list[row.umls].add(row.mesh)
dis_adj_list[row.mesh].add(row.umls)
# Convert the snomed concept ids to string since they're stored as strings in the adj_list
rels['snomed_conceptid'] = rels['snomed_conceptid'].map(lambda i: str(int(i)) if not pd.isnull(i) else i)
sub_rels = rels.dropna(subset=['snomed_conceptid', 'umls_cui'])
for row in sub_rels.itertuples():
dis_adj_list[row.umls_cui].add(row.snomed_conceptid)
dis_adj_list[row.snomed_conceptid].add(row.umls_cui)
# Make sure to get mesh to CUI maps for the new cuis picked up via drugcentral
if row.umls_cui in umls_to_mesh_1t1:
dis_adj_list[umls_to_mesh_1t1[row.umls_cui]].add(row.umls_cui)
dis_adj_list[row.umls_cui].add(umls_to_mesh_1t1[row.umls_cui])
ind_snomed = set(rels['snomed_conceptid'])
dis_umls = set(rels['umls_cui']) | disease_ids
dis_snomed_xrefs = snomed_xrefs.query('CUI in @dis_umls or SCUI in @ind_snomed')
print(len(dis_snomed_xrefs))
for row in tqdm(dis_snomed_xrefs.itertuples(), total=len(dis_snomed_xrefs)):
dis_adj_list[row.CUI].add(row.SCUI)
dis_adj_list[row.SCUI].add(row.CUI)
# Make sure to get mesh to CUI maps for the new cuis picked up via drugcentral
if row.CUI in umls_to_mesh_1t1:
dis_adj_list[umls_to_mesh_1t1[row.CUI]].add(row.CUI)
dis_adj_list[row.CUI].add(umls_to_mesh_1t1[row.CUI])
###Output
4%|▍ | 6385/163590 [00:00<00:02, 63849.15it/s]
###Markdown
DO Slim IntegrationThe following disease-ontology files were generated from a [fork of Daniel Himmelstein's work generating the Disease Ontology Slim](https://github.com/mmayers12/disease-ontology). The only major difference between Daniel's release and this version is that I have added in the Disease Ontology terms from their 'Rare Slim' list to attempt to get some coverage of Rare Monogenic Diseases. These can be another way to consolidate diseases into more general types.First we'll need a DOID to UMLS_CUI map; WikiData can provide a quick and dirty map
###Code
from wikidataintegrator import wdi_core
query_text = """
select ?doid ?umlscui
WHERE
{
?s wdt:P699 ?doid .
?s wdt:P2892 ?umlscui .
}
"""
result = wdi_core.WDItemEngine.execute_sparql_query(query_text, as_dataframe=True)
result.to_csv('../data/doid-to-umls.csv', index=False)
doid_to_umls = result.set_index('doid')['umlscui'].to_dict()
slim_xref = pd.read_table('../../disease-ontology/data/xrefs-prop-slim.tsv')
do_slim = pd.read_table('../../disease-ontology/data/slim-terms-prop.tsv')
slim_xref.head(2)
slim_xref['resource'].value_counts()
resources = ['SNOMEDCT_US_2016_03_01', 'UMLS', 'MESH', 'SNOMEDCT', 'SNOMEDCT_US_2015_03_01']
useful_xref = slim_xref.query('resource in @resources')
for row in useful_xref.itertuples():
dis_adj_list[row.doid_code].add(row.resource_id)
dis_adj_list[row.resource_id].add(row.doid_code)
if row.resource == "UMLS" and row.resource_id in umls_to_mesh_1t1:
dis_adj_list[umls_to_mesh_1t1[row.resource_id]].add(row.resource_id)
dis_adj_list[row.resource_id].add(umls_to_mesh_1t1[row.resource_id])
do_slim['cui'] = do_slim['subsumed_id'].map(lambda d: doid_to_umls.get(d, float('nan')))
do_slim_d = do_slim.dropna(subset=['cui'])
for row in do_slim_d.itertuples():
dis_adj_list[row.subsumed_id].add(row.cui)
dis_adj_list[row.cui].add(row.subsumed_id)
if row.cui in umls_to_mesh_1t1:
dis_adj_list[umls_to_mesh_1t1[row.cui]].add(row.cui)
dis_adj_list[row.cui].add(umls_to_mesh_1t1[row.cui])
do_slim_terms = do_slim.set_index('slim_id')['slim_name'].to_dict()
slim_ids = set(do_slim_terms.keys())
###Output
_____no_output_____
###Markdown
6. Make the final map for Diseases and Map them
###Code
dis_subnets = get_subnets(dis_adj_list)
len(dis_subnets)
umls_set = set(diseases['id'].dropna()) | set(rels['umls_cui'].dropna())
umls_to_val = {u: 9999999-int(u[1:]) for u in umls_set}
mesh_counts = umls_keys + mesh_vals + list(rels['umls_cui'].map(lambda c: umls_to_mesh_1t1.get(c, c)))
mesh_counts = Counter(mesh_counts)
rekeyed_dis_subnets = dict()
for v in dis_subnets.values():
# If a disease was consolidated under DO-SLIM, take the slim ID and name
if v & slim_ids:
new_key = (v & slim_ids).pop()
rekeyed_dis_subnets[new_key] = v
else:
# First take ones in the mesh, then by the highest number of things it consolidated
# Then take the lowest numbered UMLS ID...
sort_sub = sorted(list(v), key=lambda k: (k in mesh_set, mesh_counts[k], k in umls_set, umls_to_val.get(k, 0)), reverse=True)
new_key = sort_sub[0]
rekeyed_dis_subnets[new_key] = v
'C565169' in mesh_vals
# Final map is just inverse of the subnets dict
final_dis_map = dict()
for k, v in rekeyed_dis_subnets.items():
for val in v:
final_dis_map[val] = k
diseases['new_id'] = diseases['id'].map(lambda i: final_dis_map.get(i, i))
# See how many instances of diseases mapped to 1 mesh ID had their ID changed through
# SNOMED and DO-SLIM consolidation
print('{} original CUIs'.format(diseases.dropna(subset=['mesh_id']).query('mesh_id != new_id')['id'].nunique()))
print('Mapped to {} MeSH IDs'.format(diseases.dropna(subset=['mesh_id']).query('mesh_id != new_id')['mesh_id'].nunique()))
print('Consolidated to {} unique entities'.format(diseases.dropna(subset=['mesh_id']).query('mesh_id != new_id')['new_id'].nunique()))
def dis_source_map(x):
if x in mesh_set:
return 'MeSH'
elif x in umls_set:
return 'UMLS'
elif x.startswith('DOID:'):
return 'DO-Slim'
else:
# Just in case there's a problem...
return 'Uh-Oh'
diseases['id_source'] = diseases['new_id'].map(lambda x: dis_source_map(x))
diseases['id_source'].value_counts()
pickle.dump(final_dis_map, open('../data/disease_merge_map.pkl', 'wb'))
print('{:.3%} Reduction in Diseases {:,} --> {:,}'.format(
(diseases.shape[0] - diseases['new_id'].nunique())/diseases.shape[0], diseases.shape[0], diseases['new_id'].nunique()))
rels['disease_new_id'] = rels['umls_cui'].map(lambda c: final_dis_map.get(c, c))
print(rels['disease_new_id'].count())
bad_idx = rels[rels['disease_new_id'].isnull()].index
rels.loc[bad_idx, 'disease_new_id'] = rels.loc[bad_idx, 'snomed_conceptid'].map(lambda c: final_dis_map.get(c, float('nan')))
inds = rels.query('relationship_name == "indication"')
disease_ids_semmed = set(diseases['new_id'])
diseases_in_inds = set(inds['disease_new_id'].dropna())
num_ind_in_semmed = len(diseases_in_inds & disease_ids_semmed)
print('{:.3%} of diseases in DC Indications mapped: {:,} out of {:,}'.format(
(num_ind_in_semmed / len(diseases_in_inds)), num_ind_in_semmed, len(diseases_in_inds)))
ind_semmed_comp = inds.query('disease_new_id in @disease_ids_semmed').shape[0]
print('{:.3%} of Indications have mappable disease: {:,} out of {:,}'.format(
(ind_semmed_comp / len(inds)), ind_semmed_comp, len(inds)))
inds_dd = inds.drop_duplicates(subset=['compound_new_id', 'disease_new_id'])
new_cids = set(drugs['new_id'].unique())
new_dids = set(diseases['new_id'].unique())
inds_in_semmed = inds_dd.query('compound_new_id in @new_cids and disease_new_id in @new_dids')
print('{:.3%} of indications now have both compound and disease mappable {:,} out of {:,}'.format(
len(inds_in_semmed) / len(inds_dd), len(inds_in_semmed), len(inds_dd)))
###Output
76.181% of indications now have both compound and disease mappable 6,307 out of 8,279
###Markdown
Add in Dates for IndicationsSince the Indications are pretty much fully mapped to the network and ready to go as a Gold Standard for machine learning, we will map approval date information to the compounds now, so it's available for future analyses.
###Code
app = pd.read_csv('../data/drugcentral_approvals_06212018.csv')
app.head()
app = app.rename(columns={'approval': 'approval_date'})
app = (app.dropna(subset=['approval_date']) # Remove NaN values
.sort_values('approval_date') # Put the earliest approval_date first
.groupby('struct_id') # Group by the compound's id
.first() # And select the first instance of that id
.reset_index()) # Return struct_id to a column from the index
rels = pd.merge(rels, app[['struct_id', 'approval_date']], how='left', on='struct_id')
rels.head(2)
idx = rels[~rels['approval_date'].isnull()].index
rels.loc[idx, 'approval_year'] = rels.loc[idx, 'approval_date'].map(lambda s: s.split('-')[0])
rels.head(2)
###Output
_____no_output_____
###Markdown
7. Rebuild the NodesThe node CSV will now be rebuilt with all the new ID mappings and corresponding concept names
###Code
all_umls = set(nodes['id'])
umls_set = set(nodes['id']) | set(dc_maps.query('id_type == "UMLSCUI"')) | set(rels['umls_cui'])
def get_source(cid):
if cid in mesh_set:
return 'MeSH'
elif cid in umls_set:
return 'UMLS'
elif cid.startswith('DOID:'):
return 'DO-Slim'
else:
return 'problem...'
pickle.dump(umls_set, open('../data/umls_id_set.pkl', 'wb'))
pickle.dump(mesh_set, open('../data/mesh_id_set.pkl', 'wb'))
new_nodes = nodes.query('label not in {}'.format(['Chemicals & Drugs', 'Disorders'])).copy()
new_nodes['new_id'] = new_nodes['mesh_id'].fillna(new_nodes['id'])
new_nodes['id_source'] = new_nodes['new_id'].apply(lambda c: get_source(c))
new_nodes['id_source'].value_counts()
drug_dis = pd.concat([drugs, diseases])
curr_map = drug_dis.set_index('id')['new_id'].to_dict()
idx = drug_dis.groupby('new_id')['label'].nunique() > 1
problems = idx[idx].index.values
print(len(problems))
remap = dict()
grpd = drug_dis.query('new_id in @problems').groupby('new_id')
for grp, df in grpd:
for labels in df['label'].unique():
curr_label = df.query('label == @labels')['id'].values
        # Keep the MeSH map for the new ID if it's a Drug
        if labels == 'Chemicals & Drugs':
for c in curr_label:
remap[c] = grp
# Use a random Disease CUI if its a Disease
else:
new_cui = curr_label[0]
for c in curr_label:
remap[c] = new_cui
drug_dis['new_id'] = drug_dis['id'].map(lambda i: remap.get(i, curr_map[i]))
###Output
_____no_output_____
###Markdown
Go back and Fix the IndicationsWe just changed 4 Diseases back to CUIs, so we must ensure those don't affect the earlier mappings to indications
###Code
if rels.query('disease_new_id in @problems').shape[0] > 0:
print('This is a problem')
else:
print('This is a non-issue so no need to fix anything')
new_nodes = pd.concat([new_nodes, drug_dis])
new_nodes = new_nodes.sort_values('label')
idx = new_nodes.groupby('new_id')['label'].nunique() > 1
problems = idx[idx].index.values
print(len(problems))
new_nodes.query('new_id in {}'.format(problems.tolist())).sort_values('new_id').head(10)
###Output
_____no_output_____
###Markdown
Fix other node-type conflictsSince the UMLS to MeSH map has no regard for the semantic type of the node, some concepts may have been condensed across semantic types.All the Drug and Disease overlaps should be solved, so now move on to other node-type conflicts.Conflicts will be solved in this manner:1. If one of the types is a Drug or a Disease, that one gets the MeSH ID2. If no Drug or Disease, the type that has the largest number of nodes will receive the MeSH ID3. Remaining node types will be separated and assume the CUI of the node with the highest degree of connection in the networkTake, for example, `Ivalon`. It has 4 original CUIs that mapped to the same MeSH ID, two of which carried the semantic type `Chemicals & Drugs` and two `Devices`. The mesh_id will be kept for the `Chemicals & Drugs` version of the nodes, which will be merged. The `Devices` versions of the nodes will also be merged, and whichever CUI has the greatest number of edges will be the CUI used for this merged node. `Chemicals & Drugs` and `Disorders` will always take the MeSH ID before other semantic types. Otherwise, the MeSH ID will be assigned to the semantic type that had the most CUIs merged into 1 node. The other semantic types will again have the CUI selected based on edge count.
###Code
edges = gt.remove_colons(pd.read_csv('../data/edges_VER31_R.csv', converters={'pmids':eval}))
cui_counts = edges['start_id'].value_counts().add(edges['end_id'].value_counts(), fill_value=0).to_dict()
# For now, just return conflicting nodes to thier old semmantic type
grpd = new_nodes.query('new_id in @problems').groupby('new_id')
remap = dict()
for msh_id, df in tqdm(grpd, total=len(grpd)):
# Get all the labels and counts for those labels
labels = df['label'].unique().tolist()
counts = df['label'].value_counts().to_dict()
# Sort the by the Number of different nodes mapped to that label
labels = sorted(labels, key=lambda l: counts[l], reverse=True)
# Chemicals and Drugs and Diseases have higher priorities in the context of machine learning
# So any item that could be either of those types will be set to them automatically.
drug_or_dis = False
# Select the Chemicals & Drugs nodes to have the MeSH ID if possible
if 'Chemicals & Drugs' in labels:
labels.remove('Chemicals & Drugs')
curr_label = df.query('label == "Chemicals & Drugs"')['id'].values
drug_or_dis = True
for c in curr_label:
remap[c] = msh_id
# Otherwise, elect the Disorders nodes to have the MeSH ID if possible
elif 'Disorders' in labels:
labels.remove('Disorders')
curr_label = df.query('label == "Disorders"')['id'].values
drug_or_dis = True
for c in curr_label:
remap[c] = msh_id
# Finally assign a merged CUI based on edge counts
for i, label in enumerate(labels):
curr_label = df.query('label == @label')['id'].values
# Give highest counts of nodes the MeSH ID, if not already assigned to a Drug or Disease
if i == 0 and not drug_or_dis:
new_cui = msh_id
else:
# For types that won't get a MeSH ID,
# get the CUI that has largest number of instances in the edges
new_cui = sorted(curr_label, key=lambda v: cui_counts.get(v, 0), reverse=True)[0]
for c in curr_label:
remap[c] = new_cui
# Perform the new Mapping
curr_map = new_nodes.set_index('id')['new_id'].to_dict()
new_nodes['new_id'] = nodes['id'].map(lambda i: remap.get(i, curr_map[i]))
# Ensure there are now no problems
idx = new_nodes.groupby('new_id')['label'].nunique() > 1
problems = idx[idx].index.values
print(len(problems))
num_old_ids = new_nodes['id'].nunique()
num_new_ids = new_nodes['new_id'].nunique()
print('{:.3%} reduction in the number of NODES\n{:,} --> {:,}'.format((num_old_ids-num_new_ids)/num_old_ids, num_old_ids, num_new_ids))
new_nodes['id_source'] = new_nodes['new_id'].apply(lambda c: get_source(c))
new_nodes['id_source'].value_counts()
cui_to_name = nodes.set_index('id')['name'].to_dict()
cui_to_name = {**cui_to_name, **rels.set_index('umls_cui')['concept_name'].to_dict()}
cui_to_name = {**cui_to_name, **rels.set_index('compound_new_id')['c_name'].to_dict()}
msh_to_name = pickle.load(open('../data/MeSH_DescUID_to_Name.pkl', 'rb'))
# The mappings from UMLS are less reliable, so use the ones that came from MeSH itself first
msh_to_name = {**pickle.load(open('../data/MeSH_id_to_name_via_UMLS.pkl', 'rb')), **msh_to_name}
id_to_name = {**struct_id_to_name, **do_slim_terms, **cui_to_name, **msh_to_name}
# All new IDs should have a mapped name
set(new_nodes['new_id']).issubset(set(id_to_name.keys()))
new_nodes['name'] = new_nodes['new_id'].map(lambda i: id_to_name[i])
pickle.dump(id_to_name, open('../data/all_ids_to_names.pkl', 'wb'))
final_node_map = new_nodes.set_index('id')['new_id'].to_dict()
###Output
_____no_output_____
###Markdown
8. Map all the edgesNow that we have a finalized original-to-new ID map, we can directly map all the IDs in the edges file.If any edges are now duplicated, the PMIDs in support of those edges will be merged into a set.
###Code
edges['start_id'] = edges['start_id'].map(lambda c: final_node_map[c])
edges['end_id'] = edges['end_id'].map(lambda c: final_node_map[c])
%%time
num_before = len(edges)
# Some edges now duplicated, de-duplicate and combine pmids
grpd = edges.groupby(['start_id', 'end_id', 'type'])
edges = grpd['pmids'].apply(lambda Series: set.union(*Series.values)).reset_index()
# re-count the pmid numbers
edges['n_pmids'] = edges['pmids'].apply(len)
num_after = len(edges)
print('{:,} Edges before node consolidation'.format(num_before))
print('{:,} Edges after node consolidation'.format(num_after))
print('A {:.3%} reduction in edges'.format((num_before - num_after) / num_before))
###Output
19,555,814 Edges before node consolidation
18,145,463 Edges after node consolidation
A 7.212% reduction in edges
###Markdown
Save the files for the network
###Code
# Get rid of the old ids in the nodes
new_nodes.drop('id', axis=1, inplace=True)
new_nodes = new_nodes.rename(columns={'new_id': 'id'})[['id', 'name', 'label', 'id_source']]
new_nodes = new_nodes.drop_duplicates(subset='id')
# Sort values before writing to disk
new_nodes = new_nodes.sort_values('label')
edges = edges.sort_values('type')
# Add in colons required by neo4j
new_nodes = gt.add_colons(new_nodes)
edges = gt.add_colons(edges)
new_nodes.to_csv('../data/nodes_VER31_R_nodes_consolidated.csv', index=False)
edges.to_csv('../data/edges_VER31_R_nodes_consolidated.csv', index=False)
pickle.dump(final_node_map, open('../data/node_id_merge_map.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
Save the relationship files for a Machine Learning Gold Standard
###Code
rels.head(2)
# Do some rennaming of the columns before saving
rels = rels.rename(columns={'c_name': 'compound_name',
'concept_name': 'disease_name',
'compound_new_id': 'compound_semmed_id',
'disease_new_id': 'disease_semmed_id'})
# Only want indications for the gold standard
# Keep Duplicates in RELs just in case they're insightful, but indications should have no dups.
inds = rels.query('relationship_name == "indication"').drop_duplicates(subset=['compound_semmed_id', 'disease_semmed_id'])
rels.to_csv('../data/gold_standard_relationships_nodemerge.csv', index=False)
inds.to_csv('../data/indications_nodemerge.csv', index=False)
###Output
_____no_output_____ |
Infer_Shear_Viscosity_from_Flow.ipynb | ###Markdown
Goals of this NotebookIn this notebook we will explore topics related to performing **Bayesian inference** (in particular **Bayesian parameter estimation**) using a **Gaussian Process** (GP) surrogate.The specific example is related to the phenomenology of heavy-ion collisions.It was realized early in the 21st century that the harmonic flow coefficients $v_n$, which can be measured by $n$-particle correlations, show sensitivity to the transport properties of a Quark Gluon Plasma (QGP) fluid.More specifically, the 'elliptic flow' $v_2$ shows sensitivity to the specific shear viscosity $\eta/s$.Therefore, a common practice in phenomenology is to compare hydrodynamic models with a parametrized specific shear viscosity to observables measured in experiments such as the elliptic flow, to **infer** the specific shear viscosity of the physical QGP. Bayesian InferenceA statistical methodology designed to handle arbitrarily complicated problems of inference is **Bayesian Inference**. Suppose we know some information $D$, for example a set of experimental measurements. Now, suppose we want to make an inference about a proposition $\theta$, for example some physical property of a system which cannot be directly measured. Bayes theorem can be written $$p(\theta|D) = \frac{ p(D|\theta) p(\theta) }{ p(D) },$$where $p(\theta|D)$ is our "posterior" for the proposition $\theta$ given that $D$ is realized, $p(D|\theta)$ is the "likelihood" of observing $D$ given that the proposition $\theta$ is realized, $p(\theta)$ is our "prior" belief about $\theta$ before observing $D$, and $p(D)$ is the "evidence". If we are only interested in our proposition $\theta$, then we can use that the evidence is independent of $\theta$ and solve instead the proportionality$$p(\theta|D) \propto p(D|\theta) p(\theta) .$$ This fact will be very useful for Bayesian parameter estimation, as the usual numerical methods to estimate the posterior will exploit it. Inferring the shear viscosity given the Elliptic FlowWe can use Bayes theorem to infer the specific shear viscosity of QGP given the observed data for the elliptic flow.In this case the specific shear viscosity will be represented by $\theta$, and the observed experimental data for the elliptic flow in a particular centrality bin will be represented by $D$. Defining our physics modelWe see that our likelihood function encodes the conditional probability of observing some value of the elliptic flow given a particular value of the specific shear viscosity. This requires us to choose some model which we believe is a good approximation of physics. In this case, we will assume that the dynamics of the collision can be modeled accurately by viscous hydrodynamics.For the purposes of this notebook, we will approximate the hydrodynamic output of the elliptic flow $v_2$ given the specific shear viscosity $\eta/s$ using a linear model.Ordinarily we would use a hydrodynamic simulation (perhaps MUSIC https://github.com/MUSIC-fluid/MUSIC) to model the physics. We will use a linear model in this notebook because it is not computationally demanding, allowing us to focus on concepts in Bayesian inference. However, whenever we discuss our model, we should have in mind a real physics model. Statistical Model ErrorLet's add to our linear physics model statistical (uncorrelated) error on top of every prediction for the elliptic flow. This will be useful for understanding how any statistical model errors influence our inference problem.
For example, in a real hydrodynamics simulation with a finite number of final state particles, there will be a finite statistical error on our calculated elliptic flow. Expressing the modelLet $y$ denote the output $v_2$, and $\theta$ the value of the specific shear viscosity $\eta/s$. We can write our physics model as$$y = m\,\theta + b + \epsilon,$$ which has a slope $m$, intercept $b$, and statistical error $\epsilon$. Let's import some libraries
###Code
import numpy as np #useful for math operations
import matplotlib.pyplot as plt #plotting
import seaborn as sns #pretty plots
sns.set()
from sklearn.gaussian_process import GaussianProcessRegressor as GPR #for using Gaussian Processes
from sklearn.gaussian_process import kernels #same
from sklearn.preprocessing import StandardScaler #useful for scaling data
#these are necessary to use the heteroscedastic noise kernel
#see https://github.com/jmetzen/gp_extras for installation and
#https://jmetzen.github.io/2015-12-17/gp_extra.html for discussion
from gp_extras.kernels import HeteroscedasticKernel
from sklearn.cluster import KMeans
import emcee #for performing Markov Chain Monte Carlo
import corner #for plotting the posterior
###Output
_____no_output_____
###Markdown
This function will define our physics (hydrodynamic) model
###Code
#Our linear model for hydrodynamic output in some centrality bin, for example 20-30%
#noise level controls statistical scatter in our physics model $\epsilon$
noise = 0.1 #amount of statistical scatter in our training calculations
np.random.seed(1)
def lin_hydro_model(eta_over_s, intercept = 0.12, slope = -0.25, noise = noise):
"""This function will play the role of a
realistic event-by-event hydrodynamic model. Here it is a linear model
with an additional random noise error."""
y = intercept + slope * (eta_over_s) # the mean model prediction
dy = noise * y * np.random.normal() #the sampled model statistical error
y += dy #add the model stat. error to the model mean
y = np.max([0., y]) #suppose our measurement definition is positive definite
return y, dy
lin_hydro_model = np.vectorize(lin_hydro_model)
###Output
_____no_output_____
###Markdown
Using a fast Model Emulator for slow physics models A real viscous hydrodynamic physics model could take hours to run a single event, and we may need thousands of events to construct a centrality average. Therefore, for computationally demanding models we can employ a fast surrogate which can estimate the interpolation uncertainty. We use Gaussian processes for this purpose in this notebook. Gaussian processes are especially useful because they provide non-parametric interpolations (as opposed to a polynomial fit, for example). Like any interpolation, we need a sampling of points in our parameter space where we know the **physics model output**. So, we first run our physics simulation on a sampling of points that **fill our parameter space** and call this sample our **design points**.
###Code
n_design_pts = 30 # this sets the number of design points where we will run our hydro model
eta_over_s_min = 0. # this defines a minimum value for our parameter (eta/s)
eta_over_s_max = 4. / (4. * np.pi) # this defines a maximum value for our parameter (eta/s)
#this chooses our sample to be a regular grid, which is an efficient sampling in one dimension
#it is reshaped into a 2D array so that we can readily use it with scikit-learn
model_X = np.linspace(eta_over_s_min, eta_over_s_max, n_design_pts).reshape(-1,1)
#these are the v_2 outputs of our hydro model, assuming that the model has finite statistical error
model_y, model_dy = lin_hydro_model(model_X)
#lets plot our physics models predictions
plt.errorbar(model_X.flatten(), model_y.flatten(), model_dy.flatten(), fmt='o', c='black')
plt.xlabel(r'$\eta/s$')
plt.ylabel(r'$v_2 [ 20-30\%]$')
plt.title('Hydro Model Design Predictions')
plt.tight_layout(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exercises:1. Does it look like a linear model produced these data? Should it?2. Try playing with the amount of noise (error) in these data. Training our Gaussian Process (GP)We will use a Gaussian process (https://en.wikipedia.org/wiki/Gaussian_process) to interpolate between the design points.For an intuitive feeling for how they work, play with this widget http://www.tmpl.fi/gp/.For more details see http://www.gaussianprocess.org/gpml/chapters/RW.pdf.A Gaussian Process is defined with some choice of a **kernel function**. Please see this page for a brief explanation of a few popular kernels : https://www.cs.toronto.edu/~duvenaud/cookbook/.This is a very good visual exploration of Gaussian Processes as well as different kernel functions: https://distill.pub/2019/visual-exploration-gaussian-processes/.We tell scikit-learn the GP kernel function to use, and some guidance for the range of the hyperparameters. Then when we call the `fit()` operation, scikit-learn automatically finds the values of hyperparameters that maximize a likelihood function:$$\log p(y^*|y_{t}, \theta) \propto -\frac{1}{2}y_{t}^{T} \Sigma^{-1}_{y_t} y_{t} - \frac{1}{2} \log |\Sigma_{y_t}|,$$where $\Sigma_{y_t}$ is the covariance matrix resulting from applying the covariance function to the **training data**. Note: The first term rewards a better fit to the training data, while the second term this likelihood function is a complexity penalty to avoid overfitting. Exercises:1. Explain why the first term in this likelihood function rewards a GP with hyperparameters that fit the data well.2. Explain why the second term penalizes a GP which is 'overfit'. What does 'overfit' mean? Now we will define and train a GPWe will use a combination of a **Squared Exponential Kernel** and a **White Noise Kernel**.
###Code
# a switch to assume homoscedastic noise in the model outputs (uniform noise as function of inputs)
# or heteroscedastic noise (varying noise as function of inputs)
use_homosced_noise = False
#scikit-learn only accepts 2d arrays as inputs
model_X = model_X.reshape(-1,1)
#this is the 'size' of possible variation of our parameters, in this case eta/s
ptp = max(model_X) - min(model_X)
#This is our Squared Exponential Kernel
rbf_kern = 1. * kernels.RBF(
length_scale=ptp,
length_scale_bounds=np.outer(ptp, (1e-2, 1e2)),
)
#This is a homoscedastic white noise kernel,
#necessary because our physics model has finite statistical accuracy
hom_noise_kern = kernels.WhiteKernel(
noise_level=noise,
noise_level_bounds=(noise*1e-2, noise*1e1)
)
#heteroscedastic noise kernel
n_clusters = 5
prototypes = KMeans(n_clusters=n_clusters).fit(model_X).cluster_centers_
het_noise_kern = HeteroscedasticKernel.construct(prototypes, 1e-3, (noise*1e-3, noise*1e3),
gamma=1.0, gamma_bounds="fixed")
if use_homosced_noise:
my_kernel = (rbf_kern + hom_noise_kern)
else:
my_kernel = (rbf_kern + het_noise_kern)
###Output
_____no_output_____
###Markdown
Exercises:1. Why do we need a White Noise Kernel?2. What does the hyperparameter which controls the 'length scale' in the Squared Exponential kernel control? How does it relate to under/over-fitting? As with many machine learning toolkits, out-of-the-box performance is often best when we first scale our outputs. The 'Standard Scaler' (https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) is convenient for this purpose.
###Code
#first scale our observables
model_y_copy = model_y.copy()
scaler = StandardScaler(copy=True).fit(model_y_copy)
scaled_model_y = scaler.transform(model_y, copy=True) # the scaled model outputs
###Output
_____no_output_____
###Markdown
Training our GP on the hydro model calculations
###Code
#maximizing the GP likelihood proceeds in an iterative process,
#beginning with a random seed.
#We want to be sure we find a global max., so we restart it several times
nrestarts=10
#define our Gaussian process, and fit it to the hydro model calculations
my_gp = GPR(kernel=my_kernel,
alpha=0.01, # the nugget, to stabilize matrix inversions
n_restarts_optimizer=nrestarts,
).fit(model_X, scaled_model_y)
###Output
_____no_output_____
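###Markdown
Before wrapping the GP in an emulator function, it can be instructive to inspect what the fit actually found. The quick check below (an optional sketch added here) prints the optimized kernel hyperparameters and the value of the log marginal likelihood at the optimum, using the standard scikit-learn attributes `kernel_` and `log_marginal_likelihood_value_`.
###Code
# Inspect the optimized kernel (length scale, signal variance, noise level)
print("Optimized kernel:", my_gp.kernel_)
# The objective that was maximized during fitting, evaluated at the optimum
print("Log marginal likelihood at optimum: {:.2f}".format(my_gp.log_marginal_likelihood_value_))
###Output
_____no_output_____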
###Markdown
Defining an 'emulator'It's useful to define a function which handles both the scaling of our observables as well as the interpolation with the GP. We call this function the **emulator**.
###Code
def emu_predict(eta_over_s):
"""This function handles the scaling and GP interpolation together,
returning our prediction in the ordinary observable space
rather than the scaled observable space
This map is what we call our 'emulator'. """
X = eta_over_s.reshape(-1, 1)
scaled_y, scaled_dy = my_gp.predict(X, return_std=True)
y = scaler.inverse_transform(scaled_y).reshape(len(eta_over_s))
dy = scaled_dy * scaler.scale_
return y, dy
###Output
_____no_output_____
###Markdown
Let's check how well our emulator fits the hydro physics model
###Code
#make a regular grid to plot our Emulator predictions
n_plot_pts = 100
gp_X_plot = np.linspace(eta_over_s_min, eta_over_s_max, n_plot_pts)
#get the GP Emulator's predictions of both the mean and std. deviation
gp_y, gp_dy = emu_predict(gp_X_plot)
plt.plot(gp_X_plot, gp_y, color='red', label='GP median')
plt.fill_between(gp_X_plot, y1 = gp_y - 2.*gp_dy, y2 = gp_y + 2.*gp_dy,
interpolate=True, alpha=0.7, label=r'GP 2$\sigma$', color='orange')
plt.fill_between(gp_X_plot, y1 = gp_y - gp_dy, y2 = gp_y + gp_dy,
interpolate=True, alpha=0.7, label=r'GP 1$\sigma$', color='blue')
plt.errorbar(model_X.flatten(), model_y.flatten(), model_dy.flatten(), fmt='o', c='black', label='Hydro Model')
plt.xlabel(r'$\eta/s$')
plt.ylabel(r'$v_2 [ 20-30\%]$')
plt.title('GP Emulator and Model Training Points')
plt.legend()
plt.tight_layout(True)
plt.savefig('GP.png',dpi=400)
plt.show()
###Output
_____no_output_____
###Markdown
Exercises:1. Examine how increasing or decreasing the number of design points can effect the mean and uncertainty of the GP emulator prediction. Does it fit your expectation?2. Examine how increasing or decreasing the model statistical error can effect the mean and uncertainty of the GP emulator prediction. Does it fit your expectation?3. Examine how changing the density of design points can effect the mean and uncertainty of the GP emulator prediction(Try a design which has regions which are sparsely populated by design points). Does it fit your expectation?4. What happens if you remove the white noise kernel `white_kern` from the GP? We expect our emulator to fit the points on which it was trained...our definition of the GP likelihood function is designed to do just that!Ultimately, we want to know if our emulator can be trusted for points in parameter space in which it was **not trained**. So, let's perform some validations of our GP emulator, using a **novel testing set** of model calculations.
###Code
#this defines a new set of points in parameter space where we will run our physics model
n_test_pts = 15
model_X_test = np.random.uniform(eta_over_s_min, eta_over_s_max, n_test_pts).reshape(-1,1)
#get the hydro model predictions for these new points
model_y_test, model_dy_test = lin_hydro_model(model_X_test)
#Now use the emulator trained only on the **original design points** to predict
#outputs on the **new testing set**
gp_y_test, gp_dy_test = emu_predict(model_X_test)
#Plot the emulator prediction vs the hydro model prediction
plt.xlabel(r'Hydro model $v_2$ prediction')
plt.ylabel(r'Emulator $v_2$ prediction')
plt.plot(model_y_test, model_y_test, color='r', label='perfect', ls=':', lw=2)
plt.scatter(model_y_test, gp_y_test)
plt.legend()
plt.tight_layout(True)
plt.show()
###Output
_____no_output_____
###Markdown
How does the performance look? There are stricter tests we can use to check if our surrogate prediction is biased.If $\hat{y}(\theta)$ is our emulator prediction for the parameters $\theta$, and $y(\theta)$ is our hydro model prediction, we can define the **residual** $\hat{y}(\theta) - y(\theta)$.Let's plot the residual as a function of $\eta/s$:
###Code
model_y_test = model_y_test.reshape(n_test_pts)
res = gp_y_test - model_y_test # calculate the residuals
plt.scatter(model_X_test, res)
plt.xlabel(r'$\eta/s$')
plt.ylabel(r'$\hat{v}_2 - v_2$')
plt.tight_layout(True)
plt.show()
###Output
_____no_output_____
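###Markdown
A slightly stricter check, sketched below (not part of the original notebook), is to standardize the residuals by the emulator's own uncertainty estimate. If the GP uncertainty is well calibrated, roughly 95% of the standardized residuals should fall within ±2.
###Code
# Standardize the residuals by the GP's predicted standard deviation
std_res = res / gp_dy_test
print("Fraction of test points within 2 sigma: {:.2f}".format(np.mean(np.abs(std_res) < 2.)))
plt.scatter(model_X_test, std_res)
plt.axhline(2., color='r', ls=':')
plt.axhline(-2., color='r', ls=':')
plt.xlabel(r'$\eta/s$')
plt.ylabel(r'$(\hat{v}_2 - v_2) / \sigma_{GP}$')
plt.tight_layout(True)
plt.show()
###Output
_____no_output_____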
###Markdown
Does the prediction look biased?By inspection, it doesn't look like our emulator has significant bias for any value of $\eta/s$. There are even more illuminating tests one can make, for example Quantile-Quantile plots (https://en.wikipedia.org/wiki/Q–Q_plot). You can explore this on your own. Performing Bayesian InferenceNow we have a fast and accurate surrogate that we can trust to compare to data anywhere in the parameter space. So we want to use our emulator to perform **Bayesian inference**.Recall that our **posterior** $p(\theta|D)$ of our parameters $\theta$ given the observed experimental data $D$ is the product of our **prior** belief about the parameters $p(\theta)$ and the **likelihood** $p(D|\theta)$ of observing those experimental data given the true value of the parameters is $\theta$. This is Bayes Theorem:$$p(\theta|D) \propto p(D|\theta)p(\theta).$$So, before using experimental data to update our belief about $\eta/s$, we need to define our prior belief about $\eta/s$. Choosing our Priors We will define two different priors, so we can examine the effect that our prior has on our posterior.One prior will be flat between two limits. The other prior will be informed by a belief, before seeing our $v_2$ data, that the shear viscosity is more likely to be a certain value within these limits.
###Code
#define two different priors, one more informed than the other
theta_min = eta_over_s_min
theta_max = eta_over_s_max
#a flat prior
def log_flat_prior(theta):
"""Flat prior on value between limits"""
if (theta_min < theta) and (theta < theta_max):
return 0. # log(1)
else:
return -np.inf # log(0)
log_flat_prior = np.vectorize(log_flat_prior)
#a peaked prior
prior_peak = 2. / (4. * np.pi) # the value of theta we believe most likely, before seeing data
prior_width = 1. / (10. * np.pi) #our uncertainty about this value, before seeing the data
def log_peaked_prior(theta):
"""Peaked (Gaussian) prior on value between limits"""
if (theta_min < theta) and (theta < theta_max):
return -0.5 * (theta - prior_peak)**2. / prior_width**2.
else:
return -np.inf # log(0)
log_peaked_prior = np.vectorize(log_peaked_prior)
#lets plot our two priors by sampling them, and plotting their histograms
n_samples_prior = int(1e6)
samples_flat_prior = np.random.uniform(theta_min, theta_max, n_samples_prior)
samples_peaked_prior = np.random.normal( prior_peak, prior_width, n_samples_prior)
plt.hist(samples_flat_prior, label='Flat prior', alpha=0.5, density=True, color='blue', bins=50)
plt.hist(samples_peaked_prior, label='Peaked prior', alpha=0.5, density=True, color='red', bins=50)
plt.xlim([theta_min, theta_max])
plt.xlabel(r'$\eta/s$')
plt.ylabel(r'$p(\eta/s)$')
plt.yticks([])
plt.legend()
plt.tight_layout(True)
plt.show()
###Output
_____no_output_____
###Markdown
Defining our LikelihoodTo compare our model predictions with experiment, we need to define our likelihood function.The likelihood is a model for the conditional probability of observing the data given some true value of the parameters. Specifically, it modelsthe conditional probability of observing some experimental value for $v_2$ given some value of $\eta/s$.A commonplace assumption is that the experimental errors follow a multivariate Gaussian distribution. This distribution also maximizes the informational entropy subject to the constraints of being normalizable, having a known mean, and a known variance.For details see (https://github.com/furnstahl/Physics-8805/blob/master/topics/maximum-entropy/MaxEnt.ipynb). The normal likelihood function is probably a good assumption for our problem, because of the nature of the measurement. However, one should consider if this normal likelihood function is appropriate depending on the nature of the specific problem and measurements.
###Code
def log_likelihood(theta, y_exp, dy_exp):
#use our GP emulator to approximate the hydro model
y_pred, dy_pred = emu_predict(theta) # emulation prediction and uncertainty
dy_tot = np.sqrt( dy_pred**2. + dy_exp**2. ) #total uncertainty, emulation and exp.
return -0.5 * np.sum( (y_pred - y_exp)**2 / dy_tot**2 )
###Output
_____no_output_____
###Markdown
Exercises:1. Why does the total uncertainty `dy_tot` in the likelihood function have this expression? What should be the total uncertainty, when we have experimental uncertainty and interpolation uncertainties which are independent?2. How would this expression generalize to a vector of outputs, rather than a scalar output? Defining our Posterior The posterior is the product of the prior and likelihood function. It follows that the logarithm of the posterior is the sum of the logs of the prior and likelihood.
###Code
#posterior using flat prior
def log_posterior_flat_prior(theta, y_exp, dy_exp):
'''Log posterior for data X given parameter array theta'''
return log_flat_prior(theta) + log_likelihood(theta, y_exp, dy_exp)
#posterior using peaked prior
def log_posterior_peaked_prior(theta, y_exp, dy_exp):
'''Log posterior for data X given parameter array theta'''
return log_peaked_prior(theta) + log_likelihood(theta, y_exp, dy_exp)
###Output
_____no_output_____
###Markdown
Inferring the value of $\eta/s$ using experimental dataSuppose that an experiment measures $v_2[20-30\%]$ and reports it as a mean value and total uncertainty...
###Code
exp_rel_uncertainty = 0.1 # experimental relative uncertainty
y_exp = 0.09 #v_2 experimental mean
dy_exp = y_exp * exp_rel_uncertainty #v_2 experimental uncertainty
###Output
_____no_output_____
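###Markdown
Before launching the MCMC, a quick sanity check (an optional sketch added here) is to evaluate the log posterior at one point inside and one point outside the prior range: the first should be finite, while the second should be -inf because the flat prior assigns it zero probability.
###Code
# Inside the prior range (0 < eta/s < 1/pi): finite log posterior
print(log_posterior_flat_prior(np.array([0.1]), y_exp, dy_exp))
# Outside the prior range: -inf, so the MCMC will never accept such a point
print(log_posterior_flat_prior(np.array([1.0]), y_exp, dy_exp))
###Output
_____no_output_____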
###Markdown
Although our current problem is much simplified by the use of a linear model, in general we will have no analytic expression for our posterior. In this case, one needs a set of numerical tools which can approximate the posterior. In addition, for many problems of interest our parameter space can be high-dimensional, so these methods need to work well for high-dimensional problems.We solve both of these problems by employing Markov Chain Monte Carlo sampling (http://www.columbia.edu/~mh2078/MachineLearningORFE/MCMC_Bayes.pdf). Specifically, we will use a python implementation called *emcee* (https://emcee.readthedocs.io/en/stable/), which will work well for our simple purposes. There are much more sophisticated algorithms for estimating the posterior today; see https://chi-feng.github.io/mcmc-demo/app.htmlHamiltonianMC,banana for some animations.
###Code
#these are some general settings for the MCMC
ndim = 1 # number of parameters in the model
nwalkers = 20*ndim # number of MCMC walkers
nburn = 1000 # "burn-in" period to let chains stabilize
nsteps = 2000 # number of MCMC steps to take after the burn-in period finished
# we'll start at random locations within the prior volume
starting_guesses = theta_min + \
(theta_max - theta_min) * np.random.rand(nwalkers,ndim)
####Sampling the posterior with a flat prior####
print("Sampling Posterior with Flat Prior...")
print("MCMC sampling using emcee (affine-invariant ensamble sampler) with {0} walkers".format(nwalkers))
sampler_flat_prior = emcee.EnsembleSampler(nwalkers, ndim, log_posterior_flat_prior, args=[y_exp, dy_exp])
# "burn-in" period; save final positions and then reset
pos, prob, state = sampler_flat_prior.run_mcmc(starting_guesses, nburn)
sampler_flat_prior.reset()
# production sampling period
sampler_flat_prior.run_mcmc(pos, nsteps)
print("Mean acceptance fraction: {0:.3f} (in total {1} steps)"
.format(np.mean(sampler_flat_prior.acceptance_fraction),nwalkers*nsteps))
# discard burn-in points and flatten the walkers; the shape of samples is (nwalkers*nsteps, ndim)
samples_flat_prior = sampler_flat_prior.chain.reshape((-1, ndim))
####Sampling the posterior with a peaked prior####
print("Sampling Posterior with Peaked Prior...")
print("MCMC sampling using emcee (affine-invariant ensamble sampler) with {0} walkers".format(nwalkers))
sampler_peaked_prior = emcee.EnsembleSampler(nwalkers, ndim, log_posterior_peaked_prior, args=[y_exp, dy_exp])
# "burn-in" period; save final positions and then reset
pos, prob, state = sampler_peaked_prior.run_mcmc(starting_guesses, nburn)
sampler_peaked_prior.reset()
# production sampling period
sampler_peaked_prior.run_mcmc(pos, nsteps)
print("Mean acceptance fraction: {0:.3f} (in total {1} steps)"
.format(np.mean(sampler_peaked_prior.acceptance_fraction),nwalkers*nsteps))
# discard burn-in points and flatten the walkers; the shape of samples is (nwalkers*nsteps, ndim)
samples_peaked_prior = sampler_peaked_prior.chain.reshape((-1, ndim))
###Output
Sampling Posterior with Flat Prior...
MCMC sampling using emcee (affine-invariant ensemble sampler) with 20 walkers
Mean acceptance fraction: 0.812 (in total 40000 steps)
Sampling Posterior with Peaked Prior...
MCMC sampling using emcee (affine-invariant ensemble sampler) with 20 walkers
Mean acceptance fraction: 0.813 (in total 40000 steps)
###Markdown
Plotting our PosteriorsWe can plot the samples of our posterior as histograms.
###Code
plt.hist(samples_flat_prior, bins=20, density=True, alpha=0.6,
edgecolor='blue', label='Posterior w/ Flat Prior')
plt.hist(samples_peaked_prior, bins=20, density=True, alpha=0.6,
edgecolor='red', label='Posterior w/ Peaked Prior')
plt.xlabel(r'$\eta/s$')
plt.ylabel(r'$p(\eta/s | v_2)$')
plt.yticks([])
plt.legend()
plt.tight_layout(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exercises1. What do you notice is different about the two posteriors above (their median, their uncertainties, etc.)?2. Try reducing the experimental error on our measurement. What do you expect to happen, and what happens? How does it depend on our emulation (interpolation) uncertainty?3. Try playing with the parameters which defined the 'peaked' prior (e.g. reducing/increasing its width). What happens?4. In the case where we use the flat prior, what is the relation between the posterior and the likelihood function? There are many useful libraries for plotting posteriors...The corner library provides an easy-to-use implementation. This is especially helpful for doing parameter estimation in more than one dimension.
###Code
# make a corner plot with the posterior distribution
fig = corner.corner(samples_flat_prior, labels=["$\eta/s$"],
quantiles=[0.05, 0.5, 0.95], #what do these limits control?
show_titles=True, title_kwargs={"fontsize": 12})
plt.tight_layout(True)
plt.show()
###Output
_____no_output_____
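###Markdown
We can also summarize the posteriors numerically. The cell below simply applies `np.percentile` to the MCMC samples to get the median and a 90% credible interval; it should be consistent with the quantiles shown in the corner plot title.
###Code
for label, samples in [("flat prior", samples_flat_prior),
                       ("peaked prior", samples_peaked_prior)]:
    low, med, high = np.percentile(samples[:, 0], [5, 50, 95])
    print("{0}: eta/s = {1:.3f} (+{2:.3f} / -{3:.3f}) at 90% credibility".format(
        label, med, high - med, med - low))
###Output
_____no_output_____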
###Markdown
Sometimes it is more aesthetically pleasing to apply KDE smoothing to our posterior samples (e.g. below).
###Code
sns.distplot(samples_flat_prior, hist=False, color="b", kde_kws={"shade": True}, label='Posterior w/ Flat Prior')
sns.distplot(samples_peaked_prior, hist=False, color="r", kde_kws={"shade": True}, label='Posterior w/ Peaked Prior')
###Output
_____no_output_____ |
tutorials/Certification_Trainings/Public/databricks_notebooks/2.6/1.SparkNLP_Basics_v2.6.3.ipynb | ###Markdown
 1. Spark NLP Basics v2.6.3
###Code
import sparknlp
print("Spark NLP version", sparknlp.version())
print("Apache Spark version:", spark.version)
spark
###Output
_____no_output_____
###Markdown
Using Pretrained Pipelines https://github.com/JohnSnowLabs/spark-nlp-models https://nlp.johnsnowlabs.com/models
###Code
from sparknlp.pretrained import PretrainedPipeline
testDoc = '''Peter is a very good persn.
My life in Russia is very intersting.
John and Peter are brothrs. However they don't support each other that much.
Lucas Nogal Dunbercker is no longer happy. He has a good car though.
Europe is very culture rich. There are huge churches! and big houses!
'''
testDoc
###Output
_____no_output_____
###Markdown
Explain Document ML **Stages**- DocumentAssembler- SentenceDetector- Tokenizer- Lemmatizer- Stemmer- Part of Speech- SpellChecker (Norvig)
###Code
pipeline = PretrainedPipeline('explain_document_ml', lang='en')
pipeline.model.stages
result = pipeline.annotate(testDoc)
result.keys()
result['sentence']
result['token']
list(zip(result['token'], result['pos']))
list(zip(result['token'], result['lemmas'], result['stems'], result['spell']))
import pandas as pd
df = pd.DataFrame({'token':result['token'],
'corrected':result['spell'], 'POS':result['pos'],
'lemmas':result['lemmas'], 'stems':result['stems']})
df
###Output
_____no_output_____
###Markdown
Explain Document DL **Stages**- DocumentAssembler- SentenceDetector- Tokenizer- NER (NER with GloVe 100D embeddings, CoNLL2003 dataset)- Lemmatizer- Stemmer- Part of Speech- SpellChecker (Norvig)
###Code
pipeline_dl = PretrainedPipeline('explain_document_dl', lang='en')
pipeline_dl.model.stages
pipeline_dl.model.stages[-2].getStorageRef()
pipeline_dl.model.stages[-2].getClasses()
result = pipeline_dl.annotate(testDoc)
result.keys()
result['entities']
df = pd.DataFrame({'token':result['token'], 'ner_label':result['ner'],
'spell_corrected':result['checked'], 'POS':result['pos'],
'lemmas':result['lemma'], 'stems':result['stem']})
df
###Output
_____no_output_____
###Markdown
Recognize Entities DL
###Code
recognize_entities = PretrainedPipeline('recognize_entities_dl', lang='en')
testDoc = '''
Peter is a very good persn.
My life in Russia is very intersting.
John and Peter are brothrs. However they don't support each other that much.
Lucas Nogal Dunbercker is no longer happy. He has a good car though.
Europe is very culture rich. There are huge churches! and big houses!
'''
result = recognize_entities.annotate(testDoc)
list(zip(result['token'], result['ner']))
###Output
_____no_output_____
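###Markdown
As a quick follow-up (plain Python only, no additional Spark NLP calls), we can tally how often each NER tag appears in the result above.
###Code
from collections import Counter
# count the IOB-style NER tags returned by the pipeline
Counter(result['ner'])
###Output
_____no_output_____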
###Markdown
Clean Stop Words
###Code
clean_stop = PretrainedPipeline('clean_stop', lang='en')
result = clean_stop.annotate(testDoc)
result.keys()
' '.join(result['cleanTokens'])
###Output
_____no_output_____
###Markdown
Clean Slang
###Code
clean_slang = PretrainedPipeline('clean_slang', lang='en')
result = clean_slang.annotate(' Whatsup bro, call me ASAP')
' '.join(result['normal'])
###Output
_____no_output_____
###Markdown
Spell Checker (Norvig Algo)ref: https://norvig.com/spell-correct.html
###Code
spell_checker = PretrainedPipeline('check_spelling', lang='en')
testDoc = '''
Peter is a very good persn.
My life in Russia is very intersting.
John and Peter are brothrs. However they don't support each other that much.
Lucas Nogal Dunbercker is no longer happy. He has a good car though.
Europe is very culture rich. There are huge churches! and big houses!
'''
result = spell_checker.annotate(testDoc)
result.keys()
list(zip(result['token'], result['checked']))
###Output
_____no_output_____
###Markdown
Spell Checker DL https://medium.com/spark-nlp/applying-context-aware-spell-checking-in-spark-nlp-3c29c46963bc
###Code
spell_checker_dl = PretrainedPipeline('check_spelling_dl', lang='en')
text = 'We will go to swimming if the ueather is nice.'
result = spell_checker_dl.annotate(text)
list(zip(result['token'], result['checked']))
result.keys()
# check for the different occurrences of the word "ueather"
examples = ['We will go to swimming if the ueather is nice.',\
"I have a black ueather jacket, so nice.",\
"I introduce you to my sister, she is called ueather."]
results = spell_checker_dl.annotate(examples)
for result in results:
print (list(zip(result['token'], result['checked'])))
for result in results:
print (result['document'],'>>',[pairs for pairs in list(zip(result['token'], result['checked'])) if pairs[0]!=pairs[1]])
# if we had tried the same with spell_checker (previous version)
results = spell_checker.annotate(examples)
for result in results:
print (list(zip(result['token'], result['checked'])))
###Output
_____no_output_____
###Markdown
Parsing a list of texts
###Code
testDoc_list = ['French author who helped pioner the science-fiction genre.',
'Verne wrate about space, air, and underwater travel before navigable aircrast',
'Practical submarines were invented, and before any means of space travel had been devised.']
testDoc_list
pipeline = PretrainedPipeline('explain_document_ml', lang='en')
result_list = pipeline.annotate(testDoc_list)
len (result_list)
result_list[0]
###Output
_____no_output_____
###Markdown
Using fullAnnotate to get more details ```annotatorType: String, begin: Int, end: Int, result: String, (this is what annotate returns)metadata: Map[String, String], embeddings: Array[Float]```
###Code
text = 'Peter Parker is a nice guy and lives in New York'
# pipeline_dl >> explain_document_dl
detailed_result = pipeline_dl.fullAnnotate(text)
detailed_result
detailed_result[0]['entities']
detailed_result[0]['entities'][0].result
chunks=[]
entities=[]
for n in detailed_result[0]['entities']:
chunks.append(n.result)
entities.append(n.metadata['entity'])
df = pd.DataFrame({'chunks':chunks, 'entities':entities})
df
tuples = []
for x,y,z in zip(detailed_result[0]["token"], detailed_result[0]["pos"], detailed_result[0]["ner"]):
tuples.append((int(x.metadata['sentence']), x.result, x.begin, x.end, y.result, z.result))
df = pd.DataFrame(tuples, columns=['sent_id','token','start','end','pos', 'ner'])
df
###Output
_____no_output_____
###Markdown
Use pretrained match_chunk Pipeline for Individual Noun Phrase **Stages**- DocumentAssembler- SentenceDetector- Tokenizer- Part of Speech- ChunkerPipeline:- The pipeline uses regex `<DT>?<JJ>*<NN>+`- which states that whenever the chunker finds an optional determiner (DT) followed by any number of adjectives (JJ) and then a noun (NN), a Noun Phrase (NP) chunk should be formed.
###Code
pipeline = PretrainedPipeline('match_chunks', lang='en')
pipeline.model.stages
result = pipeline.annotate("The book has many chapters") # single noun phrase
result
result['chunk']
result = pipeline.annotate("the little yellow dog barked at the cat") #multiple noune phrases
result
result['chunk']
###Output
_____no_output_____
###Markdown
Extract exact dates from referential date phrases
###Code
pipeline = PretrainedPipeline('match_datetime', lang='en')
result = pipeline.annotate("I saw him yesterday and he told me that he will visit us next week")
result
detailed_result = pipeline.fullAnnotate("I saw him yesterday and he told me that he will visit us next week")
detailed_result
tuples = []
for x in detailed_result[0]["token"]:
tuples.append((int(x.metadata['sentence']), x.result, x.begin, x.end))
df = pd.DataFrame(tuples, columns=['sent_id','token','start','end'])
df
###Output
_____no_output_____
###Markdown
Sentiment Analysis Vivek algopaper: `Fast and accurate sentiment classification using an enhanced Naive Bayes model` https://arxiv.org/abs/1305.6143 code: `https://github.com/vivekn/sentiment`
###Code
sentiment = PretrainedPipeline('analyze_sentiment', lang='en')
result = sentiment.annotate("The movie I watched today was not a good one")
result['sentiment']
###Output
_____no_output_____
###Markdown
DL version (trained on imdb)
###Code
sentiment_imdb = PretrainedPipeline('analyze_sentimentdl_use_imdb', lang='en')
sentiment_imdb_glove = PretrainedPipeline('analyze_sentimentdl_glove_imdb', lang='en')
comment = '''
It's a very scary film but what impressed me was how true the film sticks to the original's tricks; it isn't filled with loud in-your-face jump scares, in fact, a lot of what makes this film scary is the slick cinematography and intricate shadow play. The use of lighting and creation of atmosphere is what makes this film so tense, which is why it's perfectly suited for those who like Horror movies but without the obnoxious gore.
'''
result = sentiment_imdb_glove.annotate(comment)
result['sentiment']
sentiment_imdb_glove.fullAnnotate(comment)[0]['sentiment']
###Output
_____no_output_____
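###Markdown
The pretrained pipelines also accept a list of texts, as we saw earlier with `explain_document_ml`, so several reviews can be scored in one call. The example reviews below are made up for illustration.
###Code
reviews = ["This movie was a waste of time, the plot made no sense at all.",
           "An absolute masterpiece, I was moved to tears.",
           "It was okay; some scenes dragged but the acting was solid."]
results = sentiment_imdb_glove.annotate(reviews)
# each result holds one sentiment label per detected sentence
pd.DataFrame({'review': reviews,
              'sentiment': [r['sentiment'] for r in results]})
###Output
_____no_output_____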
###Markdown
DL version (trained on twitter dataset)
###Code
sentiment_twitter = PretrainedPipeline('analyze_sentimentdl_use_twitter', lang='en')
result = sentiment_twitter.annotate("The movie I watched today was not a good one")
result['sentiment']
###Output
_____no_output_____ |
Tutorials/Step 3 - Using your Graph.ipynb | ###Markdown
Step 3: Using your GraphIn step 3 of this tutorial, we use our cleaned graph to create an Origin-Destination matrix (OD). Our setting remains Reykjavik, Iceland, as we look at travel times along the network to churches.
###Code
# This is a Jupyter Notebook extension which reloads all of the modules whenever you run the code
# This is optional but good if you are modifying and testing source code
%load_ext autoreload
%autoreload 2
import os, sys
import time
import networkx as nx
import geopandas as gpd
import pandas as pd
# add to your system path the location of the LoadOSM.py and GOSTnet.py scripts
sys.path.append("../")
import GOSTnets as gn
from shapely.geometry import Point
###Output
_____no_output_____
###Markdown
First, we read in the graph from the result of the cleaning process (Step 2)
###Code
pth = "./" # change this path to your working folder
G = nx.read_gpickle(os.path.join(pth, 'tutorial_outputs', r'iceland_network_clean.pickle'))
###Output
_____no_output_____
###Markdown
At this stage each edge in the network has a property called 'length'. This was actually computed during Step 1 when the generateRoadsGDF function was run. The units of this length are in kilometres.
###Code
gn.example_edge(G)
###Output
(0, 11677, {'Wkt': 'LINESTRING (-21.7150886 64.16856079999999, -21.7150429 64.1684612, -21.7150343 64.1684424, -21.7150189 64.16841460000001, -21.7149683 64.1683788, -21.7149165 64.1683566, -21.714839 64.16832599999999, -21.7138473 64.1680006)', 'id': 5779, 'infra_type': 'residential', 'osm_id': '55759237', 'key': 'edge_5779', 'length': 0.09030580094429581, 'Type': 'legitimate'})
###Markdown
We want to convert length to time, so that we can conduct analysis on how long it takes to reach certain destinations. We do this using the convert_network_to_time function. We have used a factor of 1000, because the function is expecting meters, so we need to convert the units of kilometers to meters. The convert_network_to_time function uses a default speed dictionary that assigns speed limits to OSM highway types. However, it is possible to specify your own speed dictionary.
###Code
G_time = gn.convert_network_to_time(G, distance_tag = 'length', road_col = 'infra_type', factor = 1000)
###Output
_____no_output_____
###Markdown
We can now use the 'time' property for each edge to work out how long it takes to get from one node to another!
###Code
gn.example_edge(G_time, 1)
###Output
(0, 11677, {'Wkt': 'LINESTRING (-21.7150886 64.16856079999999, -21.7150429 64.1684612, -21.7150343 64.1684424, -21.7150189 64.16841460000001, -21.7149683 64.1683788, -21.7149165 64.1683566, -21.714839 64.16832599999999, -21.7138473 64.1680006)', 'id': 5779, 'infra_type': 'residential', 'osm_id': '55759237', 'key': 'edge_5779', 'length': 90.30580094429581, 'Type': 'legitimate', 'time': 16.255044169973246, 'mode': 'drive'})
###Markdown
To do this for just one journey, we could call nx.shortest_path_length on any given origin or destination node. Let's list 10 of our nodes using this networkX function:
###Code
list(G_time.nodes)[:10]
A = list(G_time.nodes)[0] # first node in list
B = list(G_time.nodes)[10] # 10th node in list
travel_time = nx.shortest_path_length(G_time, A, B, weight = 'time')
print('The travel time between A and B is: %d seconds, or %d minutes!' % (travel_time, travel_time / 60))
###Output
The travel time between A and B is: 1451 seconds, or 24 minutes!
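###Markdown
Beyond the total travel time, we may also want the route itself. A quick sketch using networkx directly: `nx.shortest_path` returns the sequence of node IDs along the fastest route when weighted by time.
###Code
# node sequence of the fastest route between A and B, weighted by travel time
route = nx.shortest_path(G_time, A, B, weight='time')
print('The fastest route passes through %d nodes' % len(route))
print(route[:10])  # first few node IDs along the route
###Output
_____no_output_____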
###Markdown
In our example, we want to use our network for Reykjavik to work out the travel time to local churches.Here, we import a shapefile for Reykjavik, and reproject it to WGS 84:
###Code
rek = gpd.read_file(os.path.join(pth, 'tutorial_data', 'rek2.shp'))
rek = rek.to_crs('epsg:4326')
###Output
_____no_output_____
###Markdown
Next, we set a variable poly equal to just the geometry
###Code
poly = rek.geometry.iloc[0]
###Output
_____no_output_____
###Markdown
We can visualize this in-line by just calling it:
###Code
poly
###Output
_____no_output_____
###Markdown
With this in hand, we can read in a shapefile of destinations - here, the churches in Iceland. We use Shapely's 'within' command to select just those in the Reykjavik area:
###Code
churches = gpd.read_file(os.path.join(pth, 'tutorial_data', 'churches.shp'))
churches = churches.loc[churches.within(poly)]
###Output
_____no_output_____
###Markdown
In order to perform network analysis we want to know the closest network node to each church. For this, we use the pandana snap function to snap the church locations to the road network:
###Code
churches
# the CRS of the churches GeoDataFrame
churches.crs
#view the pandana_snap doc string
gn.pandana_snap?
###Output
_____no_output_____
###Markdown
We want the nearest node distance (NN_dist) to be measured in meters, which is why we include the target_crs parameter specifying the correct UTM zone.
###Code
churches = gn.pandana_snap_c(G_time, churches, source_crs = 'epsg:4326', target_crs = 'epsg:32627', add_dist_to_node_col = True)
###Output
_____no_output_____
###Markdown
As we can see from the NN_dist column, our church locations are very close to a node on the network in all cases
###Code
churches
###Output
_____no_output_____
###Markdown
When calculating an OD-Matrix, we can only use the node IDs as inputs. So, we convert this column of our dataframe over to a list of unique values:
###Code
destinations = list(set(churches.NN))
destinations
###Output
_____no_output_____
###Markdown
Further Analysis We would like to make an OD matrix where the origin is the cottage we are renting in the city, and the destinations are the churches in Reykjavik. This will help us work out how many churches we can see today! First, we need to create the origin. It has coordinates 64.152215, -22.002099 (Lat, Lon), so we make a Point from it:
###Code
# A list with a single Shapely Point object is created with (x,y)
my_house = [Point(-22.002099, 64.152215)]
###Output
_____no_output_____
###Markdown
Next, we load it into a GeoDataFrame and snap it to the network:
###Code
mini_gdf = gpd.GeoDataFrame({'geometry':my_house}, crs = {'init':'epsg:4326'}, geometry = 'geometry', index = [1])
mini_gdf
origin_gdf = gn.pandana_snap_c(G_time, mini_gdf, source_crs = 'epsg:4326', target_crs = 'epsg:32627')
origin_gdf
# This is the nearest node (NN)
origin_gdf.iloc[0].NN
###Output
_____no_output_____
###Markdown
Now, we can calculate the OD matrix using the GOSTnets calculate_OD function. Bear in mind that it takes list objects as inputs:
###Code
origin = [origin_gdf.iloc[0].NN]
OD = gn.calculate_OD(G_time, origin, destinations, fail_value = 9999999)
###Output
_____no_output_____
###Markdown
The OD matrix displays the time in seconds to reach each church
###Code
OD
###Output
_____no_output_____
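###Markdown
Before converting to minutes, a quick sketch of how to pull the closest church straight out of this array: the column with the smallest value corresponds to the nearest destination node.
###Code
closest_idx = int(OD.argmin())            # index of the smallest travel time
closest_node = destinations[closest_idx]  # corresponding network node ID
print('The closest church is snapped to node %s, about %.1f minutes away' % (
    closest_node, OD.min() / 60))
###Output
_____no_output_____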
###Markdown
We can use minutes as the measure by dividing every value in the OD matrix by 60. Then we can convert the array nicely into a pandas DataFrame.
###Code
OD = OD / 60
OD_df = pd.DataFrame(OD, columns = destinations, index = origin)
OD_df
###Output
_____no_output_____ |
DataCampProjects/Introduction to DataCamp Projects/notebook.ipynb | ###Markdown
1. This is a Jupyter notebook!A Jupyter notebook is a document that contains text cells (what you're reading right now) and code cells. What is special with a notebook is that it's interactive: You can change or add code cells, and then run a cell by first selecting it and then clicking the run cell button above ( ▶| Run ) or hitting ctrl + enter. The result will be displayed directly in the notebook. You could use a notebook as a simple calculator. For example, it's estimated that on average 256 children were born every minute in 2016. The code cell below calculates how many children were born on average on a day.
###Code
# I'm a code cell, click me, then run me!
256 * 60 * 24  # children per minute × minutes per hour × hours per day
###Output
_____no_output_____
###Markdown
2. Put _any_ code in code cellsBut a code cell can contain much more than a simple one-liner! This is a notebook running python and you can put any python code in a code cell (but notebooks can run other languages too, like R). Below is a code cell where we define a whole new function (greet). To show the output of greet we run it last in the code cell as the last value is always printed out.
###Code
def greet(first_name, last_name):
greeting = 'My name is ' + last_name + ', ' + first_name + ' ' + last_name + '!'
return greeting
# Replace with your first and last name.
# That is, unless your name is already James Bond.
greet('Arash', 'Tabrizian')
###Output
_____no_output_____
###Markdown
3. Jupyter notebooks ♡ dataWe've seen that notebooks can display basic objects such as numbers and strings. But notebooks also support the objects used in data science, which makes them great for interactive data analysis!For example, below we create a pandas DataFrame by reading in a csv-file with the average global temperature for the years 1850 to 2016. If we look at the head of this DataFrame the notebook will render it as a nice-looking table.
###Code
# Importing the pandas module
import pandas as pd
# Reading in the global temperature data
global_temp = pd.read_csv('datasets/global_temperature.csv')
# Take a look at the first datapoints
# ... YOUR CODE FOR TASK 3 ...
global_temp.head()
###Output
_____no_output_____
###Markdown
4. Jupyter notebooks ♡ plotsTables are nice but — as the saying goes — "a plot can show a thousand data points". Notebooks handle plots as well, but it requires a bit of magic. Here magic does not refer to any arcane rituals but to so-called "magic commands" that affect how the Jupyter notebook works. Magic commands start with either % or %% and the command we need to nicely display plots inline is %matplotlib inline. With this magic in place, all plots created in code cells will automatically be displayed inline. Let's take a look at the global temperature for the last 150 years.
###Code
# Setting up inline plotting using jupyter notebook "magic"
%matplotlib inline
import matplotlib.pyplot as plt
# Plotting global temperature in degrees celsius by year.
plt.plot(global_temp['year'], global_temp['degrees_celsius'])
# Adding some nice labels
plt.xlabel('Year')
plt.ylabel('Global Temperature (in Celsius)')
###Output
_____no_output_____
###Markdown
5. Jupyter notebooks ♡ a lot moreTables and plots are the most common outputs when doing data analysis, but Jupyter notebooks can render many more types of outputs such as sound, animation, video, etc. Yes, almost anything that can be shown in a modern web browser. This also makes it possible to include interactive widgets directly in the notebook!For example, this (slightly complicated) code will create an interactive map showing the locations of the three largest smartphone companies in 2016. You can move and zoom the map, and you can click the markers for more info!
###Code
# Making a map using the folium module
import folium
phone_map = folium.Map()
# Top three smart phone companies by market share in 2016.
companies = [
{'loc': [37.4970, 127.0266], 'label': 'Samsung: 20.5%'},
{'loc': [37.3318, -122.0311], 'label': 'Apple: 14.4%'},
{'loc': [22.5431, 114.0579], 'label': 'Huawei: 8.9%'}]
# Adding markers to the map.
for company in companies:
marker = folium.Marker(location=company['loc'], popup=company['label'])
marker.add_to(phone_map)
# The last object in the cell always gets shown in the notebook
phone_map
###Output
_____no_output_____
###Markdown
6. Goodbye for now!This was just a short introduction to Jupyter notebooks, an open source technology that is increasingly used for data science and analysis. I hope you enjoyed it! :)
###Code
# Are you ready to get started with DataCamp projects?
I_am_ready = False
# Ps.
# Feel free to try out any other stuff in this notebook.
# It's all yours!
###Output
_____no_output_____ |
docs/tutorials/detect.ipynb | ###Markdown
Source detection with Gammapy ContextThe first task in a source catalogue production is to identify significant excesses in the data that can be associated to unknown sources and provide a preliminary parametrization in term of position, extent, and flux. In this notebook we will use Fermi-LAT data to illustrate how to detect candidate sources in counts images with known background.**Objective: build a list of significant excesses in a Fermi-LAT map** Proposed approach This notebook show how to do source detection with Gammapy using the methods available in `~gammapy.estimators`.We will use images from a Fermi-LAT 3FHL high-energy Galactic center dataset to do this:* perform adaptive smoothing on counts image* produce 2-dimensional test-statistics (TS)* run a peak finder to detect point-source candidates* compute Li & Ma significance images* estimate source candidates radius and excess countsNote that what we do here is a quick-look analysis, the production of real source catalogs use more elaborate procedures.We will work with the following functions and classes:* `~gammapy.maps.WcsNDMap`* `~gammapy.estimators.ASmoothEstimator`* `~gammapy.estimators.TSMapEstimator`* `gammapy.estimators.utils.find_peaks` SetupAs always, let's get started with some setup ...
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from gammapy.maps import Map
from gammapy.estimators import ASmoothMapEstimator, TSMapEstimator
from gammapy.estimators.utils import find_peaks
from gammapy.datasets import MapDataset
from gammapy.modeling.models import (
BackgroundModel,
SkyModel,
PowerLawSpectralModel,
PointSpatialModel,
)
from gammapy.irf import PSFMap, EnergyDependentTablePSF, EDispKernelMap
from astropy.coordinates import SkyCoord
import astropy.units as u
import numpy as np
###Output
_____no_output_____
###Markdown
Read in input imagesWe first read in the counts cube and sum over the energy axis:
###Code
counts = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-counts-cube.fits.gz"
)
background = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-background-cube.fits.gz"
)
exposure = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-exposure-cube.fits.gz"
)
# unit is not properly stored on the file. We add it manually
exposure.unit = "cm2s"
psf = EnergyDependentTablePSF.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-psf-cube.fits.gz"
)
psfmap = PSFMap.from_energy_dependent_table_psf(psf)
edisp = EDispKernelMap.from_diagonal_response(
energy_axis=counts.geom.axes["energy"],
energy_axis_true=exposure.geom.axes["energy_true"],
)
dataset = MapDataset(
counts=counts,
background=background,
exposure=exposure,
psf=psfmap,
name="fermi-3fhl-gc",
edisp=edisp,
)
###Output
_____no_output_____
###Markdown
Adaptive smoothing For visualisation purpose it can be nice to look at a smoothed counts image. This can be performed using the adaptive smoothing algorithm from [Ebeling et al. (2006)](https://ui.adsabs.harvard.edu/abs/2006MNRAS.368...65E/abstract). In the following example the `threshold` argument gives the minimum significance expected, values below are clipped.
###Code
%%time
scales = u.Quantity(np.arange(0.05, 1, 0.05), unit="deg")
smooth = ASmoothMapEstimator(
threshold=3, scales=scales, energy_edges=[10, 500] * u.GeV
)
images = smooth.run(dataset)
plt.figure(figsize=(15, 5))
images["flux"].plot(add_cbar=True, stretch="asinh");
###Output
_____no_output_____
###Markdown
TS map estimationThe Test Statistic, TS = 2 ∆ log L ([Mattox et al. 1996](https://ui.adsabs.harvard.edu/abs/1996ApJ...461..396M/abstract)), compares the likelihood function L optimized with and without a given source.The TS map is computed by fitting a single amplitude parameter on each pixel as described in Appendix A of [Stewart (2009)](https://ui.adsabs.harvard.edu/abs/2009A%26A...495..989S/abstract). The fit is simplified by finding roots of the derivative of the fit statistics (default settings use [Brent's method](https://en.wikipedia.org/wiki/Brent%27s_method)).We first need to define the model that will be used to test for the existence of a source. Here, we use a point source.
###Code
spatial_model = PointSpatialModel()
spectral_model = PowerLawSpectralModel(index=2)
model = SkyModel(spatial_model=spatial_model, spectral_model=spectral_model)
%%time
estimator = TSMapEstimator(
model,
kernel_width="1 deg",
selection_optional=[],
energy_edges=[10, 500] * u.GeV,
)
maps = estimator.run(dataset)
###Output
_____no_output_____
###Markdown
Plot resulting images
###Code
plt.figure(figsize=(15, 5))
maps["sqrt_ts"].plot(add_cbar=True);
plt.figure(figsize=(15, 5))
maps["flux"].plot(add_cbar=True, stretch="sqrt", vmin=0);
plt.figure(figsize=(15, 5))
maps["niter"].plot(add_cbar=True);
###Output
_____no_output_____
###Markdown
Source candidatesLet's run a peak finder on the `sqrt_ts` image to get a list of point-source candidates (positions and peak `sqrt_ts` values).The `find_peaks` function performs a local maximum search in a sliding window; the argument `min_distance` is the minimum pixel distance between peaks (the smallest possible value, and the default, is 1 pixel).
###Code
sources = find_peaks(maps["sqrt_ts"], threshold=5, min_distance="0.25 deg")
nsou = len(sources)
sources
# Plot sources on top of significance sky image
plt.figure(figsize=(15, 5))
_, ax, _ = maps["sqrt_ts"].plot(add_cbar=True)
ax.scatter(
sources["ra"],
sources["dec"],
transform=plt.gca().get_transform("icrs"),
color="none",
edgecolor="w",
marker="o",
s=600,
lw=1.5,
);
###Output
_____no_output_____
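###Markdown
The introduction also mentions Li & Ma significance images. A quick sketch of how these could be produced is shown below; it assumes the `ExcessMapEstimator` interface of recent Gammapy releases, and the key of the returned significance map ("sqrt_ts" here) may be named differently (e.g. "significance") in older versions.
###Code
from gammapy.estimators import ExcessMapEstimator

# correlate counts within a 0.1 deg radius and compute Li & Ma significance
excess_estimator = ExcessMapEstimator(
    correlation_radius="0.1 deg", energy_edges=[10, 500] * u.GeV
)
lima_maps = excess_estimator.run(dataset)

plt.figure(figsize=(15, 5))
lima_maps["sqrt_ts"].plot(add_cbar=True);
###Output
_____no_output_____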
###Markdown
Source detection with Gammapy ContextThe first task in a source catalogue production is to identify significant excesses in the data that can be associated to unknown sources and provide a preliminary parametrization in term of position, extent, and flux. In this notebook we will use Fermi-LAT data to illustrate how to detect candidate sources in counts images with known background.**Objective: build a list of significant excesses in a Fermi-LAT map** Proposed approach This notebook show how to do source detection with Gammapy using the methods available in `~gammapy.estimators`.We will use images from a Fermi-LAT 3FHL high-energy Galactic center dataset to do this:* perform adaptive smoothing on counts image* produce 2-dimensional test-statistics (TS)* run a peak finder to detect point-source candidates* compute Li & Ma significance images* estimate source candidates radius and excess countsNote that what we do here is a quick-look analysis, the production of real source catalogs use more elaborate procedures.We will work with the following functions and classes:* `~gammapy.maps.WcsNDMap`* `~gammapy.estimators.ASmoothEstimator`* `~gammapy.estimators.TSMapEstimator`* `gammapy.estimators.utils.find_peaks` SetupAs always, let's get started with some setup ...
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from gammapy.maps import Map
from gammapy.estimators import (
ASmoothMapEstimator,
TSMapEstimator,
)
from gammapy.estimators.utils import find_peaks
from gammapy.datasets import MapDataset
from gammapy.modeling.models import (
BackgroundModel,
SkyModel,
PowerLawSpectralModel,
PointSpatialModel,
)
from gammapy.irf import PSFMap, EnergyDependentTablePSF
from astropy.coordinates import SkyCoord
import astropy.units as u
import numpy as np
###Output
_____no_output_____
###Markdown
Read in input imagesWe first read in the counts cube and sum over the energy axis:
###Code
counts = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-counts-cube.fits.gz"
)
background = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-background-cube.fits.gz"
)
background = BackgroundModel(background, datasets_names=["fermi-3fhl-gc"])
exposure = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-exposure-cube.fits.gz"
)
# unit is not properly stored on the file. We add it manually
exposure.unit = "cm2s"
mask_safe = counts.copy(data=np.ones_like(counts.data).astype("bool"))
psf = EnergyDependentTablePSF.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-psf-cube.fits.gz"
)
psfmap = PSFMap.from_energy_dependent_table_psf(psf)
dataset = MapDataset(
counts=counts,
models=[background],
exposure=exposure,
psf=psfmap,
mask_safe=mask_safe,
name="fermi-3fhl-gc",
)
dataset = dataset.to_image()
###Output
_____no_output_____
###Markdown
Adaptive smoothing For visualisation purpose it can be nice to look at a smoothed counts image. This can be performed using the adaptive smoothing algorithm from [Ebeling et al. (2006)](https://ui.adsabs.harvard.edu/abs/2006MNRAS.368...65E/abstract). In the following example the `threshold` argument gives the minimum significance expected, values below are clipped.
###Code
%%time
scales = u.Quantity(np.arange(0.05, 1, 0.05), unit="deg")
smooth = ASmoothMapEstimator(threshold=3, scales=scales)
images = smooth.run(dataset)
plt.figure(figsize=(15, 5))
images["counts"].plot(add_cbar=True, vmax=10)
###Output
_____no_output_____
###Markdown
TS map estimationThe Test Statistic, TS = 2 ∆ log L ([Mattox et al. 1996](https://ui.adsabs.harvard.edu/abs/1996ApJ...461..396M/abstract)), compares the likelihood function L optimized with and without a given source.The TS map is computed by fitting a single amplitude parameter on each pixel as described in Appendix A of [Stewart (2009)](https://ui.adsabs.harvard.edu/abs/2009A%26A...495..989S/abstract). The fit is simplified by finding roots of the derivative of the fit statistics (default settings use [Brent's method](https://en.wikipedia.org/wiki/Brent%27s_method)).We first need to define the model that will be used to test for the existence of a source. Here, we use a point source.
###Code
spatial_model = PointSpatialModel()
spectral_model = PowerLawSpectralModel(index=2)
model = SkyModel(spatial_model=spatial_model, spectral_model=spectral_model)
%%time
estimator = TSMapEstimator(model, kernel_width="0.4 deg")
images = estimator.run(dataset)
###Output
_____no_output_____
###Markdown
Plot resulting images
###Code
plt.figure(figsize=(15, 5))
images["sqrt_ts"].plot(add_cbar=True);
plt.figure(figsize=(15, 5))
images["flux"].plot(add_cbar=True, stretch="sqrt", vmin=0);
plt.figure(figsize=(15, 5))
images["niter"].plot(add_cbar=True);
###Output
_____no_output_____
###Markdown
Source candidatesLet's run a peak finder on the `sqrt_ts` image to get a list of point-sources candidates (positions and peak `sqrt_ts` values).The `find_peaks` function performs a local maximun search in a sliding window, the argument `min_distance` is the minimum pixel distance between peaks (smallest possible value and default is 1 pixel).
###Code
sources = find_peaks(images["sqrt_ts"], threshold=8, min_distance=1)
nsou = len(sources)
sources
# Plot sources on top of significance sky image
plt.figure(figsize=(15, 5))
_, ax, _ = images["sqrt_ts"].plot(add_cbar=True)
ax.scatter(
sources["ra"],
sources["dec"],
transform=plt.gca().get_transform("icrs"),
color="none",
edgecolor="w",
marker="o",
s=600,
lw=1.5,
);
###Output
_____no_output_____
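###Markdown
The peak table can also be converted to a pandas DataFrame to rank the candidates by significance. The name of the peak column ("value") is an assumption and may differ between Gammapy versions, which is why the column names are printed first.
###Code
import pandas as pd

# inspect the columns returned by find_peaks, then rank the candidates
print(sources.colnames)
candidates = sources.to_pandas()
candidates.sort_values("value", ascending=False).head()
###Output
_____no_output_____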
###Markdown
Source detection and significance maps ContextThe first task in a source catalogue production is to identify significant excesses in the data that can be associated to unknown sources and provide a preliminary parametrization in term of position, extent, and flux. In this notebook we will use Fermi-LAT data to illustrate how to detect candidate sources in counts images with known background.**Objective: build a list of significant excesses in a Fermi-LAT map** Proposed approach This notebook show how to do source detection with Gammapy using the methods available in `~gammapy.estimators`.We will use images from a Fermi-LAT 3FHL high-energy Galactic center dataset to do this:* perform adaptive smoothing on counts image* produce 2-dimensional test-statistics (TS)* run a peak finder to detect point-source candidates* compute Li & Ma significance images* estimate source candidates radius and excess countsNote that what we do here is a quick-look analysis, the production of real source catalogs use more elaborate procedures.We will work with the following functions and classes:* `~gammapy.maps.WcsNDMap`* `~gammapy.estimators.ASmoothEstimator`* `~gammapy.estimators.TSMapEstimator`* `gammapy.estimators.utils.find_peaks` SetupAs always, let's get started with some setup ...
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from gammapy.maps import Map
from gammapy.estimators import ASmoothMapEstimator, TSMapEstimator
from gammapy.estimators.utils import find_peaks
from gammapy.datasets import MapDataset
from gammapy.modeling.models import (
BackgroundModel,
SkyModel,
PowerLawSpectralModel,
PointSpatialModel,
)
from gammapy.irf import PSFMap, EDispKernelMap
from astropy.coordinates import SkyCoord
import astropy.units as u
import numpy as np
###Output
_____no_output_____
###Markdown
Read in input imagesWe first read in the counts cube and sum over the energy axis:
###Code
counts = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-counts-cube.fits.gz"
)
background = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-background-cube.fits.gz"
)
exposure = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-exposure-cube.fits.gz"
)
# unit is not properly stored on the file. We add it manually
exposure.unit = "cm2s"
psfmap = PSFMap.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-psf-cube.fits.gz", format="gtpsf"
)
edisp = EDispKernelMap.from_diagonal_response(
energy_axis=counts.geom.axes["energy"],
energy_axis_true=exposure.geom.axes["energy_true"],
)
dataset = MapDataset(
counts=counts,
background=background,
exposure=exposure,
psf=psfmap,
name="fermi-3fhl-gc",
edisp=edisp,
)
###Output
_____no_output_____
###Markdown
Adaptive smoothing For visualisation purpose it can be nice to look at a smoothed counts image. This can be performed using the adaptive smoothing algorithm from [Ebeling et al. (2006)](https://ui.adsabs.harvard.edu/abs/2006MNRAS.368...65E/abstract). In the following example the `threshold` argument gives the minimum significance expected, values below are clipped.
###Code
%%time
scales = u.Quantity(np.arange(0.05, 1, 0.05), unit="deg")
smooth = ASmoothMapEstimator(
threshold=3, scales=scales, energy_edges=[10, 500] * u.GeV
)
images = smooth.run(dataset)
plt.figure(figsize=(15, 5))
images["flux"].plot(add_cbar=True, stretch="asinh");
###Output
_____no_output_____
###Markdown
TS map estimationThe Test Statistic, TS = 2 ∆ log L ([Mattox et al. 1996](https://ui.adsabs.harvard.edu/abs/1996ApJ...461..396M/abstract)), compares the likelihood function L optimized with and without a given source.The TS map is computed by fitting a single amplitude parameter on each pixel as described in Appendix A of [Stewart (2009)](https://ui.adsabs.harvard.edu/abs/2009A%26A...495..989S/abstract). The fit is simplified by finding roots of the derivative of the fit statistics (default settings use [Brent's method](https://en.wikipedia.org/wiki/Brent%27s_method)).We first need to define the model that will be used to test for the existence of a source. Here, we use a point source.
###Code
spatial_model = PointSpatialModel()
spectral_model = PowerLawSpectralModel(index=2)
model = SkyModel(spatial_model=spatial_model, spectral_model=spectral_model)
%%time
estimator = TSMapEstimator(
model,
kernel_width="1 deg",
selection_optional=[],
energy_edges=[10, 500] * u.GeV,
)
maps = estimator.run(dataset)
###Output
_____no_output_____
###Markdown
Plot resulting images
###Code
plt.figure(figsize=(15, 5))
maps["sqrt_ts"].plot(add_cbar=True);
plt.figure(figsize=(15, 5))
maps["flux"].plot(add_cbar=True, stretch="sqrt", vmin=0);
plt.figure(figsize=(15, 5))
maps["niter"].plot(add_cbar=True);
###Output
_____no_output_____
###Markdown
Source candidatesLet's run a peak finder on the `sqrt_ts` image to get a list of point-sources candidates (positions and peak `sqrt_ts` values).The `find_peaks` function performs a local maximun search in a sliding window, the argument `min_distance` is the minimum pixel distance between peaks (smallest possible value and default is 1 pixel).
###Code
sources = find_peaks(maps["sqrt_ts"], threshold=5, min_distance="0.25 deg")
nsou = len(sources)
sources
# Plot sources on top of significance sky image
plt.figure(figsize=(15, 5))
_, ax, _ = maps["sqrt_ts"].plot(add_cbar=True)
ax.scatter(
sources["ra"],
sources["dec"],
transform=plt.gca().get_transform("icrs"),
color="none",
edgecolor="w",
marker="o",
s=600,
lw=1.5,
);
###Output
_____no_output_____ |
4-Advanced-Deployment-Scenarios-with-TensorFlow/rest_simple.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Train and serve a TensorFlow model with TensorFlow Serving **Warning: This notebook is designed to be run in a Google Colab only**. It installs packages on the system and requires root access. If you want to run it in a local Jupyter notebook, please proceed with caution.Note: You can run this example right now in a Jupyter-style notebook, no setup required! Just click "Run in Google Colab". This guide trains a neural network model to classify [images of clothing, like sneakers and shirts](https://github.com/zalandoresearch/fashion-mnist), saves the trained model, and then serves it with [TensorFlow Serving](https://www.tensorflow.org/serving/). The focus is on TensorFlow Serving, rather than the modeling and training in TensorFlow, so for a complete example which focuses on the modeling and training see the [Basic Classification example](https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/keras/basic_classification.ipynb).This guide uses [tf.keras](https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/keras.ipynb), a high-level API to build and train models in TensorFlow.
###Code
import sys
# Confirm that we're using Python 3
assert sys.version_info.major is 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
# TensorFlow and tf.keras
print("Installing dependencies for Colab environment")
!pip install -Uq grpcio==1.26.0
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import os
import subprocess
print('TensorFlow version: {}'.format(tf.__version__))
###Output
Installing dependencies for Colab environment
[K |████████████████████████████████| 2.4MB 8.2MB/s
[31mERROR: tensorflow 2.5.0 has requirement grpcio~=1.34.0, but you'll have grpcio 1.26.0 which is incompatible.[0m
[?25hTensorFlow version: 2.5.0
###Markdown
Create your model Import the Fashion MNIST datasetThis guide uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here: <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. You can access the Fashion MNIST directly from TensorFlow, just import and load the data.Note: Although these are really images, they are loaded as NumPy arrays and not binary image objects.
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# scale the values to 0.0 to 1.0
train_images = train_images / 255.0
test_images = test_images / 255.0
# reshape for feeding into the model
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print('\ntrain_images.shape: {}, of {}'.format(train_images.shape, train_images.dtype))
print('test_images.shape: {}, of {}'.format(test_images.shape, test_images.dtype))
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step
train_images.shape: (60000, 28, 28, 1), of float64
test_images.shape: (10000, 28, 28, 1), of float64
###Markdown
Train and evaluate your modelLet's use the simplest possible CNN, since we're not focused on the modeling part.
###Code
model = keras.Sequential([
keras.layers.Conv2D(input_shape=(28,28,1), filters=8, kernel_size=3,
strides=2, activation='relu', name='Conv1'),
keras.layers.Flatten(),
keras.layers.Dense(10, name='Dense')
])
model.summary()
testing = False
epochs = 5
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.fit(train_images, train_labels, epochs=epochs)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy: {}'.format(test_acc))
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Conv1 (Conv2D) (None, 13, 13, 8) 80
_________________________________________________________________
flatten (Flatten) (None, 1352) 0
_________________________________________________________________
Dense (Dense) (None, 10) 13530
=================================================================
Total params: 13,610
Trainable params: 13,610
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
1875/1875 [==============================] - 34s 2ms/step - loss: 0.5247 - sparse_categorical_accuracy: 0.8181
Epoch 2/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3793 - sparse_categorical_accuracy: 0.8657
Epoch 3/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3463 - sparse_categorical_accuracy: 0.8761
Epoch 4/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3280 - sparse_categorical_accuracy: 0.8820
Epoch 5/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3143 - sparse_categorical_accuracy: 0.8871
313/313 [==============================] - 1s 2ms/step - loss: 0.3463 - sparse_categorical_accuracy: 0.8761
Test accuracy: 0.8761000037193298
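###Markdown
Before saving, a quick sanity check: run the in-memory model on a few test images and map the predicted indices to the class names defined above.
###Code
# predict the first three test images and translate indices to class names
sample_logits = model.predict(test_images[:3])
sample_preds = np.argmax(sample_logits, axis=1)
for pred, true in zip(sample_preds, test_labels[:3]):
    print('predicted: {0:<12} true: {1}'.format(class_names[pred], class_names[true]))
###Output
_____no_output_____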
###Markdown
Save your modelTo load our trained model into TensorFlow Serving we first need to save it in [SavedModel](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/saved_model) format. This will create a protobuf file in a well-defined directory hierarchy, and will include a version number. [TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving) allows us to select which version of a model, or "servable" we want to use when we make inference requests. Each version will be exported to a different sub-directory under the given path.
###Code
# Save the model in the SavedModel format
# The signature definition is defined by the input and output tensors,
# and stored with the default serving key
import tempfile
MODEL_DIR = tempfile.gettempdir()
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
print('export_path = {}\n'.format(export_path))
tf.keras.models.save_model(
model,
export_path,
overwrite=True,
include_optimizer=True,
save_format=None,
signatures=None,
options=None
)
print('\nSaved model:')
!ls -l {export_path}
###Output
WARNING:absl:Function `_wrapped_model` contains input name(s) Conv1_input with unsupported characters which will be renamed to conv1_input in the SavedModel.
###Markdown
Examine your saved modelWe'll use the command line utility `saved_model_cli` to look at the [MetaGraphDefs](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/MetaGraphDef) (the models) and [SignatureDefs](../signature_defs) (the methods you can call) in our SavedModel. See [this discussion of the SavedModel CLI](https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/saved_model.md#cli-to-inspect-and-execute-savedmodel) in the TensorFlow Guide.
###Code
!saved_model_cli show --dir {export_path} --all
###Output
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['__saved_model_init_op']:
The given SavedModel SignatureDef contains the following input(s):
The given SavedModel SignatureDef contains the following output(s):
outputs['__saved_model_init_op'] tensor_info:
dtype: DT_INVALID
shape: unknown_rank
name: NoOp
Method name is:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['Conv1_input'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 28, 28, 1)
name: serving_default_Conv1_input:0
The given SavedModel SignatureDef contains the following output(s):
outputs['Dense'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 10)
name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict
WARNING: Logging before flag parsing goes to stderr.
W0526 03:34:24.893263 140181215819648 deprecation.py:506] From /usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling __init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
Defined Functions:
Function Name: '__call__'
Option #1
Callable with:
Argument #1
Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'Conv1_input')
Argument #2
DType: bool
Value: False
Argument #3
DType: NoneType
Value: None
Option #2
Callable with:
Argument #1
inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'inputs')
Argument #2
DType: bool
Value: False
Argument #3
DType: NoneType
Value: None
Option #3
Callable with:
Argument #1
inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'inputs')
Argument #2
DType: bool
Value: True
Argument #3
DType: NoneType
Value: None
Option #4
Callable with:
Argument #1
Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'Conv1_input')
Argument #2
DType: bool
Value: True
Argument #3
DType: NoneType
Value: None
Function Name: '_default_save_signature'
Option #1
Callable with:
Argument #1
Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'Conv1_input')
Function Name: 'call_and_return_all_conditional_losses'
Option #1
Callable with:
Argument #1
inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'inputs')
Argument #2
DType: bool
Value: True
Argument #3
DType: NoneType
Value: None
Option #2
Callable with:
Argument #1
inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'inputs')
Argument #2
DType: bool
Value: False
Argument #3
DType: NoneType
Value: None
Option #3
Callable with:
Argument #1
Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'Conv1_input')
Argument #2
DType: bool
Value: True
Argument #3
DType: NoneType
Value: None
Option #4
Callable with:
Argument #1
Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'Conv1_input')
Argument #2
DType: bool
Value: False
Argument #3
DType: NoneType
Value: None
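###Markdown
The same information is also available programmatically (a hedged alternative to the CLI): `tf.saved_model.load` returns an object whose `signatures` dictionary exposes the serving signatures and their input/output specs.
###Code
# Inspect the serving signature in Python instead of via saved_model_cli
loaded = tf.saved_model.load(export_path)
serving_fn = loaded.signatures['serving_default']
print('Inputs: ', serving_fn.structured_input_signature)
print('Outputs:', serving_fn.structured_outputs)
###Output
_____no_output_____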
###Markdown
That tells us a lot about our model! In this case we just trained our model, so we already know the inputs and outputs, but if we didn't this would be important information. It doesn't tell us everything (for example, that this is grayscale image data), but it's a great start.
Serve your model with TensorFlow Serving
**Warning: If you are NOT running this on Google Colab,** the following cells will install packages on the system with root access. If you want to run them in a local Jupyter notebook, please proceed with caution.
Add TensorFlow Serving distribution URI as a package source:
We're preparing to install TensorFlow Serving using [Aptitude](https://wiki.debian.org/Aptitude) since this Colab runs in a Debian environment. We'll add the `tensorflow-model-server` package to the list of packages that Aptitude knows about. Note that we're running as root.
Note: This example runs TensorFlow Serving natively, but [you can also run it in a Docker container](https://www.tensorflow.org/tfx/serving/docker), which is one of the easiest ways to get started using TensorFlow Serving.
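For reference only (not executed in this notebook), the Docker route just mentioned looks roughly like the commented sketch below; it assumes Docker is installed and mounts the directory that contains the version sub-directory (`MODEL_DIR`, i.e. `/tmp` in this notebook) into the container.
###Code
# Hedged sketch of the Docker alternative (kept commented out; not run here):
# docker run -p 8501:8501 \
#   --mount type=bind,source=/tmp,target=/models/fashion_model \
#   -e MODEL_NAME=fashion_model -t tensorflow/serving
###Output
_____no_output_____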
###Code
import sys
# We need sudo prefix if not on a Google Colab.
if 'google.colab' not in sys.modules:
SUDO_IF_NEEDED = 'sudo'
else:
SUDO_IF_NEEDED = ''
# This is the same as you would do from your command line, but without the [arch=amd64], and no sudo
# You would instead do:
# echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
# curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | {SUDO_IF_NEEDED} tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | {SUDO_IF_NEEDED} apt-key add -
!{SUDO_IF_NEEDED} apt update
###Output
deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2943 100 2943 0 0 36333 0 --:--:-- --:--:-- --:--:-- 36333
OK
Ign:1 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease
Ign:2 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease
Get:3 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease [3,626 B]
Get:4 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release [697 B]
Get:5 http://storage.googleapis.com/tensorflow-serving-apt stable InRelease [3,012 B]
Hit:6 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release
Get:7 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release.gpg [836 B]
Get:8 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ Packages [60.9 kB]
Get:9 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease [15.9 kB]
Get:10 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Hit:11 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:12 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server-universal amd64 Packages [347 B]
Get:13 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:14 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 Packages [340 B]
Hit:16 http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease
Ign:17 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages
Get:17 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages [798 kB]
Hit:18 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic InRelease
Get:19 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:20 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease [21.3 kB]
Get:21 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [2,152 kB]
Get:22 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main Sources [1,769 kB]
Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [452 kB]
Get:24 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [423 kB]
Get:25 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1,412 kB]
Get:26 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [2,183 kB]
Get:27 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2,583 kB]
Get:28 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main amd64 Packages [905 kB]
Get:29 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic/main amd64 Packages [41.5 kB]
Fetched 13.1 MB in 4s (3,143 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
79 packages can be upgraded. Run 'apt list --upgradable' to see them.
###Markdown
Install TensorFlow Serving
This is all you need - one command line!
###Code
!{SUDO_IF_NEEDED} apt-get install tensorflow-model-server
###Output
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
libnvidia-common-460
Use 'apt autoremove' to remove it.
The following NEW packages will be installed:
tensorflow-model-server
0 upgraded, 1 newly installed, 0 to remove and 79 not upgraded.
Need to get 326 MB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 tensorflow-model-server all 2.5.1 [326 MB]
Fetched 326 MB in 5s (67.1 MB/s)
Selecting previously unselected package tensorflow-model-server.
(Reading database ... 160706 files and directories currently installed.)
Preparing to unpack .../tensorflow-model-server_2.5.1_all.deb ...
Unpacking tensorflow-model-server (2.5.1) ...
Setting up tensorflow-model-server (2.5.1) ...
###Markdown
Start running TensorFlow Serving
This is where we start running TensorFlow Serving and load our model. After it loads we can start making inference requests using REST. There are some important parameters:
* `rest_api_port`: The port that you'll use for REST requests.
* `model_name`: You'll use this in the URL of REST requests. It can be anything.
* `model_base_path`: This is the path to the directory where you've saved your model.
###Code
os.environ["MODEL_DIR"] = MODEL_DIR
%%bash --bg
nohup tensorflow_model_server \
--rest_api_port=8501 \
--model_name=fashion_model \
--model_base_path="${MODEL_DIR}" >server.log 2>&1
!tail server.log
###Output
2021-05-26 03:35:17.481905: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: /tmp/1
2021-05-26 03:35:17.485005: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 41460 microseconds.
2021-05-26 03:35:17.485390: I tensorflow_serving/servables/tensorflow/saved_model_warmup_util.cc:59] No warmup data file found at /tmp/1/assets.extra/tf_serving_warmup_requests
2021-05-26 03:35:17.485524: I tensorflow_serving/core/loader_harness.cc:87] Successfully loaded servable version {name: fashion_model version: 1}
2021-05-26 03:35:17.486043: I tensorflow_serving/model_servers/server_core.cc:486] Finished adding/updating models
2021-05-26 03:35:17.486094: I tensorflow_serving/model_servers/server.cc:367] Profiler service is enabled
2021-05-26 03:35:17.486461: I tensorflow_serving/model_servers/server.cc:393] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
2021-05-26 03:35:17.486923: I tensorflow_serving/model_servers/server.cc:414] Exporting HTTP/REST API at:localhost:8501 ...
[evhttp_server.cc : 245] NET_LOG: Entering the event loop ...
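###Markdown
Before sending predictions, it can be useful to confirm the server has actually loaded the model. TensorFlow Serving's REST API exposes a model-status endpoint at `/v1/models/<model_name>`; a minimal check is sketched below (it assumes the `requests` package is available, which is the default on Colab and is also installed a couple of cells further down).
###Code
# Query TensorFlow Serving's model-status endpoint to confirm the servable loaded
import requests
status = requests.get('http://localhost:8501/v1/models/fashion_model')
print(status.json())
###Output
_____no_output_____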
###Markdown
Make a request to your model in TensorFlow Serving
First, let's take a look at a random example from our test data.
###Code
def show(idx, title):
plt.figure()
plt.imshow(test_images[idx].reshape(28,28))
plt.axis('off')
plt.title('\n\n{}'.format(title), fontdict={'size': 16})
import random
rando = random.randint(0,len(test_images)-1)
show(rando, 'An Example Image: {}'.format(class_names[test_labels[rando]]))
###Output
_____no_output_____
###Markdown
Ok, that looks interesting. How hard is that for you to recognize? Now let's create the JSON object for a batch of three inference requests, and see how well our model recognizes things:
###Code
import json
data = json.dumps({"signature_name": "serving_default", "instances": test_images[0:3].tolist()})
print('Data: {} ... {}'.format(data[:50], data[len(data)-52:]))
###Output
Data: {"signature_name": "serving_default", "instances": ... [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]]]}
###Markdown
Make REST requests
Newest version of the servable
We'll send a predict request as a POST to our server's REST endpoint, and pass it three examples. We'll ask our server to give us the latest version of our servable by not specifying a particular version.
###Code
# docs_infra: no_execute
!pip install -q requests
import requests
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
show(0, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
class_names[np.argmax(predictions[0])], np.argmax(predictions[0]), class_names[test_labels[0]], test_labels[0]))
###Output
_____no_output_____
###Markdown
A particular version of the servable
Now let's specify a particular version of our servable. Since we only have one, let's select version 1. We'll also look at all three results.
###Code
# docs_infra: no_execute
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model/versions/1:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
for i in range(0,3):
show(i, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
class_names[np.argmax(predictions[i])], np.argmax(predictions[i]), class_names[test_labels[i]], test_labels[i]))
###Output
_____no_output_____ |
CV_Project_Approach_2_Segmenting_Image.ipynb | ###Markdown
Importing the Required Libraries
###Code
import cv2 # for using computer vision related functions
import numpy as np # for numerical computations on 2D image array
import pandas as pd # for dataset preparation for deep learning libraries
import matplotlib.pyplot as plt # for displaying image and plotting graph
def gaussian_filter(img, mask_size = 5, sigma = 2):
    # build a grid of x/y offsets centred on the kernel midpoint
    offset = mask_size // 2
    x, y = np.meshgrid(range(-offset, offset + 1), range(-offset, offset + 1))
    # evaluate the 2D Gaussian on the grid and normalise so the weights sum to 1
    gauss_filter = np.exp(-((x ** 2 + y ** 2) / (2 * sigma ** 2)))
    gauss_filter /= gauss_filter.sum()
    # convolve the image with the kernel (ddepth = -1 keeps the input bit depth)
    return cv2.filter2D(src = img, ddepth = -1, kernel = gauss_filter)
img = cv2.imread("/content/drive/MyDrive/sem 8/CV/processed_shapes/shapes.png")
orig_img = img.copy()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
threshed_img = cv2.threshold(gaussian_filter(gray, mask_size = 5, sigma = 10), 0, 1, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
plt.imshow(img), plt.show();
plt.imshow(gray, cmap = 'gray'), plt.show();
plt.imshow(threshed_img, cmap = 'binary_r'), plt.show();
edges = cv2.Canny(threshed_img, 0.2, 0.8)
plt.imshow(edges, cmap = 'gray');
contours, hierarchy = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
blank = np.zeros(threshed_img.shape)
cv2.drawContours(blank, contours, -1, (255,0,0), 1)
plt.imshow(blank, cmap = 'binary');
# the different classes of our shapes
categories = ["circle", "square", "star", "triangle"]
# !pip install cPickle
import _pickle as cPickle
# load the gaussian model again
with open('/content/drive/MyDrive/sem 8/CV/processed_shapes/gauss-without-lda.pkl', 'rb') as fid:
clf_loaded = cPickle.load(fid)
# obtaining the bounding box, extracting and saving the ROI (region of interest)
font = cv2.FONT_HERSHEY_SIMPLEX
# org
org = (50, 50)
# fontScale
fontScale = 0.5
# Blue color in BGR
color = (255, 0, 0)
# Line thickness of 2 px
thickness = 2
ROI_number = 0
img = orig_img.copy()
for c in contours:
offset = 5
x,y,w,h = cv2.boundingRect(c)
x = x-offset
y = y-offset
w += 2*offset
h += 2*offset
cv2.rectangle(img, (x, y), (x + w, y + h), (36,255,12), 2)
ROI = cv2.resize(blank[y:y+h, x:x+w], (25,25), interpolation = cv2.INTER_AREA)
thres, ROI_thresh = cv2.threshold(ROI, 50, 255, cv2.THRESH_BINARY);
ROI_thresh = ROI_thresh/ROI_thresh.max()
pred = clf_loaded.predict([ROI_thresh.flatten()])
cv2.putText(img, categories[pred[0]], (x, y), font,
fontScale, color, thickness, cv2.LINE_AA)
plt.imshow(img);
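# Note (added): the UserWarnings printed below come from scikit-learn: the
# classifier was fitted on a pandas DataFrame (so it stored feature names),
# while predict() here receives a plain list. Assuming sklearn >= 1.0, passing
# pd.DataFrame([ROI_thresh.flatten()], columns=clf_loaded.feature_names_in_)
# instead (or refitting the model on .values) should silence the warning
# without changing the predictions.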
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/base.py:446: UserWarning: X does not have valid feature names, but GaussianNB was fitted with feature names
"X does not have valid feature names, but"
/usr/local/lib/python3.7/dist-packages/sklearn/base.py:446: UserWarning: X does not have valid feature names, but GaussianNB was fitted with feature names
"X does not have valid feature names, but"
/usr/local/lib/python3.7/dist-packages/sklearn/base.py:446: UserWarning: X does not have valid feature names, but GaussianNB was fitted with feature names
"X does not have valid feature names, but"
/usr/local/lib/python3.7/dist-packages/sklearn/base.py:446: UserWarning: X does not have valid feature names, but GaussianNB was fitted with feature names
"X does not have valid feature names, but"
|
week08/spring2019_prep_notebook_week07_part1.ipynb | ###Markdown
Activity 1: Basic Maps with cartopy
###Code
# import our usual things
%matplotlib inline
import cartopy
import pandas as pd
import matplotlib.pyplot as plt
import ipywidgets
# lets make our maps a bit bigger for now
plt.rcParams["figure.dpi"] = 300
###Output
_____no_output_____
###Markdown
* this is grabbing a "shape" file for frequently used data
* there are a bunch of specific files you can grab here: https://github.com/nvkelso/natural-earth-vector/tree/master/zips
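As an aside (a minimal sketch, independent of the earthquake data used below), recent cartopy versions can pull Natural Earth layers directly through `cartopy.feature`, so you rarely need to download the zip files by hand:
###Code
# Minimal sketch: coastlines plus Natural Earth borders and state/province lines
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
fig = plt.figure()
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
ax.coastlines()
ax.add_feature(cfeature.BORDERS, linestyle=':')
states = cfeature.NaturalEarthFeature(category='cultural',
                                      name='admin_1_states_provinces_lines',
                                      scale='50m', facecolor='none')
ax.add_feature(states, edgecolor='gray')
plt.show()
###Output
_____no_output_____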
###Code
# ok, lets start thinking about how to link this data to
# the actual readings of each detector
# first, lets read in the detector data
seismic = pd.read_csv("/Users/jillnaiman/Downloads/data_tohoku_norm_transpose.csv",
header = None)
# lets upload the locations of each of these sensors
# during the earthquake
locations = pd.read_csv("/Users/jillnaiman/Downloads/location.txt", delimiter="\t",
header = None, names = ["longitude", "latitude", "empty1", "empty2"])
# we have 3 options: we can decrese the number of stations,
# or the number of time samples, or both
# for illustration purposes, lets do both
nstations = 300 # downsample to 300
ntimes = 1440 # factor of 10
import numpy as np
stationsIndex = np.random.choice(range(locations.shape[0]-1),
nstations, replace=False)
timesIndex = np.random.choice(range(seismic.shape[0]-1),
ntimes, replace=False)
# sort each
stationsIndex.sort()
timesIndex.sort()
locations2 = locations.loc[stationsIndex]
seismic2 = seismic.loc[timesIndex,stationsIndex]
seismic2.shape, locations2.shape
# sweet
# note, we can also do the above plot with bqplot as well:
import bqplot
# scales
x_sc = bqplot.LinearScale()
y_sc = bqplot.LinearScale()
# marks
lines = bqplot.Lines(x = seismic2.index.values,
y = seismic2.iloc[:,0],
scales = {'x': x_sc, 'y': y_sc})
# axes
x_ax = bqplot.Axis(scale = x_sc)
y_ax = bqplot.Axis(scale = y_sc, orientation = 'vertical')
# combine into figure
fig = bqplot.Figure(marks = [lines], axes = [x_ax, y_ax])
# create our slider using ipywidgets
slider = ipywidgets.IntSlider(min=0, max=nstations-1)
y_sc.min = -1.0
y_sc.max = 1.0
# create a linking function for slider & plot
def update_slider(event):
lines.y = seismic2.iloc[:,event['new']]
slider.observe(update_slider, 'value')
display(ipywidgets.VBox([slider, fig]))
# note that this is much more responsive now
# than we we did this ourselves
# bqplots ftw
# # ok, so we are now super into linking THING A with THING B
# # so lets link our sesmic data with its location on the map
# # we can do this with cartopy & matplotlib
# @ipywidgets.interact(station = (0, nstations, 1),
# t = (0, ntimes, 1))
# def plot(station = 0, t = 0):
# fig = plt.figure(figsize=(10, 10))
# ax = fig.add_subplot(211,
# projection = cartopy.crs.LambertCylindrical())
# colors = seismic2.iloc[t]
# ax.scatter(locations2["longitude"],
# locations2["latitude"],
# transform = cartopy.crs.PlateCarree(),
# c = colors)
# ax.coastlines()
# ax = fig.add_subplot(212)
# ax.plot(seismic2.index.values, seismic2.iloc[:,station])
# ax.set_ylim(-1, 1)
###Output
_____no_output_____
###Markdown
Activity 3: Info viz maps with bqplot
###Code
# with bqplot
map_mark = bqplot.Map(scales={'projection': bqplot.AlbersUSA()})
fig = bqplot.Figure(marks=[map_mark], title='Basic Map Example')
fig
# can make a statemap instead
#(1)
sc_geo = bqplot.AlbersUSA()
state_data = bqplot.topo_load('map_data/USStatesMap.json')
# (2)
def_tt = bqplot.Tooltip(fields=['id', 'name'])
states_map = bqplot.Map(map_data=state_data,
scales={'projection':sc_geo},
tooltip=def_tt)
# (2) grab interactions
states_map.interactions = {'click': 'select', 'hover': 'tooltip'}
# (3) grab data directly from map
# we could also grab from the state_data itself
from states_utils import get_ids_and_names
ids, state_names = get_ids_and_names(states_map)
# lets make into arrays for ease
#state_names =np.array(state_names)
#ids = np.array(ids)
state_names, ids
# into arrays
# (4) data
def get_data_value(change):
if change['owner'].selected is not None:
for i,s in enumerate(change['owner'].selected):
print(state_names[s == ids])
states_map.observe(get_data_value,'selected')
# (1)
fig=bqplot.Figure(marks=[states_map],
title='US States Map Example',
fig_margin={'top': 0, 'bottom': 0, 'left': 0, 'right': 0}) # try w/o first and see
fig
###Output
_____no_output_____
###Markdown
Adding in some data to link to our usa map
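One aside before the code (a hedged alternative, not used below): the export values in this CSV are strings with thousands separators, which is why the selection callback further down strips the commas by hand. Passing `thousands=','` to `pandas.read_csv` would parse them as numbers up front:
###Code
# Hedged alternative: let pandas strip the thousands separators while reading
comm_numeric = pd.read_csv('/Users/jillnaiman/Downloads/total_export.csv', thousands=',')
comm_numeric.head()
###Output
_____no_output_____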
###Code
# lets add in some exprot data
comm = pd.read_csv('/Users/jillnaiman/Downloads/total_export.csv')
comm.loc[comm['State'] == 'Alabama'].values
# we note that these are formatted as strings - this means we'll have to
# do some formatting when we plot data
# also, note that the state name is the first column and not a number
# we'll also have to take care of this too
# grab years
years = list(comm.columns.values)
years = np.array(years[1:]) # get rid of state
# as numbers
years = years.astype('int')
years
sc_geo = bqplot.AlbersUSA()
state_data = bqplot.topo_load('map_data/USStatesMap.json')
def_tt = bqplot.Tooltip(fields=['id', 'name'])
states_map = bqplot.Map(map_data=state_data, scales={'projection':sc_geo}, tooltip=def_tt)
states_map.interactions = {'click': 'select', 'hover': 'tooltip'}
fig=bqplot.Figure(marks=[states_map], title='US States Map Example',
fig_margin={'top': 0, 'bottom': 0, 'left': 0, 'right': 0})
# lets also make a line plot
# second, the lineplot
x_scl = bqplot.LinearScale()
y_scl = bqplot.LinearScale()
ax_xcl = bqplot.Axis(label='Year', scale=x_scl)
ax_ycl = bqplot.Axis(label='Total Export from State NA',
scale=y_scl,
orientation='vertical', side='left')
lines = bqplot.Lines(x = years, y = np.zeros(len(years)),
scales = {'x': x_scl, 'y': y_scl})
#print(lines)
fig_lines = bqplot.Figure(marks = [lines],
axes = [ax_ycl, ax_xcl],)
# let do something additive for all states selected
def get_data_value(change):
exports = np.zeros(len(years))
snames = ''
if change['owner'].selected is not None:
for i,s in enumerate(change['owner'].selected):
sn = state_names[s == ids][0]
snames += sn + ', '
# because of formatting, things are in arrays hence [0]
# also, take out state name hence [1:]
# NOTE! BQPLOT has misspelled massachussetts!
if sn == 'Massachusetts': sn = 'Massachussetts'
exports_in=comm.loc[comm['State'] == sn].values[0][1:]
# there are ","'s in exports we gotta take out
exports_in = np.array([exports_in[i].replace(',','') for i in range(len(exports_in))])
exports = np.add(exports, exports_in.astype('float64'))
lines.y = exports
ax_ycl.label='Total Export from ' + snames
else:
lines.y = np.zeros(len(exports))
ax_ycl.label='Total Export from NA'
states_map.observe(get_data_value,'selected')
# some formatting for vertical
#fig_lines.layout.max_height='250px'
#fig_lines.layout.min_width='800px'
#fig.layout.min_width='800px'
#ipywidgets.VBox([fig_lines,fig])
ipywidgets.HBox([fig,fig_lines])
sn = 'Massachusetts'
sn = 'Massachussetts'
print(comm[comm['State'] == sn])
comm
state_names
comm['State'].index
import pandas as pd
buildings = pd.read_csv("/Users/jillnaiman/Downloads/building_inventory.csv",
na_values = {'Year Acquired': 0, 'Year Constructed': 0, 'Square Footage': 0})
import numpy as np
nsamples =100
dsm = np.random.choice(range(len(buildings)-1),nsamples,replace=False)
dsm
buildingsDS = buildings.loc[dsm]
len(buildingsDS)
import bqplot
x_scl = bqplot.LinearScale()
y_scl = bqplot.LinearScale()
cd = buildings['Congress Dist']
an = buildings['Agency Name']
sf = buildings['Square Footage']
i,j = 0,0
cdNames = cd.unique()
anNames = an.unique()
mask = (cd.values == cdNames[i]) & (an.values == anNames[j])
ya = buildings['Year Acquired'][mask]
yaNames = ya.unique()
sfNames2 = [sf[mask][ya == yaNames[b]].sum() for b in range(len(yaNames)) ]
sfNames2 = np.array(sfNames2)
yfLine = bqplot.Lines(x=yaNames,
y=sfNames2,
colors=['Blue'],
scales={'x': x_scl, 'y': y_scl})
fig = bqplot.Figure(marks=[yfLine])
fig
###Output
_____no_output_____ |
d210127_cr_calculators/resolution_definition.ipynb | ###Markdown
Intensity Resolution Definition
###Code
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
def calculate_requirement_curve(pe, nsb, window_width, electronic_noise, miscal, enf):
"""
Equation for calculating the Goal and Requirement curves, as used in the CTA requirement
Parameters
----------
pe : ndarray
Number of photoelectrons (p.e.)
nsb : float
NSB rate (MHz)
window_width : float
Integration window width (ns)
electronic_noise : float
Charge Stddev due to integrated electronic noise (p.e.)
miscal : float
Multiplicative errors of the gain.
enf : float
Excess noise factor.
"""
var_noise = nsb * window_width + electronic_noise**2
var_enf = (1 + enf)**2 * pe
var_miscal = (miscal * pe)**2
sigma_q = np.sqrt(var_noise + var_enf + var_miscal)
return sigma_q / pe
def calculate_requirement_nominal_nsb(pe):
return calculate_requirement_curve(
pe,
nsb=0.125,
window_width=15,
electronic_noise=0.87,
miscal=0.1,
enf=0.2,
)
def calculate_requirement_high_nsb(pe):
return calculate_requirement_curve(
pe,
nsb=1.25,
window_width=15,
electronic_noise=0.87,
miscal=0.1,
enf=0.2,
)
x, y = np.loadtxt("IntensityRes.txt", unpack=True)
plt.plot(x, y)
plt.xscale("log")
plt.yscale("log")
ph = x
requirement_pde = 0.25
pe = ph * requirement_pde
req_nominal_nsb = calculate_requirement_nominal_nsb(pe)
plt.plot(ph, req_nominal_nsb)
np.testing.assert_allclose(y, req_nominal_nsb, rtol=1e-5)
x, y = np.loadtxt("IntensityResHighNSB.txt", unpack=True)
plt.plot(x, y)
plt.xscale("log")
plt.yscale("log")
ph = x
requirement_pde = 0.25
pe = ph * requirement_pde
req_high_nsb = calculate_requirement_high_nsb(pe)
plt.plot(ph, req_high_nsb)
np.testing.assert_allclose(y, req_high_nsb, rtol=1e-5)
###Output
_____no_output_____
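###Markdown
For reference, the requirement curve implemented in `calculate_requirement_curve` above can be written (restating the code, with $\nu$ the NSB rate, $W$ the window width, $\sigma_{el}$ the electronic noise, $\mathrm{ENF}$ the excess noise factor and $m$ the miscalibration fraction) as
$$\frac{\sigma_Q}{Q} = \frac{1}{Q}\sqrt{\nu W + \sigma_{el}^2 + (1 + \mathrm{ENF})^2\, Q + (m\, Q)^2}.$$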
###Markdown
The underlying formula for the requirement curves is demonstrated here. It defines the Intensity Resolution at an intensity $I$ as the Charge Resolution at a charge of $I \times \epsilon_{PDE}$, where a nominal PDE of $\epsilon_{PDE} = 0.25$ is used. There are two equivalent formulae which therefore describe the Fractional Intensity Resolution:
$$\frac{\sigma_{I_T}}{I_T} = \frac{1}{I_T} \sqrt{\frac{\sum_{i=0}^N (I_{M_i} - I_T)^2}{N}}$$
where $I_{M_i}$ are individual measurements of the intensity in photons of a true intensity $I_T$, and
$$\frac{\sigma_{I_T=\frac{Q_T}{\epsilon_{PDE}}}}{Q_T} = \frac{1}{Q_T} \sqrt{\frac{\sum_{i=0}^N (Q_{M_i} - Q_T)^2}{N}}$$
where $Q_{M_i}$ are individual measurements of the charge (p.e.) of a true charge $Q_T$. The equivalence of the two definitions is demonstrated below:
###Code
amplitude_pe = 50
charge_pe = np.random.normal(amplitude_pe, 10, 100000)
res_pe = charge_pe.std()
amplitude_ph = amplitude_pe / requirement_pde
charge_ph = charge_pe / requirement_pde
res_ph = charge_ph.std()
print(f"Charge Resolution at Q = {amplitude_pe} p.e. is {res_pe/amplitude_pe:.2f}")
print(f"Intensity Resolution at I={amplitude_ph} photons using Equation 1 is {res_ph / amplitude_ph:.2f}")
print(f"Intensity Resolution at I={amplitude_ph} photons using Equation 2 is {res_pe / amplitude_pe:.2f}")
###Output
Charge Resolution at Q = 50 p.e. is 0.20
Intensity Resolution at I=200.0 photons using Equation 1 is 0.20
Intensity Resolution at I=200.0 photons using Equation 2 is 0.20
|
6. R - Lists.ipynb | ###Markdown
Lists are R objects which can contain elements of different types, such as numbers, strings, vectors, and even another list. A list can also contain a matrix or a function as one of its elements. A list is created using the list() function.
**Creating a List**
The following example creates a list containing strings, numbers, a vector and a logical value.
###Code
list_data <- list("Red", "Green", c(21,32,11), TRUE, 51.23, 119.1)
list_data
###Output
_____no_output_____
###Markdown
**Naming List Elements**
The list elements can be given names and they can be accessed using these names.
###Code
# Create a list containing a vector, a matrix and a list.
list_data <- list(c("Jan","Feb","Mar"), matrix(c(3,9,5,1,-2,8), nrow = 2),
   list("green",12.3))
list_data
# Give names to the elements in the list.
names(list_data) <- c("1st Quarter", "A_Matrix", "A Inner list")
# Show the list.
list_data
###Output
_____no_output_____
###Markdown
**Accessing List Elements**
Elements of the list can be accessed by their index in the list. In the case of named lists, they can also be accessed using the names.
###Code
# Create a list containing a vector, a matrix and a list.
list_data <- list(c("Jan","Feb","Mar"), matrix(c(3,9,5,1,-2,8), nrow = 2),list("green",12.3))
list_data
# Give names to the elements in the list.
names(list_data) <- c("1st Quarter", "A_Matrix", "A Inner list")
# Access the first element of the list.
print(list_data[1])
# Access the third element. As it is also a list, all its elements will be printed.
print(list_data[3])
# Access the list element using the name of the element.
print(list_data$A_Matrix)
###Output
[,1] [,2] [,3]
[1,] 3 5 -2
[2,] 9 1 8
###Markdown
**Manipulating List Elements**
We can add, delete and update list elements as shown below. We can add and delete elements only at the end of a list, but we can update any element.
###Code
# Create a list containing a vector, a matrix and a list.
list_data <- list(c("Jan","Feb","Mar"), matrix(c(3,9,5,1,-2,8), nrow = 2),
list("green",12.3))
list_data
# Give names to the elements in the list.
names(list_data) <- c("1st Quarter", "A_Matrix", "A Inner list")
# Add element at the end of the list.
list_data[4] <- "New element"
print(list_data[4])
# Remove the last element.
list_data[4] <- NULL
# Print the 4th Element.
print(list_data[4])
# Update the 3rd Element.
list_data[3] <- "updated element"
print(list_data[3])
###Output
$`A Inner list`
[1] "updated element"
###Markdown
**Merging Lists**
You can merge many lists into one list by concatenating them with the c() function, as shown below.
###Code
# Create two lists.
list1 <- list(1,2,3)
list2 <- list("Sun","Mon","Tue")
# Merge the two lists using the c() function.
merged.list <- c(list1,list2)
# Print the merged list.
merged.list
###Output
_____no_output_____
###Markdown
**Converting a List to a Vector**
A list can be converted to a vector so that its elements can be used for further manipulation. All the arithmetic operations on vectors can be applied after the list is converted into a vector. To do this conversion, we use the unlist() function. It takes the list as input and produces a vector.
###Code
# Create lists.
list1 <- list(1:5)
print(list1)
list2 <-list(10:14)
print(list2)
# Convert the lists to vectors.
v1 <- unlist(list1)
v2 <- unlist(list2)
print(v1)
print(v2)
# Now add the vectors
result <- v1+v2
print(result)
###Output
[1] 11 13 15 17 19
|