path (string, lengths 7–265) | concatenated_notebook (string, lengths 46–17M)
---|---
notebooks/ParametrizeYourQuery.ipynb | ###Markdown
Kqlmagic - __parametrization__ features***Explains how to embed python values in kql queries****** Make sure that you have the latest version of Kqlmagic. Download Kqlmagic from github and install/update (if the latest version is already installed you can skip this step)
###Code
#!pip install git+git://github.com/Microsoft/jupyter-Kqlmagic.git
###Output
_____no_output_____
###Markdown
Add Kqlmagic to notebook magics
###Code
#%pushd C:\My Projects\jupyter-Kqlmagic-microsoft\src
%reload_ext kql
#%popd
###Output
_____no_output_____
###Markdown
Authenticate to get access to data
###Code
%kql kusto://code().cluster('help').database('Samples')
###Output
_____no_output_____
###Markdown
Use python user namespace as source of parameters- prefix query with kql let statements to parametrize the query
###Code
my_limit = 10
my_not_state = 'TEXAS'
%%kql
let _limit_ = my_limit;
let _not_val_ = my_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
- *Note - all parameters have to be specified in the let statements* - *Note - the following parameter python types are supported: int, float, str, datetime, timedelta, dict, list and tuple* - *Note - python type timedelta is converted to timespan* - *Note - python types dict, list and tuple are converted to dynamic* - *Note - python value None is converted to null* Use python dictionary as source of parameters- set option -params_dict with the name of a python variable that refers to the dictionary- prefix query with kql let statements to parametrize the query
###Code
p_dict = {'p_limit': 20, 'p_not_state': 'IOWA'}
%%kql -params_dict p_dict
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
get query string- shows the original query, as in the input cell
###Code
_kql_raw_result_.query
###Output
_____no_output_____
###Markdown
get parametrized query string- shows the parametrized query that was submitted to Kusto
###Code
_kql_raw_result_.parametrized_query
###Output
_____no_output_____
###Markdown
- *Note - additional let statements were added to the original query, one let statement for each parameter* parameters dictionary is modified
###Code
p_dict = {'p_limit': 5, 'p_not_state': 'IOWA'}
###Output
_____no_output_____
###Markdown
refresh uses original parameters- the same parameter values are used
###Code
_kql_raw_result_.refresh()
###Output
_____no_output_____
###Markdown
- *Note - the refresh method uses the original parameter values, as they were set* submit uses the current python values as parameters- a new query is created and parametrized with the current python values
###Code
_kql_raw_result_.submit()
###Output
_____no_output_____
###Markdown
Kqlmagic - __parametrization__ features***Explains how to embed python values in kql queries****** Make sure that you have the latest version of Kqlmagic. Download Kqlmagic from PyPI and install/update (if the latest version is already installed you can skip this step)
###Code
#!pip install Kqlmagic --no-cache-dir --upgrade
###Output
_____no_output_____
###Markdown
Add Kqlmagic to notebook magics
###Code
%reload_ext Kqlmagic
###Output
_____no_output_____
###Markdown
Authenticate to get access to data
###Code
%kql azure-data-explorer://code;cluster='help';database='Samples' // -tryz-azcli_login
###Output
_____no_output_____
###Markdown
Use python user namespace as source of parameters- prefix query with **kql let statements** to parametrize the query- be aware of the mapping (see the illustrative sketch after the next code cell): - int -> long - float -> real - str -> string - bool -> bool - datetime -> datetime - timedelta -> timespan - dict, list, set, tuple -> dynamic (only if can be serialized to json) - **pandas dataframe -> view table** - None -> null - unknown, str(value) == 'nan' -> real(null) - unknown, str(value) == 'NaT' -> datetime(null) - unknown, str(value) == 'nat' -> time(null) - other -> string
###Code
from datetime import datetime, timedelta
my_limit = 10
my_not_state = 'TEXAS'
my_start_datetime = datetime(2007, 8, 29)
my_timespan = timedelta(days=100)
my_dict = {"a":1}
my_list = ["x", "y", "z"]
my_tuple = ("t", 44, my_limit)
my_set = {6,7,8}
%%kql
let _dict_ = my_dict;
let _list_ = my_list;
let _tuple_ = my_tuple;
let _set_ = my_set;
let _start_time_ = my_start_datetime;
let _timespan_ = my_timespan;
let _limit_ = my_limit;
let _not_val_ = my_not_state;
StormEvents
| where StartTime >= _start_time_
| where EndTime <= _start_time_ + _timespan_
| where State != _not_val_
| summarize count() by State
| extend d = _dict_
| extend l = _list_
| extend t = _tuple_
| extend s = _set_
| sort by count_
| limit _limit_
###Output
_____no_output_____
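The mapping listed above can be pictured with a small Python sketch. This is purely illustrative and not Kqlmagic's actual implementation; the helper name and the exact literal formats are assumptions for illustration only.

```python
# Illustrative sketch only -- not Kqlmagic internals.
# Shows roughly how Python values could be rendered as KQL literals
# for the generated let statements, following the mapping above.
import json
from datetime import datetime, timedelta

def to_kql_literal(value):
    if value is None:
        return "null"
    if isinstance(value, bool):                      # bool before int: bool is an int subclass
        return "true" if value else "false"
    if isinstance(value, (int, float)):              # int -> long, float -> real
        return repr(value)
    if isinstance(value, str):                       # str -> string
        return "'{}'".format(value)
    if isinstance(value, datetime):                  # datetime -> datetime
        return "datetime({})".format(value.isoformat())
    if isinstance(value, timedelta):                 # timedelta -> timespan
        return "totimespan('{}')".format(value)
    if isinstance(value, (dict, list, tuple, set)):  # -> dynamic, if JSON-serializable
        as_json = json.dumps(list(value) if isinstance(value, (tuple, set)) else value)
        return "dynamic({})".format(as_json)
    return "'{}'".format(value)                      # other -> string

print(to_kql_literal(10))                  # 10
print(to_kql_literal("TEXAS"))             # 'TEXAS'
print(to_kql_literal(timedelta(days=100)))
print(to_kql_literal({"a": 1}))            # dynamic({"a": 1})
```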
###Markdown
Dataframe parameter as a kql table- prefix query with **kql let statement** that assigns a kql var to the dataframe- be aware of the mapping of the dataframe to kql table column types: - int8,int16,int32,int64,uint8,uint16,uint32,uint64 -> long - float16,float32,float64 -> real - character -> string - bytes -> string - void -> string - category -> string - datetime,datetime64,datetime64[ns],datetime64[ns,tz] -> datetime - timedelta,timedelta64,timedelta64[ns] -> timespan - bool -> bool - record -> dynamic - complex64,complex128 -> dynamic([real, imag]) - object -> if all objects of type: - dict,list,tuple,set -> dynamic (only if can be serialized to json) - bool or nan -> bool - float or nan -> float - int or nan -> long - datetime or 'NaT' -> datetime - timedelta or 'NaT' -> timespan - other -> string
###Code
my_df =_kql_raw_result_.to_dataframe()
my_df
%%kql
let _my_table_ = my_df;
_my_table_ | project State, s, t | limit 3
_kql_raw_result_.parametrized_query
###Output
_____no_output_____
###Markdown
Parametrize the whole query string
###Code
sort_col = 'count_'
my_query = """StormEvents
| where State != 'OHIO'
| summarize count() by State
| sort by {0}
| limit 5""".format(sort_col)
%kql Samples@help -query my_query
###Output
_____no_output_____
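For reference, the same query string can also be built with an f-string; this is plain Python and not specific to Kqlmagic:

```python
# equivalent to the str.format version above
sort_col = 'count_'
my_query = f"""StormEvents
| where State != 'OHIO'
| summarize count() by State
| sort by {sort_col}
| limit 5"""
```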
###Markdown
Use python dictionary as source of parameters- set option -params_dict with the name of a python variable that refers to the dictionary- prefix query with kql let statements to parametrize the query
###Code
p_dict = {'p_limit':20, 'p_not_state':'IOWA'}
%%kql
-params_dict p_dict
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
Use python dictionary expression as source of parameters- set option -params_dict with a dictionary string (python format)- prefix query with kql let statements to parametrize the query- **make sure that the dictionary expression is without spaces**
###Code
%%kql
-params_dict {'p_limit':5,'p_not_state':'OHIO'}
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
get query string- shows the original query, as in the input cell
###Code
_kql_raw_result_.query
###Output
_____no_output_____
###Markdown
get parametrized query string- shows the parametrized query that was submitted to Kusto
###Code
_kql_raw_result_.parametrized_query
###Output
_____no_output_____
###Markdown
- *Note - additional let statements were added to the original query, one let statement for each parameter*
###Code
p_dict = {'p_limit':5,'p_not_state':'OHIO'}
%%kql Samples@help
-displayid='True'
-params_dict p_dict
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
parameters dictionary is modified
###Code
p_dict = {'p_limit': 5, 'p_not_state': 'IOWA'}
###Output
_____no_output_____
###Markdown
refresh uses original parameters- the same parameter values are used
###Code
_kql_raw_result_.refresh(override_vars={'p_limit':2})
###Output
_____no_output_____
###Markdown
- *Note - the refresh method uses the original parameter values, as they were set* submit uses the current python values as parameters- a new query is created and parametrized with the current python values
###Code
_kql_raw_result_.submit()
###Output
_____no_output_____
###Markdown
- *Note - the submit method creates a new query and parametrizes it with the current parameter values* submit can also override original query parameters- set the override_vars parameter with a dictionary of var/value that will override the source for the query parameters
###Code
_kql_raw_result_.submit(override_vars={'p_limit': 2})
###Output
_____no_output_____
###Markdown
- *Note - the override_vars dictionary has higher priority than the original query parameters vars dictionary.*
###Code
_kql_raw_result_.refresh()
###Output
_____no_output_____
###Markdown
submit can also override original query options- set the override_options parameter with a dictionary of option/value that will override the current query options
###Code
_kql_raw_result_.submit(override_vars={'p_limit': 3},override_options={'show_query': True})
_kql_raw_result_.refresh()
###Output
_____no_output_____
###Markdown
show parametrized query with results- set option -show_query (abbreviation -sq)
###Code
%%kql
-params_dict p_dict -sq
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
Parametrize option- all options can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression- beware, the python expression must not have spaces !!! - valid expression examples: ```my_var```, ```str(type(x))```, ```[a,1,2]``` - invalid expressions: ```str( type ( x ) )```, ```[a, 1, 2]```
###Code
table_package = 'pandas'
my_popup_state = True
%%kql -tp=table_package -pw=my_popup_state -f=table_package!='pandas'
StormEvents
| where State != 'OHIO'
| summarize count() by State
| sort by count_
| limit 5
###Output
_____no_output_____
###Markdown
Parametrize commands- all commands can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression.- **note**, if instead of the python expression, you specify a variable that starts with $, it will be retrieved from the environment variables.- **beware**, the python expression must not have spaces !!!
###Code
my_topic = "kql"
%kql --help my_topic
###Output
_____no_output_____
###Markdown
Parametrize connection string- all values in the connection string can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression- **note**, if you don't specify the credential's secret you will be prompted.- **note**, if instead of the python expression, you specify a variable that starts with $, it will be retrieved from the environment variables.- beware, the python expression must not have spaces !!!
###Code
my_appid = "DEMO_APP"
my_appkey = "DEMO_KEY"
%kql appinsights://appid=my_appid;appkey=my_appkey
###Output
_____no_output_____
###Markdown
Parametrize the whole connection string
###Code
my_connection_str = """
loganalytics://workspace='DEMO_WORKSPACE';appkey='DEMO_KEY';alias='myworkspace'
"""
%kql -conn=my_connection_str
###Output
_____no_output_____
###Markdown
Kqlmagic - __parametrization__ features***Explains how to embed python values in kql queries****** Make sure that you have the latest version of Kqlmagic. Download Kqlmagic from PyPI and install/update (if the latest version is already installed you can skip this step)
###Code
#!pip install Kqlmagic --no-cache-dir --upgrade
###Output
_____no_output_____
###Markdown
Add Kqlmagic to notebook magics
###Code
#%pushd C:\My Projects\jupyter-Kqlmagic-microsoft\azure
%reload_ext Kqlmagic
#%popd
###Output
_____no_output_____
###Markdown
Authenticate to get access to data
###Code
%kql azure-data-explorer://code;cluster='help';database='Samples'
###Output
_____no_output_____
###Markdown
Use python user namespace as source of parameters- prefix query with **kql let statements** to parametrize the query- be aware of the mapping: - int -> long - float -> real - str -> string - bool -> bool - datetime -> datetime - timedelta -> timespan - dict, list, set, tuple -> dynamic (only if can be serialized to json) - **pandas dataframe -> view table** - None -> null - unknown, str(value) == 'nan' -> real(null) - unknown, str(value) == 'NaT' -> datetime(null) - unknown, str(value) == 'nat' -> time(null) - other -> string
###Code
from datetime import datetime, timedelta
my_limit = 10
my_not_state = 'TEXAS'
my_start_datetime = datetime(2007, 8, 29)
my_timespan = timedelta(days=100)
my_dict = {"a":1}
my_list = ["x", "y", "z"]
my_tuple = ("t", 44, my_limit)
my_set = {6,7,8}
%%kql
let _dict_ = my_dict;
let _list_ = my_list;
let _tuple_ = my_tuple;
let _set_ = my_set;
let _start_time_ = my_start_datetime;
let _timespan_ = my_timespan;
let _limit_ = my_limit;
let _not_val_ = my_not_state;
StormEvents
| where StartTime >= _start_time_
| where EndTime <= _start_time_ + _timespan_
| where State != _not_val_
| summarize count() by State
| extend d = _dict_
| extend l = _list_
| extend t = _tuple_
| extend s = _set_
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
Dataframe parameter as a kql table- prefix query with **kql let statement** that assigns a kql var to the dataframe- be aware of the mapping of the dataframe to kql table column types: - int8,int16,int32,int64,uint8,uint16,uint32,uint64 -> long - float16,float32,float64 -> real - character -> string - bytes -> string - void -> string - category -> string - datetime,datetime64,datetime64[ns],datetime64[ns,tz] -> datetime - timedelta,timedelta64,timedelta64[ns] -> timespan - bool -> bool - record -> dynamic - complex64,complex128 -> dynamic([real, imag]) - object -> if all objects of type: - dict,list,tuple,set -> dynamic (only if can be serialized to json) - bool or nan -> bool - float or nan -> float - int or nan -> long - datetime or 'NaT' -> datetime - timedelta or 'NaT' -> timespan - other -> string
###Code
my_df =_kql_raw_result_.to_dataframe()
my_df
%%kql
let _my_table_ = my_df;
_my_table_ | project State, s, t | limit 3
_kql_raw_result_.parametrized_query
###Output
_____no_output_____
###Markdown
Parametrize the whole query string
###Code
sort_col = 'count_'
my_query = """StormEvents
| where State != 'OHIO'
| summarize count() by State
| sort by {0}
| limit 5""".format(sort_col)
%kql -query my_query
###Output
_____no_output_____
###Markdown
Use python dictionary as source of parameters- set option -params_dict with the name of a python variable that refers to the dictionary- prefix query with kql let statements to parametrize the query
###Code
p_dict = {'p_limit':20, 'p_not_state':'IOWA'}
%%kql
-params_dict p_dict
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
Use python dictionary expression as source of parameters- set option -params_dict with a dictionary string (python format)- prefix query with kql let statements to parametrize the query- **make sure that the dictionary expression is without spaces**
###Code
%%kql
-params_dict {'p_limit':5,'p_not_state':'OHIO'}
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
get query string- shows the original query, as in the input cell
###Code
_kql_raw_result_.query
###Output
_____no_output_____
###Markdown
get parametrized query string- shows the parametrized query that was submitted to Kusto
###Code
_kql_raw_result_.parametrized_query
###Output
_____no_output_____
###Markdown
- *Note - additional let statements were added to the original query, one let statement for each parameter*
###Code
p_dict = {'p_limit':5,'p_not_state':'OHIO'}
%%kql
-params_dict p_dict
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
parameters dictionary is modified
###Code
p_dict = {'p_limit': 5, 'p_not_state': 'IOWA'}
###Output
_____no_output_____
###Markdown
refresh uses original parameters- the same parameter values are used
###Code
_kql_raw_result_.refresh()
###Output
_____no_output_____
###Markdown
- *Note - the refresh method uses the original parameter values, as they were set* submit uses the current python values as parameters- a new query is created and parametrized with the current python values
###Code
_kql_raw_result_.submit()
###Output
_____no_output_____
###Markdown
- *Note - the submit method creates a new query and parametrizes it with the current parameter values* Parametrize option- all options can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression- beware, the python expression must not have spaces !!! - valid expression examples: ```my_var```, ```str(type(x))```, ```[a,1,2]``` - invalid expressions: ```str( type ( x ) )```, ```[a, 1, 2]```
###Code
table_package = 'pandas'
my_popup_state = True
%%kql -tp=table_package -pw=my_popup_state -f=table_package!='pandas'
StormEvents
| where State != 'OHIO'
| summarize count() by State
| sort by count_
| limit 5
###Output
_____no_output_____
###Markdown
Parametrize commands- all commands can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression.- **note**, if instead of the python expression, you specify a variable that starts with $, it will be retrieved from the environment variables.- **beware**, the python expression must not have spaces !!!
###Code
my_topic = "kql"
%kql --help my_topic
###Output
_____no_output_____
###Markdown
Parametrize connection string- all values in the connection string can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression- **note**, if you don't specify the credential's secret you will be prompted.- **note**, if instead of the python expression, you specify a variable that starts with $, it will be retrieved from the environment variables.- beware, the python expression must not have spaces !!!
###Code
my_appid = "DEMO_APP"
my_appkey = "DEMO_KEY"
%kql appinsights://appid=my_appid;appkey=my_appkey
###Output
_____no_output_____
###Markdown
Parametrize the whole connection string
###Code
my_connection_str = """
loganalytics://workspace='DEMO_WORKSPACE';appkey='DEMO_KEY';alias='myworkspace'
"""
%kql -conn=my_connection_str
###Output
_____no_output_____
###Markdown
Kqlmagic - __parametrization__ features***Explains how to embed python values in kql queries****** Make sure that you have the latest version of Kqlmagic. Download Kqlmagic from PyPI and install/update (if the latest version is already installed you can skip this step)
###Code
#!pip install Kqlmagic --no-cache-dir --upgrade
###Output
_____no_output_____
###Markdown
Add Kqlmagic to notebook magics
###Code
%reload_ext Kqlmagic
###Output
_____no_output_____
###Markdown
Authenticate to get access to data
###Code
%kql azure-data-explorer://code;cluster='help';database='Samples'
###Output
_____no_output_____
###Markdown
Use python user namespace as source of parameters- prefix query with **kql let statements** to parametrize the query- be aware of the mapping: - int -> long - float -> real - str -> string - bool -> bool - datetime -> datetime - timedelta -> timespan - dict, list, set, tuple -> dynamic (only if can be serialized to json) - **pandas dataframe -> view table** - None -> null - unknown, str(value) == 'nan' -> real(null) - unknown, str(value) == 'NaT' -> datetime(null) - unknown, str(value) == 'nat' -> time(null) - other -> string
###Code
from datetime import datetime, timedelta
my_limit = 10
my_not_state = 'TEXAS'
my_start_datetime = datetime(2007, 8, 29)
my_timespan = timedelta(days=100)
my_dict = {"a":1}
my_list = ["x", "y", "z"]
my_tuple = ("t", 44, my_limit)
my_set = {6,7,8}
%%kql
let _dict_ = my_dict;
let _list_ = my_list;
let _tuple_ = my_tuple;
let _set_ = my_set;
let _start_time_ = my_start_datetime;
let _timespan_ = my_timespan;
let _limit_ = my_limit;
let _not_val_ = my_not_state;
StormEvents
| where StartTime >= _start_time_
| where EndTime <= _start_time_ + _timespan_
| where State != _not_val_
| summarize count() by State
| extend d = _dict_
| extend l = _list_
| extend t = _tuple_
| extend s = _set_
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
Dataframe parameter as a kql table- prefix query with **kql let statement** that assigns a kql var to the dataframe- be aware of the mapping of the dataframe to kql table column types: - int8,int16,int32,int64,uint8,uint16,uint32,uint64 -> long - float16,float32,float64 -> real - character -> string - bytes -> string - void -> string - category -> string - datetime,datetime64,datetime64[ns],datetime64[ns,tz] -> datetime - timedelta,timedelta64,timedelta64[ns] -> timespan - bool -> bool - record -> dynamic - complex64,complex128 -> dynamic([real, imag]) - object -> if all objects of type: - dict,list,tuple,set -> dynamic (only if can be serialized to json) - bool or nan -> bool - float or nan -> float - int or nan -> long - datetime or 'NaT' -> datetime - timedelta or 'NaT' -> timespan - other -> string
###Code
my_df =_kql_raw_result_.to_dataframe()
my_df
%%kql
let _my_table_ = my_df;
_my_table_ | project State, s, t | limit 3
_kql_raw_result_.parametrized_query
###Output
_____no_output_____
###Markdown
Parametrize the whole query string
###Code
sort_col = 'count_'
my_query = """StormEvents
| where State != 'OHIO'
| summarize count() by State
| sort by {0}
| limit 5""".format(sort_col)
%kql -query my_query
###Output
_____no_output_____
###Markdown
Use python dictionary as source of parameters- set option -params_dict with the name of a python variable that refers to the dictionary- prefix query with kql let statements to parametrize the query
###Code
p_dict = {'p_limit':20, 'p_not_state':'IOWA'}
%%kql
-params_dict p_dict
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
Use python dictionary expression as source of parameters- set option -params_dict with a dictionary string (python format)- prefix query with kql let statements to parametrize the query- **make sure that the dictionary expression is without spaces**
###Code
%%kql
-params_dict {'p_limit':5,'p_not_state':'OHIO'}
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
get query string- shows the original query, as in the input cell
###Code
_kql_raw_result_.query
###Output
_____no_output_____
###Markdown
get parametrized query string- shows the parametrized query that was submitted to Kusto
###Code
_kql_raw_result_.parametrized_query
###Output
_____no_output_____
###Markdown
- *Note - additional let statements were added to the original query, one let statement for each parameter*
###Code
p_dict = {'p_limit':5,'p_not_state':'OHIO'}
%%kql
-params_dict p_dict
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
parameters dictionary is modified
###Code
p_dict = {'p_limit': 5, 'p_not_state': 'IOWA'}
###Output
_____no_output_____
###Markdown
refresh uses original parameters- the same parameter values are used
###Code
_kql_raw_result_.refresh()
###Output
_____no_output_____
###Markdown
- *Note - the refresh method uses the original parameter values, as they were set* submit uses the current python values as parameters- a new query is created and parametrized with the current python values
###Code
_kql_raw_result_.submit()
###Output
_____no_output_____
###Markdown
- *Note - the submit method creates a new query and parametrizes it with the current parameter values* Parametrize option- all options can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression- beware, the python expression must not have spaces !!! - valid expression examples: ```my_var```, ```str(type(x))```, ```[a,1,2]``` - invalid expressions: ```str( type ( x ) )```, ```[a, 1, 2]```
###Code
table_package = 'pandas'
my_popup_state = True
%%kql -tp=table_package -pw=my_popup_state -f=table_package!='pandas'
StormEvents
| where State != 'OHIO'
| summarize count() by State
| sort by count_
| limit 5
###Output
_____no_output_____
###Markdown
Parametrize commands- all commands can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression.- **note**, if instead of the python expression, you specify a variable that starts with $, it will be retrieved from the environment variables.- **beware**, the python expression must not have spaces !!!
###Code
my_topic = "kql"
%kql --help my_topic
###Output
_____no_output_____
###Markdown
Parametrize connection string- all values in the connection string can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression- **note**, if you don't specify the credential's secret you will be prompted.- **note**, if instead of the python expression, you specify a variable that starts with $, it will be retrieved from the environment variables.- beware, the python expression must not have spaces !!!
###Code
my_appid = "DEMO_APP"
my_appkey = "DEMO_KEY"
%kql appinsights://appid=my_appid;appkey=my_appkey
###Output
_____no_output_____
###Markdown
Parametrize the whole connection string
###Code
my_connection_str = """
loganalytics://workspace='DEMO_WORKSPACE';appkey='DEMO_KEY';alias='myworkspace'
"""
%kql -conn=my_connection_str
###Output
_____no_output_____
###Markdown
Kqlmagic - __parametrization__ features***Explains how to embed python values in kql queries****** Make sure that you have the latest version of Kqlmagic. Download Kqlmagic from github and install/update (if the latest version is already installed you can skip this step)
###Code
#!pip install Kqlmagic --upgrade
###Output
_____no_output_____
###Markdown
Add Kqlmagic to notebook magics
###Code
#%pushd C:\My Projects\jupyter-Kqlmagic-microsoft\azure
%reload_ext Kqlmagic
#%popd
###Output
_____no_output_____
###Markdown
Authenticate to get access to data
###Code
%kql kusto://code().cluster('help').database('Samples')
###Output
_____no_output_____
###Markdown
Use python user namespace as source of parameters- prefix query with kql let statements to parametrize the query
###Code
my_limit = 10
my_not_state = 'TEXAS'
%%kql
let _limit_ = my_limit;
let _not_val_ = my_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
- *Note - all parameters have to be specified in the let statements* - *Note - the following parameter python types are supported: int, float, str, datetime, timedelta, dict, list and tuple* - *Note - python type timedelta is converted to timespan* - *Note - python types dict, list and tuple are converted to dynamic* - *Note - python value None is converted to null* Use python dictionary as source of parameters- set option -params_dict with the name of a python variable that refers to the dictionary- prefix query with kql let statements to parametrize the query
###Code
p_dict = {'p_limit': 20, 'p_not_state': 'IOWA'}
%%kql -params_dict p_dict
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
###Output
_____no_output_____
###Markdown
get query string- shows the original query, as in the input cell
###Code
_kql_raw_result_.query
###Output
_____no_output_____
###Markdown
get parametrized query string- shows the parametrized query that was submitted to Kusto
###Code
_kql_raw_result_.parametrized_query
###Output
_____no_output_____
###Markdown
- *Note - additional let statements were added to the original query, one let statement for each parameter* parameters dictionary is modified
###Code
p_dict = {'p_limit': 5, 'p_not_state': 'IOWA'}
###Output
_____no_output_____
###Markdown
refresh uses original parameters- the same parameter values are used
###Code
_kql_raw_result_.refresh()
###Output
_____no_output_____
###Markdown
- *Note - the refresh method uses the original parameter values, as they were set* submit uses the current python values as parameters- a new query is created and parametrized with the current python values
###Code
_kql_raw_result_.submit()
###Output
_____no_output_____ |
20200123tst.ipynb | ###Markdown
20200123 [Installing Keras-TensorFlow with Anaconda (Win10, CPU version)](https://qiita.com/dddmm/items/9e4d9e08a071cfa4be83) test
###Code
import numpy as np
import tensorflow as tf
hello = tf.constant('Hello TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
###Output
b'Hello TensorFlow!'
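Note: the cell above uses the TensorFlow 1.x `Session` API. If the installed version happens to be TensorFlow 2.x (an assumption, not stated in the linked article), the equivalent check relies on eager execution:

```python
import tensorflow as tf

# TensorFlow 2.x executes eagerly, so no Session is needed
hello = tf.constant('Hello TensorFlow!')
print(hello.numpy())  # b'Hello TensorFlow!'
```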
|
examples/AA_datatypes_and_datasets.ipynb | ###Markdown
**Abstract:** this notebook gives an introduction to `sktime` in-memory data containers and data sets, with associated functionality such as in-memory format validation, conversion, and data set loading.
**Set-up instructions:** on binder, this notebook should run out-of-the-box.
To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.
To run this notebook with a local development version of sktime, either uncomment and run the below, or `pip install -e` a local clone of the `sktime` `main` branch.
###Code
# from os import sys
# sys.path.append("..")
###Output
_____no_output_____
###Markdown
In-memory data representations and data loading
`sktime` provides modules for a number of time series related learning tasks.
These modules use `sktime` specific in-memory (i.e., python workspace) representations for time series and related objects, most importantly individual time series and time series panels. `sktime`'s in-memory representations rely on `pandas` and `numpy`, with additional conventions on the `pandas` and `numpy` object.
Users of `sktime` should be aware of these representations, since presenting the data in an `sktime` compatible representation is usually the first step in using any of the `sktime` modules.
This notebook introduces the data types used in `sktime`, related functionality such as converters and validity checkers, and common workflows for loading and conversion:
**Section 1** introduces in-memory data containers used in `sktime`, with examples.
**Section 2** introduces validity checkers and conversion functionality for in-memory data containers.
**Section 3** introduces common workflows to load data from file formats Section 1: in-memory data containers
This section provides a reference to data containers used for time series and related objects in `sktime`.
Conceptually, `sktime` distinguishes:
* the *data scientific abstract data type* - or short: **scitype** - of a data container, defined by relational and statistical properties of the data being represented and common operations on it - for instance, an abstract "time series" or an abstract "time series panel", without specifying a particular machine implementation in python
* the *machine implementation type* - or short: **mtype** - of a data container, which, for a defined *scitype*, specifies the python type and conventions on structure and value of the python in-memory object. For instance, a concrete (mathematical) time series is represented by a concrete `pandas.DataFrame` in `sktime`, subject to certain conventions on the `pandas.DataFrame`. Formally, these conventions form a specific mtype, i.e., a way to represent the (abstract) "time series" scitype.
In `sktime`, the same scitype can be implemented by multiple mtypes. For instance, `sktime` allows the user to specify time series as `pandas.DataFrame`, as `pandas.Series`, or as a `numpy.ndarray`. These are different mtypes which are admissible representations of the same scitype, "time series". Also, not all mtypes are equally rich in metadata - for instance, `pandas.DataFrame` can store column names, while this is not possible in `numpy.ndarray`.
Both scitypes and mtypes are encoded by strings in `sktime`, for easy reference.
This section introduces the mtypes for the following scitypes:
* `"Series"`, the `sktime` scitype for time series of any kind
* `"Panel"`, the `sktime` scitype for time series panels of any kind Section 1.1: Time series - the `"Series"` scitype
The major representations of time series in `sktime` are:
* `"pd.DataFrame"` - a uni- or multivariate `pandas.DataFrame`, with rows = time points, cols = variables
* `"pd.Series"` - a (univariate) `pandas.Series`, with entries corresponding to different time points
* `"np.ndarray"` - a 2D `numpy.ndarray`, with rows = time points, cols = variables
`pandas` objects must have one of the following `pandas` index types:
`Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`; if `DatetimeIndex`, the `freq` attribute must be set.
`numpy.ndarray` 2D arrays are interpreted as having a `RangeIndex` on the rows, and are generally equivalent to the `pandas.DataFrame` obtained after default coercion using the `pandas.DataFrame` constructor.
###Code
# import to retrieve examples
from sktime.datatypes import get_examples
###Output
_____no_output_____
###Markdown
Section 1.1.1: Time series - the `"pd.DataFrame"` mtype
In the `"pd.DataFrame"` mtype, time series are represented by an in-memory container `obj: pandas.DataFrame` as follows.
* structure convention: `obj.index` must be monotonic, and one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`.
* variables: columns of `obj` correspond to different variables
* variable names: column names `obj.columns`
* time points: rows of `obj` correspond to different, distinct time points
* time index: `obj.index` is interpreted as a time index.
* capabilities: can represent multivariate series; can represent unequally spaced series Example of a univariate series in `"pd.DataFrame"` representation.
The single variable has name `"a"`, and is observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="pd.DataFrame", as_scitype="Series")[0]
###Output
_____no_output_____
###Markdown
Example of a bivariate series in `"pd.DataFrame"` representation.
This series has two variables, named `"a"` and `"b"`. Both are observed at the same four time points 0, 1, 2, 3.
###Code
get_examples(mtype="pd.DataFrame", as_scitype="Series")[1]
###Output
_____no_output_____
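As a further illustration of the index conventions above, a compliant `"pd.DataFrame"` series with a `DatetimeIndex` can be built as follows (a minimal sketch with arbitrary values and column name; note that `pd.date_range` sets the required `freq` attribute):

```python
import pandas as pd

# univariate series observed daily -- the index is a DatetimeIndex with freq="D"
index = pd.date_range("2021-01-01", periods=4, freq="D")
y = pd.DataFrame({"a": [1.0, 4.0, 0.5, -3.0]}, index=index)
y.index.freq  # <Day>
```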
###Markdown
Section 1.1.2: Time series - the `"pd.Series"` mtype
In the `"pd.Series"` mtype, time series are represented by an in-memory container `obj: pandas.Series` as follows.
* structure convention: `obj.index` must be monotonic, and one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`.
* variables: there is a single variable, corresponding to the values of `obj`. Only univariate series can be represented.
* variable names: by default, there is no column name. If needed, a variable name can be provided as `obj.name`.
* time points: entries of `obj` correspond to different, distinct time points
* time index: `obj.index` is interpreted as a time index.
* capabilities: cannot represent multivariate series; can represent unequally spaced series Example of a univariate series in `"pd.Series"` mtype representation.
The single variable has name `"a"`, and is observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="pd.Series", as_scitype="Series")[0]
###Output
_____no_output_____
###Markdown
Section 1.1.3: Time series - the `"np.ndarray"` mtype
In the `"np.ndarray"` mtype, time series are represented by an in-memory container `obj: np.ndarray` as follows.
* structure convention: `obj` must be 2D, i.e., `obj.shape` must have length 2. This is also true for univariate time series.
* variables: variables correspond to columns of `obj`.
* variable names: the `"np.ndarray"` mtype cannot represent variable names.
* time points: the rows of `obj` correspond to different, distinct time points.
* time index: The time index is implicit and by-convention. The `i`-th row (for an integer `i`) is interpreted as an observation at the time point `i`.
* capabilities: can represent multivariate series; cannot represent unequally spaced series Example of a univariate series in `"np.ndarray"` mtype representation.
There is a single (unnamed) variable, it is observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="np.ndarray", as_scitype="Series")[0]
###Output
_____no_output_____
###Markdown
Example of a bivariate series in `"np.ndarray"` mtype representation.
There are two (unnamed) variables, they are both observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="np.ndarray", as_scitype="Series")[1]
###Output
_____no_output_____
###Markdown
Section 1.2: Time series panels - the `"Panel"` scitype
The major representations of time series panels in `sktime` are:
* `"pd-multiindex"` - a `pandas.DataFrame`, with row multi-index (`instances`, `timepoints`), cols = variables
* `"numpy3D"` - a 3D `np.ndarray`, with axis 0 = instances, axis 1 = variables, axis 2 = time points
* `"df-list"` - a `list` of `pandas.DataFrame`, with list index = instances, data frame rows = time points, data frame cols = variables
These representations are considered primary representations in `sktime` and are core to internal computations.
There are further, minor representations of time series panels in `sktime`:
* `"nested_univ"` - a `pandas.DataFrame`, with `pandas.Series` in cells. data frame rows = instances, data frame cols = variables, and series axis = time points
* `"numpyflat"` - a 2D `np.ndarray` with rows = instances, and columns indexed by a pair index of (variables, time points). This format is only being converted to and cannot be converted from (since number of variables and time points may be ambiguous).
* `"pd-wide"` - a `pandas.DataFrame` in wide format: has column multi-index (variables, time points), rows = instances; the "variables" index can be omitted for univariate time series
* `"pd-long"` - a `pandas.DataFrame` in long format: has cols `instances`, `timepoints`, `variable`, `value`; entries in `value` are indexed by tuples of values in (`instances`, `timepoints`, `variable`).
The minor representations are currently not fully consolidated in-code and are not discussed further below. Contributions are appreciated. Section 1.2.1: Time series panels - the `"pd-multiindex"` mtype
In the `"pd-multiindex"` mtype, time series panels are represented by an in-memory container `obj: pandas.DataFrame` as follows.
* structure convention: `obj.index` must be a pair multi-index of type `(RangeIndex, t)`, where `t` is one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex` and monotonic. `obj.index` must have names `("instances", "timepoints")`.
* instances: rows with the same `"instances"` index correspond to the same instance; rows with different `"instances"` index correspond to different instances.
* instance index: the first element of pairs in `obj.index` is interpreted as an instance index.
* variables: columns of `obj` correspond to different variables
* variable names: column names `obj.columns`
* time points: rows of `obj` with the same `"timepoints"` index correspond to the same time point; rows of `obj` with different `"timepoints"` index correspond to different time points.
* time index: the second element of pairs in `obj.index` is interpreted as a time index.
* capabilities: can represent panels of multivariate series; can represent unequally spaced series; can represent panels of unequally supported series; cannot represent panels of series with different sets of variables. Example of a panel of multivariate series in `"pd-multiindex"` mtype representation.
The panel contains three multivariate series, with instance indices 0, 1, 2. All series have two variables with names `"var_0"`, `"var_1"`. All series are observed at three time points 0, 1, 2.
###Code
get_examples(mtype="pd-multiindex", as_scitype="Panel")[0]
###Output
_____no_output_____
###Markdown
Section 1.2.2: Time series panels - the `"numpy3D"` mtype
In the `"numpy3D"` mtype, time series panels are represented by an in-memory container `obj: np.ndarray` as follows.
* structure convention: `obj` must be 3D, i.e., `obj.shape` must have length 3.
* instances: instances correspond to axis 0 elements of `obj`.
* instance index: the instance index is implicit and by-convention. The `i`-th element of axis 0 (for an integer `i`) is interpreted as indicative of observing instance `i`.
* variables: variables correspond to axis 1 elements of `obj`.
* variable names: the `"numpy3D"` mtype cannot represent variable names.
* time points: time points correspond to axis 2 elements of `obj`.
* time index: the time index is implicit and by-convention. The `i`-th element of axis 2 (for an integer `i`) is interpreted as an observation at the time point `i`.
* capabilities: can represent panels of multivariate series; cannot represent unequally spaced series; cannot represent panels of unequally supported series; cannot represent panels of series with different sets of variables. Example of a panel of multivariate series in `"numpy3D"` mtype representation.
The panel contains three multivariate series, with instance indices 0, 1, 2. All series have two variables (unnamed). All series are observed at three time points 0, 1, 2.
###Code
get_examples(mtype="numpy3D", as_scitype="Panel")[0]
###Output
_____no_output_____
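A `"numpy3D"` panel can also be built directly with `numpy` (an illustrative sketch; shape and values are arbitrary):

```python
import numpy as np

# shape convention: (instances, variables, time points)
X = np.random.default_rng(0).normal(size=(3, 2, 12))  # 3 instances, 2 variables, 12 time points
X.shape  # (3, 2, 12)
```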
###Markdown
Section 1.2.3: Time series panels - the `"df-list"` mtype
In the `"df-list"` mtype, time series panels are represented by an in-memory container `obj: List[pandas.DataFrame]` as follows.
* structure convention: `obj` must be a list of `pandas.DataFrames`. Individual list elements of `obj` must follow the `"pd.DataFrame"` mtype convention for the `"Series"` scitype.
* instances: instances correspond to different list elements of `obj`.
* instance index: the instance index of an instance is the list index at which it is located in `obj`. That is, the data at `obj[i]` correspond to observations of the instance with index `i`.
* variables: columns of `obj[i]` correspond to different variables available for instance `i`.
* variable names: column names `obj[i].columns` are the names of variables available for instance `i`.
* time points: rows of `obj[i]` correspond to different, distinct time points, at which instance `i` is observed.
* time index: `obj[i].index` is interpreted as the time index for instance `i`.
* capabilities: can represent panels of multivariate series; can represent unequally spaced series; can represent panels of unequally supported series; can represent panels of series with different sets of variables. Example of a panel of multivariate series in `"df-list"` mtype representation.
The panel contains three multivariate series, with instance indices 0, 1, 2. All series have two variables with names `"var_0"`, `"var_1"`. All series are observed at three time points 0, 1, 2.
###Code
get_examples(mtype="df-list", as_scitype="Panel")[0]
###Output
_____no_output_____
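For completeness, the minor `"nested_univ"` representation mentioned in the overview of Section 1.2 can be sketched as follows (illustrative only): a `pandas.DataFrame` with one row per instance, one column per variable, and a `pandas.Series` holding the time series in each cell.

```python
import pandas as pd

# three instances, two variables; each cell holds a pandas.Series over time points 0, 1, 2
X_nested = pd.DataFrame(
    {
        "var_0": [pd.Series([1, 2, 3]), pd.Series([1, 2, 3]), pd.Series([1, 2, 3])],
        "var_1": [pd.Series([4, 5, 6]), pd.Series([4, 55, 6]), pd.Series([42, 5, 6])],
    }
)
```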
###Markdown
Section 2: validity checking and mtype conversion
`sktime`'s `datatypes` module provides users with generic functionality for:
* checking in-memory containers against mtype conventions, with informative error messages that help moving data to the right format
* converting different mtypes to each other, for a given scitype
In this section, this functionality and intended usage workflows are presented. Section 2.1: Preparing data, checking in-memory containers for validity
`sktime`'s `datatypes` module provides convenient functionality for users to check validity of their in-memory data containers, using the `check_is` and `check_raise` functions. Both functions provide generic validity checking functionality, `check_is` returns metadata and potential issues as return arguments, while `check_raise` directly produces informative error messages in case a container does not comply with a given `mtype`.
A recommended notebook workflow to ensure that a given data container is compliant with `sktime` `mtype` specification is as follows:
1. load the data in an in-memory data container
2. identify the `scitype`, e.g., is this supposed to be a time series (`Series`) or a panel of time series (`Panel`)
3. select the target `mtype` (see Section 1 for a list), and attempt to manually reformat the data to comply with the `mtype` specification if it is not already compliant
4. run `check_raise` on the data container, to check whether it complies with the `mtype` and `scitype`
5. if an error is raised, repeat 3 and 4 until no error is raised Section 2.1.1: validity checking, example 1 (simple mistake)
Suppose we have the following `numpy.ndarray` representing a univariate time series:
###Code
import numpy as np
y = np.array([1, 6, 3, 7, 2])
###Output
_____no_output_____
###Markdown
to check compatibility with sktime:
(instruction: uncomment and run the code to see the informative error message)
###Code
from sktime.datatypes import check_raise
# check_raise(y, mtype="np.ndarray")
###Output
_____no_output_____
###Markdown
this tells us that `sktime` uses 2D numpy arrays for time series, if the `np.ndarray` mtype is used. While most methods provide convenience functionality to do this coercion automatically, the "correct" format would be 2D as follows:
###Code
check_raise(y.reshape(-1, 1), mtype="np.ndarray")
###Output
_____no_output_____
###Markdown
For use in own code or additional metadata, the error message can be obtained using the `check_is` function:
###Code
from sktime.datatypes import check_is
check_is(y, mtype="np.ndarray", return_metadata=True)
###Output
_____no_output_____
###Markdown
and metadata is produced if the argument passes the validity check:
###Code
check_is(y.reshape(-1, 1), mtype="np.ndarray", return_metadata=True)
###Output
_____no_output_____
###Markdown
Note: if the name of the mtype is ambiguous and can refer to multiple scitypes, the additional argument `scitype` must be provided. This should not be the case for any common in-memory containers, we mention this for completeness.
###Code
check_is(y, mtype="np.ndarray", scitype="Series")
###Output
_____no_output_____
###Markdown
Section 2.1.2: validity checking, example 2 (non-obvious mistake)
Suppose we have converted our data into a multi-index panel, i.e., we want to have a `Panel` of mtype `pd-multiindex`.
###Code
import pandas as pd
cols = ["instances", "time points"] + [f"var_{i}" for i in range(2)]
X = pd.concat(
[
pd.DataFrame([[0, 0, 1, 4], [0, 1, 2, 5], [0, 2, 3, 6]], columns=cols),
pd.DataFrame([[1, 0, 1, 4], [1, 1, 2, 55], [1, 2, 3, 6]], columns=cols),
pd.DataFrame([[2, 0, 1, 42], [2, 1, 2, 5], [2, 2, 3, 6]], columns=cols),
]
).set_index(["instances", "time points"])
###Output
_____no_output_____
###Markdown
It is not obvious whether `X` satisfies the `pd-multiindex` specification, so let's check:
(instruction: uncomment and run the code to see the informative error message)
###Code
from sktime.datatypes import check_raise
# check_raise(X, mtype="pd-multiindex")
###Output
_____no_output_____
###Markdown
The informative error message highlights a typo in one of the multi-index columns, so we do this:
###Code
X.index.names = ["instances", "timepoints"]
###Output
_____no_output_____
###Markdown
Now the validity check passes:
###Code
check_raise(X, mtype="pd-multiindex")
###Output
_____no_output_____
###Markdown
Section 2.1.3: inferring the mtype
`sktime` also provides functionality to infer the mtype of an in-memory data container, which is useful in case one is sure that the container is compliant but one has forgotten the exact string, or in a case where one would like to know whether an in-memory container is already in some supported, compliant format. For this, only the scitype needs to be specified:
###Code
from sktime.datatypes import mtype
mtype(X, as_scitype="Panel")
###Output
_____no_output_____
###Markdown
Section 2.2: conversion between mtypes
`sktime`'s `datatypes` module also offers uninfied conversion functionality between mtypes. This is useful for users as well as for method developers.
The `convert` function requires to specify the mtype to convert from, and the mtype to convert to. The `convert_to` function only requires to specify the mtype to convert to, automatically inferring the mtype of the input if it can be inferred. `convert_to` should be used if the input can have multiple mtypes. Section 2.2.1: simple conversion
Example: converting a `numpy3D` panel of time series to `pd-multiindex` mtype:
###Code
from sktime.datatypes import get_examples
X = get_examples(mtype="numpy3D", as_scitype="Panel")[0]
X
from sktime.datatypes import convert
convert(X, from_type="numpy3D", to_type="pd-multiindex")
from sktime.datatypes import convert_to
convert_to(X, to_type="pd-multiindex")
###Output
_____no_output_____
###Markdown
Section 2.2.2: advanced conversion features
`convert_to` also allows specifying multiple output types. The `to_type` argument can be a list of mtypes. In that case, the input is passed through unchanged if its mtype is on the list; if the mtype of the input is not on the list, it is converted to the mtype which is the first element of the list.
Example: converting a panel of time series of to either `"pd-multiindex"` or `"numpy3D"`. If the input is `"numpy3D"`, it remains unchanged. If the input is `"df-list"`, it is converted to `"pd-multiindex"`.
###Code
from sktime.datatypes import get_examples
X = get_examples(mtype="numpy3D", as_scitype="Panel")[0]
X
from sktime.datatypes import convert_to
convert_to(X, to_type=["pd-multiindex", "numpy3D"])
X = get_examples(mtype="df-list", as_scitype="Panel")[0]
X
convert_to(X, to_type=["pd-multiindex", "numpy3D"])
###Output
_____no_output_____
###Markdown
Section 2.2.3: inspecting implemented conversions
Currently, conversions are work in progress, and not all possible conversions are available - contributions are welcome.
To see which conversions are currently implemented for a scitype, use the `_conversions_defined` developer method from the `datatypes._convert` module. This produces a table with a "1" if conversion from the mtype in the row to the mtype in the column is implemented.
###Code
from sktime.datatypes._convert import _conversions_defined
_conversions_defined(scitype="Panel")
###Output
_____no_output_____
###Markdown
Section 3: loading data sets
`sktime`'s `datasets` module allows loading datasets for testing and benchmarking. This includes:
* example data sets that ship directly with `sktime`
* downloaders for data sets from common repositories
All data retrieved in this way are in `sktime` compatible in-memory and/or file formats.
Currently, no systematic tagging and registry retrieval for the available data sets is implemented - contributions to this would be very welcome. Section 3.1: forecasting data sets
`sktime`'s `datasets` module currently allows loading the following forecasting example data sets:
| dataset name | loader function | properties |
|----------|:-------------:|------:|
| Box/Jenkins airline data | `load_airline` | univariate |
| Lynx sales data | `load_lynx` | univariate |
| Shampoo sales data | `load_shampoo_sales` | univariate |
| Pharmaceutical Benefit Scheme data | `load_PBS_dataset` | univariate |
| Longley US macroeconomic data | `load_longley` | multivariate |
| MTS consumption/income data | `load_uschange` | multivariate |
`sktime` currently has no connectors to forecasting data repositories - contributions are much appreciated.
Forecasting data sets are all of `Series` scitype, they can be univariate or multivariate.
Loaders for univariate data have no arguments, and always return the data in the `"pd.Series"` mtype:
###Code
from sktime.datasets import load_airline
load_airline()
###Output
_____no_output_____
###Markdown
Loaders for multivariate data can be called in two ways:
* without an argument, in which case a multivariate series of `"pd.DataFrame"` mtype is returned:
###Code
from sktime.datasets import load_longley
load_longley()
###Output
_____no_output_____
###Markdown
* with an argument `y_name` that must coincide with one of the column/variable names, in which a pair of series `y`, `X` is returned, with `y` of `"pd.Series"` mtype, and `X` of `"pd.DataFrame"` mtype - this is convenient for univariate forecasting with exogeneous variables.
###Code
y, X = load_longley(y_name="TOTEMP")
y
X
###Output
_____no_output_____
###Markdown
Section 3.2: time series classification data sets
`sktime`'s `datasets` module currently allows loading the following time series classification example data sets:
| dataset name | loader function | properties |
|----------|:-------------:|------:|
| Appliance power consumption data | `load_acsf1` | univariate, equal length/index |
| Arrowhead shape data | `load_arrow_head` | univariate, equal length/index |
| Gunpoint motion data | `load_gunpoint` | univariate, equal length/index |
| Italy power demand data | `load_italy_power_demand` | univariate, equal length/index |
| Japanese vowels data | `load_japanese_vowels` | univariate, equal length/index |
| OSUleaf leaf shape data | `load_osuleaf` | univariate, equal length/index |
| Basic motions data | `load_basic_motions` | multivariate, equal length/index |
Currently, there are no unequal length or unequal index time series classification example data directly in `sktime`.
`sktime` also provides a full interface to the UCR/UEA time series data set archive, via the `load_UCR_UEA_dataset` function.
The UCR/UEA archive also contains time series classification data sets which are multivariate, or unequal length/index (in either combination).
Section 3.2.2: time series classification data sets in `sktime`
Time series classification data sets consist of a panel of time series of `Panel` scitype, together with classification labels, one per time series.
If a loader is invoked with minimal arguments, the data are returned as `"nested_univ"` mtype, with labels and series to classify in the same `pd.DataFrame`. Using the `return_X_y=True` argument, the data are returned separated into features `X` and labels `y`, with `X` a `Panel` of `nested_univ` mtype, and `y` a `sklearn`-compatible numpy vector of labels:
###Code
from sktime.datasets import load_arrow_head
X, y = load_arrow_head(return_X_y=True)
X
y
###Output
_____no_output_____
###Markdown
The panel can be converted from `"nested_univ"` mtype to other mtype formats, using `datatypes.convert` or `convert_to` (see above):
###Code
from sktime.datatypes import convert_to
convert_to(X, to_type="pd-multiindex")
###Output
_____no_output_____
###Markdown
Data set loaders can be invoked with the `split` parameter to obtain reproducible training and test sets for comparison across studies. If `split="train"`, a pre-defined training set is retrieved; if `split="test"`, a pre-defined test set is retrieved.
###Code
X_train, y_train = load_arrow_head(return_X_y=True, split="train")
X_test, y_test = load_arrow_head(return_X_y=True, split="test")
# this retrieves training and test X/y for reproducible use in studies
###Output
_____no_output_____
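To confirm the in-memory format of the loaded panels, the mtype inference utility from Section 2 can be applied (a small illustrative check):

```python
from sktime.datatypes import mtype

# loaders return panels in the "nested_univ" mtype by default (see above)
mtype(X_train, as_scitype="Panel")
```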
###Markdown
Section 3.2.3: time series classification data sets from the UCR/UEA time series classification repository
The `load_UCR_UEA_dataset` utility will download datasets from the UCR/UEA time series classification repository and make them available as in-memory datasets, with the same syntax as `sktime` native data set loaders.
Datasets are indexed by unique string identifiers, which can be inspected on the [repository itself](https://www.timeseriesclassification.com/), or via the register in the `datasets.tsc_dataset_names` module, by property:
###Code
from sktime.datasets.tsc_dataset_names import univariate
###Output
_____no_output_____
###Markdown
The imported variables are all lists of strings which contain the unique string identifiers of datasets with certain properties, as follows:
| register name | uni-/multivariate | equal/unequal length | with/without missing values |
|----------|:-------------:|------:|------:|
| `univariate` | only univariate | both included | both included |
| `multivariate` | only multivariate | both included | both included |
| `univariate_equal_length` | only univariate | only equal length | both included |
| `univariate_variable_length` | only univariate | only unequal length | both included |
| `univariate_missing_values` | only univariate | both included | only with missing values |
| `multivariate_equal_length` | only multivariate | only equal length | both included |
| `multivariate_unequal_length` | only multivariate | only unequal length | both included |
Lookup and retrieval using these lists is, admittedly, a bit inconvenient - contributions to `sktime` of lookup functions such as `all_estimators` or `all_tags`, based on capability or property tags attached to datasets, would be very much appreciated.
An example list is displayed below:
###Code
univariate
###Output
_____no_output_____
###Markdown
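Until such lookup functions exist, simple set operations on the registers can emulate a property-based lookup - a sketch using only the register lists from the table above, here selecting univariate, equal length data sets without missing values:
###Code
from sktime.datasets.tsc_dataset_names import (
    univariate_equal_length,
    univariate_missing_values,
)

# set difference: univariate data sets of equal length that have no missing values
no_missing_equal_length = sorted(
    set(univariate_equal_length) - set(univariate_missing_values)
)
no_missing_equal_length[:5]
###Output
_____no_output_____
###Markdown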
The loader function `load_UCR_UEA_dataset` behaves exactly like the `sktime` native data set loaders, with an additional argument `name` that should be set to one of the unique identifying strings for the UCR/UEA datasets, for instance:
###Code
from sktime.datasets import load_UCR_UEA_dataset
X, y = load_UCR_UEA_dataset(name="Yoga", return_X_y=True)
###Output
_____no_output_____
###Markdown
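As with the native loaders, reproducible train/test splits can be retrieved, and the returned panel can be converted to other mtypes - a sketch assuming that the `split` argument of `load_UCR_UEA_dataset` mirrors that of the native loaders:
###Code
# retrieve the pre-defined training and test split of the "Yoga" data set
X_train, y_train = load_UCR_UEA_dataset(name="Yoga", split="train", return_X_y=True)
X_test, y_test = load_UCR_UEA_dataset(name="Yoga", split="test", return_X_y=True)

# the result is an sktime Panel like any other, so the converters apply
from sktime.datatypes import convert_to
convert_to(X_train, to_type="pd-multiindex")
###Output
_____no_output_____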
**Abstract:** this notebook give an introduction to `sktime` in-memory data containers and data sets, with associated functionality such as in-memory format validation, conversion, and data set loading.
**Set-up instructions:** on binder, this nootebook should run out-of-the-box.
To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.
To run this notebook with a local development version of sktime, either uncomment and run the below, or `pip install -e` a local clone of the `sktime` `main` branch.
###Code
# from os import sys
# sys.path.append("..")
###Output
_____no_output_____
###Markdown
In-memory data representations and data loading
`sktime` provides modules for a number of time series related learning tasks.
These modules use `sktime` specific in-memory (i.e., python workspace) representations for time series and related objects, most importantly individual time series and time series panels. `sktime`'s in-memory representations rely on `pandas` and `numpy`, with additional conventions on the `pandas` and `numpy` object.
Users of `sktime` should be aware of these representations, since presenting the data in an `sktime` compatible representation is usually the first step in using any of the `sktime` modules.
This notebook introduces the data types used in `sktime`, related functionality such as converters and validity checkers, and common workflows for loading and conversion:
**Section 1** introduces in-memory data containers used in `sktime`, with examples.
**Section 2** introduces validity checkers and conversion functionality for in-memory data containers.
**Section 3** introduces common workflows to load data from file formats Section 1: in-memory data containers
This section provides a reference to data containers used for time series and related objets in `sktime`.
Conceptually, `sktime` distinguishes:
* the *data scientific abstract data type* - or short: **scitype** - of a data container, defined by relational and statistical properties of the data being represented and common operations on it - for instance, an abstract "time series" or an abstract "time series panel", without specifying a particular machine implementation in python
* the *machine implementation type* - or short: **mtype** - of a data container, which, for a defined *scitype*, specifies the python type and conventions on structure and value of the python in-memory object. For instance, a concrete (mathematical) time series is represented by a concrete `pandas.DataFrame` in `sktime`, subject to certain conventions on the `pandas.DataFrame`. Formally, these conventions form a specific mtype, i.e., a way to represent the (abstract) "time series" scitype.
In `sktime`, the same scitype can be implemented by multiple mtypes. For instance, `sktime` allows the user to specify time series as `pandas.DataFrame`, as `pandas.Series`, or as a `numpy.ndarray`. These are different mtypes which are admissible representations of the same scitype, "time series". Also, not all mtypes are equally rich in metadata - for instance, `pandas.DataFrame` can store column names, while this is not possible in `numpy.ndarray`.
Both scitypes and mtypes are encoded by strings in `sktime`, for easy reference.
This section introduces the mtypes for the following scitypes:
* `"Series"`, the `sktime` scitype for time series of any kind
* `"Panel"`, the `sktime` scitype for time series panels of any kind Section 1.1: Time series - the `"Series"` scitype
The major representations of time series in `sktime` are:
* `"pd.DataFrame"` - a uni- or multivariate `pandas.DataFrame`, with rows = time points, cols = variables
* `"pd.Series"` - a (univariate) `pandas.Series`, with entries corresponding to different time points
* `"np.ndarray"` - a 2D `numpy.ndarray`, with rows = time points, cols = variables
`pandas` objects must have one of the following `pandas` index types:
`Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`; if `DatetimeIndex`, the `freq` attribute must be set.
`numpy.ndarray` 2D arrays are interpreted as having an `RangeIndex` on the rows, and generally equivalent to the `pandas.DataFrame` obtained after default coercion using the `pandas.DataFrame` constructor.
###Code
# import to retrieve examples
from sktime.datatypes import get_examples
###Output
_____no_output_____
###Markdown
Section 1.1.1: Time series - the `"pd.DataFrame"` mtype
In the `"pd.DataFrame"` mtype, time series are represented by an in-memory container `obj: pandas.DataFrame` as follows.
* structure convention: `obj.index` must be monotonous, and one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`.
* variables: columns of `obj` correspond to different variables
* variable names: column names `obj.columns`
* time points: rows of `obj` correspond to different, distinct time points
* time index: `obj.index` is interpreted as a time index.
* capabilities: can represent multivariate series; can represent unequally spaced series Example of a univariate series in `"pd.DataFrame"` representation.
The single variable has name `"a"`, and is observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="pd.DataFrame", as_scitype="Series")[0]
###Output
_____no_output_____
###Markdown
Example of a bivariate series in `"pd.DataFrame"` representation.
This series has two variables, named `"a"` and `"b"`. Both are observed at the same four time points 0, 1, 2, 3.
###Code
get_examples(mtype="pd.DataFrame", as_scitype="Series")[1]
###Output
_____no_output_____
###Markdown
Section 1.1.2: Time series - the `"pd.Series"` mtype
In the `"pd.Series"` mtype, time series are represented by an in-memory container `obj: pandas.Series` as follows.
* structure convention: `obj.index` must be monotonous, and one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`.
* variables: there is a single variable, corresponding to the values of `obj`. Only univariate series can be represented.
* variable names: by default, there is no column name. If needed, a variable name can be provided as `obj.name`.
* time points: entries of `obj` correspond to different, distinct time points
* time index: `obj.index` is interpreted as a time index.
* capabilities: cannot represent multivariate series; can represent unequally spaced series Example of a univariate series in `"pd.Series"` mtype representation.
The single variable has name `"a"`, and is observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="pd.Series", as_scitype="Series")[0]
###Output
_____no_output_____
###Markdown
Section 1.1.3: Time series - the `"np.ndarray"` mtype
In the `"np.ndarray"` mtype, time series are represented by an in-memory container `obj: np.ndarray` as follows.
* structure convention: `obj` must be 2D, i.e., `obj.shape` must have length 2. This is also true for univariate time series.
* variables: variables correspond to columns of `obj`.
* variable names: the `"np.ndarray"` mtype cannot represent variable names.
* time points: the rows of `obj` correspond to different, distinct time points.
* time index: The time index is implicit and by-convention. The `i`-th row (for an integer `i`) is interpreted as an observation at the time point `i`.
* capabilities: cannot represent multivariate series; cannot represent unequally spaced series Example of a univariate series in `"np.ndarray"` mtype representation.
There is a single (unnamed) variable, it is observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="np.ndarray", as_scitype="Series")[0]
###Output
_____no_output_____
###Markdown
Example of a bivariate series in `"np.ndarray"` mtype representation.
There are two (unnamed) variables, they are both observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="np.ndarray", as_scitype="Series")[1]
###Output
_____no_output_____
###Markdown
Section 1.2: Time series panels - the `"Panel"` scitype
The major representations of time series panels in `sktime` are:
* `"pd-multiindex"` - a `pandas.DataFrame`, with row multi-index (`instances`, `timepoints`), cols = variables
* `"numpy3D"` - a 3D `np.ndarray`, with axis 0 = instances, axis 1 = variables, axis 2 = time points
* `"df-list"` - a `list` of `pandas.DataFrame`, with list index = instances, data frame rows = time points, data frame cols = variables
These representations are considered primary representations in `sktime` and are core to internal computations.
There are further, minor representations of time series panels in `sktime`:
* `"nested_univ"` - a `pandas.DataFrame`, with `pandas.Series` in cells. data frame rows = instances, data frame cols = variables, and series axis = time points
* `"numpyflat"` - a 2D `np.ndarray` with rows = instances, and columns indexed by a pair index of (variables, time points). This format is only being converted to and cannot be converted from (since number of variables and time points may be ambiguous).
* `"pd-wide"` - a `pandas.DataFrame` in wide format: has column multi-index (variables, time points), rows = instances; the "variables" index can be omitted for univariate time series
* `"pd-long"` - a `pandas.DataFrame` in long format: has cols `instances`, `timepoints`, `variable`, `value`; entries in `value` are indexed by tuples of values in (`instances`, `timepoints`, `variable`).
The minor representations are currently not fully consolidated in-code and are not discussed further below. Contributions are appreciated. Section 1.2.1: Time series panels - the `"pd-multiindex"` mtype
In the `"pd-multiindex"` mtype, time series panels are represented by an in-memory container `obj: pandas.DataFrame` as follows.
* structure convention: `obj.index` must be a pair multi-index of type `(RangeIndex, t)`, where `t` is one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex` and monotonous. `obj.index` must have name `("instances", "timepoints")`.
* instances: rows with the same `"instances"` index correspond to the same instance; rows with different `"instances"` index correspond to different instances.
* instance index: the first element of pairs in `obj.index` is interpreted as an instance index.
* variables: columns of `obj` correspond to different variables
* variable names: column names `obj.columns`
* time points: rows of `obj` with the same `"timepoints"` index correspond correspond to the same time point; rows of `obj` with different `"timepoints"` index correspond correspond to the different time points.
* time index: the second element of pairs in `obj.index` is interpreted as a time index.
* capabilities: can represent panels of multivariate series; can represent unequally spaced series; can represent panels of unequally supported series; cannot represent panels of series with different sets of variables. Example of a panel of multivariate series in `"pd-multiindex"` mtype representation.
The panel contains three multivariate series, with instance indices 0, 1, 2. All series have two variables with names `"var_0"`, `"var_1"`. All series are observed at three time points 0, 1, 2.
###Code
get_examples(mtype="pd-multiindex", as_scitype="Panel")[0]
###Output
_____no_output_____
###Markdown
Section 1.2.2: Time series panels - the `"numpy3D"` mtype
In the `"numpy3D"` mtype, time series panels are represented by an in-memory container `obj: np.ndarray` as follows.
* structure convention: `obj` must be 3D, i.e., `obj.shape` must have length 2.
* instances: instances correspond to axis 0 elements of `obj`.
* instance index: the instance index is implicit and by-convention. The `i`-th element of axis 0 (for an integer `i`) is interpreted as indicative of observing instance `i`.
* variables: variables correspond to axis 1 elements of `obj`.
* variable names: the `"numpy3D"` mtype cannot represent variable names.
* time points: time points correspond to axis 2 elements of `obj`.
* time index: the time index is implicit and by-convention. The `i`-th elemtn of axis 2 (for an integer `i`) is interpreted as an observation at the time point `i`.
* capabilities: can represent panels of multivariate series; cannot represent unequally spaced series; cannot represent panels of unequally supported series; cannot represent panels of series with different sets of variables. Example of a panel of multivariate series in `"numpy3D"` mtype representation.
The panel contains three multivariate series, with instance indices 0, 1, 2. All series have two variables (unnamed). All series are observed at three time points 0, 1, 2.
###Code
get_examples(mtype="numpy3D", as_scitype="Panel")[0]
###Output
_____no_output_____
###Markdown
Section 1.2.3: Time series panels - the `"df-list"` mtype
In the `"df-list"` mtype, time series panels are represented by an in-memory container `obj: List[pandas.DataFrame]` as follows.
* structure convention: `obj` must be a list of `pandas.DataFrames`. Individual list elements of `obj` must follow the `"pd.DataFrame"` mtype convention for the `"Series"` scitype.
* instances: instances correspond to different list elements of `obj`.
* instance index: the instance index of an instance is the list index at which it is located in `obj`. That is, the data at `obj[i]` correspond to observations of the instance with index `i`.
* variables: columns of `obj[i]` correspond to different variables available for instance `i`.
* variable names: column names `obj[i].columns` are the names of variables available for instance `i`.
* time points: rows of `obj[i]` correspond to different, distinct time points, at which instance `i` is observed.
* time index: `obj[i].index` is interpreted as the time index for instance `i`.
* capabilities: can represent panels of multivariate series; can represent unequally spaced series; can represent panels of unequally supported series; can represent panels of series with different sets of variables. Example of a panel of multivariate series in `"df-list"` mtype representation.
The panel contains three multivariate series, with instance indices 0, 1, 2. All series have two variables with names `"var_0"`, `"var_1"`. All series are observed at three time points 0, 1, 2.
###Code
get_examples(mtype="df-list", as_scitype="Panel")[0]
###Output
_____no_output_____
###Markdown
Section 2: validity checking and mtype conversion
`sktime`'s `datatypes` module provides users with generic functionality for:
* checking in-memory containers against mtype conventions, with informative error messages that help moving data to the right format
* converting different mtypes to each other, for a given scitype
In this section, this functionality and intended usage worfklows are presented. Section 2.1: Preparing data, checking in-memory containers for validity
`sktime`'s `datatypes` module provides convenient functionality for users to check validity of their in-memory data containers, using the `check_is_mtype` and `check_raise` functions. Both functions provide generic validity checking functionality, `check_is_mtype` returns metadata and potential issues as return arguments, while `check_raise` directly produces informative error messages in case a container does not comply with a given `mtype`.
A recommended notebook workflow to ensure that a given data container is compliant with `sktime` `mtype` specification is as follows:
1. load the data in an in-memory data container
2. identify the `scitype`, e.g., is this supposed to be a time series (`Series`) or a panel of time series (`Panel`)
3. select the target `mtype` (see Section 1 for a list), and attempt to manually reformat the data to comply with the `mtype` specification if it is not already compliant
4. run `check_raise` on the data container, to check whether it complies with the `mtype` and `scitype`
5. if an error is raised, repeat 3 and 4 until no error is raised Section 2.1.1: validity checking, example 1 (simple mistake)
Suppose we have the following `numpy.ndarray` representing a univariate time series:
###Code
import numpy as np
y = np.array([1, 6, 3, 7, 2])
###Output
_____no_output_____
###Markdown
to check compatibility with sktime:
(instruction: uncomment and run the code to see the informative error message)
###Code
from sktime.datatypes import check_raise
# check_raise(y, mtype="np.ndarray")
###Output
_____no_output_____
###Markdown
this tells us that `sktime` uses 2D numpy arrays for time series, if the `np.ndarray` mtype is used. While most methods provide convenience functionality to do this coercion automatically, the "correct" format would be 2D as follows:
###Code
check_raise(y.reshape(-1, 1), mtype="np.ndarray")
###Output
_____no_output_____
###Markdown
For use in own code or additional metadata, the error message can be obtained using the `check_is_mtype` function:
###Code
from sktime.datatypes import check_is_mtype
check_is_mtype(y, mtype="np.ndarray", return_metadata=True)
###Output
_____no_output_____
###Markdown
and metadata is produced if the argument passes the validity check:
###Code
check_is_mtype(y.reshape(-1, 1), mtype="np.ndarray", return_metadata=True)
###Output
_____no_output_____
###Markdown
Note: if the name of the mtype is ambiguous and can refer to multiple scitypes, the additional argument `scitype` must be provided. This should not be the case for any common in-memory containers, we mention this for completeness.
###Code
check_is_mtype(y, mtype="np.ndarray", scitype="Series")
###Output
_____no_output_____
###Markdown
Section 2.1.2: validity checking, example 2 (non-obvious mistake)
Suppose we have converted our data into a multi-index panel, i.e., we want to have a `Panel` of mtype `pd-multiindex`.
###Code
import pandas as pd
cols = ["instances", "time points"] + [f"var_{i}" for i in range(2)]
X = pd.concat(
[
pd.DataFrame([[0, 0, 1, 4], [0, 1, 2, 5], [0, 2, 3, 6]], columns=cols),
pd.DataFrame([[1, 0, 1, 4], [1, 1, 2, 55], [1, 2, 3, 6]], columns=cols),
pd.DataFrame([[2, 0, 1, 42], [2, 1, 2, 5], [2, 2, 3, 6]], columns=cols),
]
).set_index(["instances", "time points"])
###Output
_____no_output_____
###Markdown
It is not obvious whether `X` satisfies the `pd-multiindex` specification, so let's check:
(instruction: uncomment and run the code to see the informative error message)
###Code
from sktime.datatypes import check_raise
# check_raise(X, mtype="pd-multiindex")
###Output
_____no_output_____
###Markdown
The informative error message highlights a typo in one of the multi-index columns, so we do this:
###Code
X.index.names = ["instances", "timepoints"]
###Output
_____no_output_____
###Markdown
Now the validity check passes:
###Code
check_raise(X, mtype="pd-multiindex")
###Output
_____no_output_____
###Markdown
Section 2.1.3: inferring the mtype
`sktime` also provides functionality to infer the mtype of an in-memory data container, which is useful in case one is sure that the container is compliant but one has forgotten the exact string, or in a case where one would like to know whether an in-memory container is already in some supported, compliant format. For this, only the scitype needs to be specified:
###Code
from sktime.datatypes import mtype
mtype(X, as_scitype="Panel")
###Output
_____no_output_____
###Markdown
Section 2.2: conversion between mtypes
`sktime`'s `datatypes` module also offers uninfied conversion functionality between mtypes. This is useful for users as well as for method developers.
The `convert` function requires to specify the mtype to convert from, and the mtype to convert to. The `convert_to` function only requires to specify the mtype to convert to, automatically inferring the mtype of the input if it can be inferred. `convert_to` should be used if the input can have multiple mtypes. Section 2.2.1: simple conversion
Example: converting a `numpy3D` panel of time series to `pd-multiindex` mtype:
###Code
from sktime.datatypes import get_examples
X = get_examples(mtype="numpy3D", as_scitype="Panel")[0]
X
from sktime.datatypes import convert
convert(X, from_type="numpy3D", to_type="pd-multiindex")
from sktime.datatypes import convert_to
convert_to(X, to_type="pd-multiindex")
###Output
_____no_output_____
###Markdown
Section 2.2.2: advanced conversion features
`convert_to` also allows to specify multiple output types. The `to_type` argument can be a list of mtypes. In that case, the input passed through unchanged if its mtype is on the list; if the mtype of the input is not on the list, it is converted to the mtype which is the first element of the list.
Example: converting a panel of time series of to either `"pd-multiindex"` or `"numpy3D"`. If the input is `"numpy3D"`, it remains unchanged. If the input is `"df-list"`, it is converted to `"pd-multiindex"`.
###Code
from sktime.datatypes import get_examples
X = get_examples(mtype="numpy3D", as_scitype="Panel")[0]
X
from sktime.datatypes import convert_to
convert_to(X, to_type=["pd-multiindex", "numpy3D"])
X = get_examples(mtype="df-list", as_scitype="Panel")[0]
X
convert_to(X, to_type=["pd-multiindex", "numpy3D"])
###Output
_____no_output_____
###Markdown
Section 2.2.3: inspecting implemented conversions
Currently, conversions are work in progress, and not all possible conversions are available - contributions are welcome.
To see which conversions are currently implemented for a scitype, use the `_conversions_defined` developer method from the `datatypes._convert` module. This produces a table with a "1" if conversion from mtype in row row to mtypw in column is implemented.
###Code
from sktime.datatypes._convert import _conversions_defined
_conversions_defined(scitype="Panel")
###Output
_____no_output_____
###Markdown
Section 3: loading data sets
`sktime`'s `datasets` module allows to load datasets for testing and benchmarking. This includes:
* example data sets that ship directly with `sktime`
* downloaders for data sets from common repositories
All data retrieved in this way are in `sktime` compatible in-memory and/or file formats.
Currently, no systematic tagging and registry retrieval for the available data sets is implemented - contributions to this would be very welcome. Section 3.1: forecasting data sets
`sktime`'s `datasets` module currently allows to load a the following forecasting example data sets:
| dataset name | loader function | properties |
|----------|:-------------:|------:|
| Box/Jenkins airline data | `load_airline` | univariate |
| Lynx sales data | `load_lynx` | univariate |
| Shampoo sales data | `load_shampoo_sales` | univariate |
| Pharmaceutical Benefit Scheme data | `load_PBS_dataset` | univariate |
| Longley US macroeconomic data | `load_longley` | multivariate |
| MTS consumption/income data | `load_uschange` | multivariate |
`sktime` currently has no connectors to forecasting data repositories - contributions are much appreciated.
Forecasting data sets are all of `Series` scitype, they can be univariate or multivariate.
Loaders for univariate data have no arguments, and always return the data in the `"pd.Series"` mtype:
###Code
from sktime.datasets import load_airline
load_airline()
###Output
_____no_output_____
###Markdown
Loaders for multivariate data can be called in two ways:
* without an argument, in which case a multivariate series of `"pd.DataFrame"` mtype is returned:
###Code
from sktime.datasets import load_longley
load_longley()
###Output
_____no_output_____
###Markdown
* with an argument `y_name` that must coincide with one of the column/variable names, in which a pair of series `y`, `X` is returned, with `y` of `"pd.Series"` mtype, and `X` of `"pd.DataFrame"` mtype - this is convenient for univariate forecasting with exogeneous variables.
###Code
y, X = load_longley(y_name="TOTEMP")
y
X
###Output
_____no_output_____
###Markdown
Section 3.2: time series classification data sets
`sktime`'s `datasets` module currently allows to load a the following time series classification example data sets:
| dataset name | loader function | properties |
|----------|:-------------:|------:|
| Appliance power consumption data | `load_acsf1` | univariate, equal length/index |
| Arrowhead shape data | `load_arrow_head` | univariate, equal length/index |
| Gunpoint motion data | `load_gunpoint` | univariate, equal length/index |
| Italy power demand data | `load_italy_power_demand` | univariate, equal length/index |
| Japanese vowels data | `load_japanese_vowels` | univariate, equal length/index |
| OSUleaf leaf shape data | `load_osuleaf` | univariate, equal length/index |
| Basic motions data | `load_basic_motions` | multivariate, equal length/index |
Currently, there are no unequal length or unequal index time series classification example data directly in `sktime`.
`sktime` also provides a full interface to the UCR/UEA time series data set archive, via the `load_UCR_UEA_dataset` function.
The UCR/UEA archive also contains time series classification data sets which are multivariate, or unequal length/index (in either combination).
Section 3.2.2: time series classification data sets in `sktime`
Time series classification data sets consists of a panel of time series of `Panel` scitype, together with classification labels, one per time series.
If a loader is invoked with minimal arguments, the data are returned as `"nested_univ"` mtype, with labels and series to classify in the same `pd.DataFrame`. Using the `return_X_y=True` argument, the data are returned separated into features `X` and labels `y`, with `X` a `Panel` of `nested_univ` mtype, and `y` and a `sklearn` compatible numpy vector of labels:
###Code
from sktime.datasets import load_arrow_head
X, y = load_arrow_head(return_X_y=True)
X
y
###Output
_____no_output_____
###Markdown
The panel can be converted from `"nested_univ"` mtype to other mtype formats, using `datatypes.convert` or `convert_to` (see above):
###Code
from sktime.datatypes import convert_to
convert_to(X, to_type="pd-multiindex")
###Output
_____no_output_____
###Markdown
Data set loaders can be invoked with the `split` parameter to obtain reproducible training and test sets for comparison across studies. If `split="train"`, a pre-defined training set is retrieved; if `split="test"`, a pre-defined test set is retrieved.
###Code
X_train, y_train = load_arrow_head(return_X_y=True, split="train")
X_test, y_test = load_arrow_head(return_X_y=True, split="test")
# this retrieves training and test X/y for reproducible use in studies
###Output
_____no_output_____
###Markdown
Section 3.2.3: time series classification data sets from the UCR/UEA time series classification repository
The `load_UCR_UEA_dataset` utility will download datasets from the UCR/UEA time series classification repository and make them available as in-memory datasets, with the same syntax as `sktime` native data set loaders.
Datasets are indexed by unique string identifiers, which can be inspected on the [repository itself](https://www.timeseriesclassification.com/), or via the register in the `datasets.tsc_dataset_names` module, by property:
###Code
from sktime.datasets.tsc_dataset_names import univariate
###Output
_____no_output_____
###Markdown
The imported variables are all lists of strings which contain the unique string identifiers of datasets with certain properties, as follows:
| register name | uni-/multivariate | equal/unequal length | with/without missing values |
|----------|:-------------:|------:|------:|
| `univariate` | only univariate | both included | both included |
| `multivariate` | only multivariate | both included | both included |
| `univariate_equal_length` | only univariate | only equal length | both included |
| `univariate_variable_length` | only univariate | only unequal length | both included |
| `univariate_missing_values` | only univariate | both included | only with missing values |
| `multivariate_equal_length` | only multivariate | only equal length | both included |
| `multivariate_unequal_length` | only multivariate | only unequal length | both included |
Lookup and retrieval using these lists is, admittedly, a bit inconvenient - contributions to `sktime` to write lookup functions such as `all_estimators` or `all_tags`, based on capability or property tags attached to datasets, would be very much appreciated.
An example list is displayed below:
###Code
univariate
###Output
_____no_output_____
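###Markdown
As an illustration of the manual lookup described above (a minimal sketch using only the register names listed in the table), the registers can be combined with ordinary set operations:
###Code
from sktime.datasets.tsc_dataset_names import (
    univariate_equal_length,
    univariate_missing_values,
)

# univariate datasets that are equal length *and* contain missing values
sorted(set(univariate_equal_length) & set(univariate_missing_values))
###Output
_____no_output_____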
###Markdown
The loader function `load_UCR_UEA_dataset` behaves exactly as `sktime` data loaders, with an additional argument `name` that should be set to one of the unique identifying strings for the UCR/UEA datasets, for instance:
###Code
from sktime.datasets import load_UCR_UEA_dataset
X, y = load_UCR_UEA_dataset(name="Yoga", return_X_y=True)
###Output
_____no_output_____
###Markdown
**Abstract:** this notebook gives an introduction to `sktime` in-memory data containers and data sets, with associated functionality such as in-memory format validation, conversion, and data set loading.
**Set-up instructions:** on binder, this notebook should run out-of-the-box.
To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.
To run this notebook with a local development version of sktime, either uncomment and run the below, or `pip install -e` a local clone of the `sktime` `main` branch.
###Code
# from os import sys
# sys.path.append("..")
###Output
_____no_output_____
###Markdown
In-memory data representations and data loading
`sktime` provides modules for a number of time series related learning tasks.
These modules use `sktime` specific in-memory (i.e., python workspace) representations for time series and related objects, most importantly individual time series and time series panels. `sktime`'s in-memory representations rely on `pandas` and `numpy`, with additional conventions on the `pandas` and `numpy` objects.
Users of `sktime` should be aware of these representations, since presenting the data in an `sktime` compatible representation is usually the first step in using any of the `sktime` modules.
This notebook introduces the data types used in `sktime`, related functionality such as converters and validity checkers, and common workflows for loading and conversion:
**Section 1** introduces in-memory data containers used in `sktime`, with examples.
**Section 2** introduces validity checkers and conversion functionality for in-memory data containers.
**Section 3** introduces common workflows to load data from file formats Section 1: in-memory data containers
This section provides a reference to data containers used for time series and related objects in `sktime`.
Conceptually, `sktime` distinguishes:
* the *data scientific abstract data type* - or short: **scitype** - of a data container, defined by relational and statistical properties of the data being represented and common operations on it - for instance, an abstract "time series" or an abstract "time series panel", without specifying a particular machine implementation in python
* the *machine implementation type* - or short: **mtype** - of a data container, which, for a defined *scitype*, specifies the python type and conventions on structure and value of the python in-memory object. For instance, a concrete (mathematical) time series is represented by a concrete `pandas.DataFrame` in `sktime`, subject to certain conventions on the `pandas.DataFrame`. Formally, these conventions form a specific mtype, i.e., a way to represent the (abstract) "time series" scitype.
In `sktime`, the same scitype can be implemented by multiple mtypes. For instance, `sktime` allows the user to specify time series as `pandas.DataFrame`, as `pandas.Series`, or as a `numpy.ndarray`. These are different mtypes which are admissible representations of the same scitype, "time series". Also, not all mtypes are equally rich in metadata - for instance, `pandas.DataFrame` can store column names, while this is not possible in `numpy.ndarray`.
Both scitypes and mtypes are encoded by strings in `sktime`, for easy reference.
This section introduces the mtypes for the following scitypes:
* `"Series"`, the `sktime` scitype for time series of any kind
* `"Panel"`, the `sktime` scitype for time series panels of any kind Section 1.1: Time series - the `"Series"` scitype
The major representations of time series in `sktime` are:
* `"pd.DataFrame"` - a uni- or multivariate `pandas.DataFrame`, with rows = time points, cols = variables
* `"pd.Series"` - a (univariate) `pandas.Series`, with entries corresponding to different time points
* `"np.ndarray"` - a 2D `numpy.ndarray`, with rows = time points, cols = variables
`pandas` objects must have one of the following `pandas` index types:
`Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`; if `DatetimeIndex`, the `freq` attribute must be set.
`numpy.ndarray` 2D arrays are interpreted as having a `RangeIndex` on the rows, and are generally equivalent to the `pandas.DataFrame` obtained after default coercion using the `pandas.DataFrame` constructor.
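As an illustration (a hand-constructed sketch, not taken from the `sktime` example registry), compliant `Series` containers can be built directly from `pandas` and `numpy`, with the index conventions above as the only additional requirement:
###Code
import numpy as np
import pandas as pd

# "pd.Series" mtype: univariate series with a PeriodIndex
y = pd.Series([1.0, 2.0, 3.0], index=pd.period_range("2000-01", periods=3, freq="M"))

# "pd.DataFrame" mtype: bivariate series, rows = time points, cols = variables
df = pd.DataFrame(
    {"a": [1.0, 2.0, 3.0], "b": [4.0, 5.0, 6.0]},
    index=pd.period_range("2000-01", periods=3, freq="M"),
)

# "np.ndarray" mtype: 2D array, rows = time points, cols = variables
arr = np.array([[1.0, 4.0], [2.0, 5.0], [3.0, 6.0]])
###Output
_____no_output_____
###Markdown
The examples below are retrieved from `sktime`'s own example registry via `get_examples`: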
###Code
# import to retrieve examples
from sktime.datatypes import get_examples
###Output
_____no_output_____
###Markdown
Section 1.1.1: Time series - the `"pd.DataFrame"` mtype
In the `"pd.DataFrame"` mtype, time series are represented by an in-memory container `obj: pandas.DataFrame` as follows.
* structure convention: `obj.index` must be monotonic, and one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`.
* variables: columns of `obj` correspond to different variables
* variable names: column names `obj.columns`
* time points: rows of `obj` correspond to different, distinct time points
* time index: `obj.index` is interpreted as a time index.
* capabilities: can represent multivariate series; can represent unequally spaced series Example of a univariate series in `"pd.DataFrame"` representation.
The single variable has name `"a"`, and is observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="pd.DataFrame", as_scitype="Series")[0]
###Output
_____no_output_____
###Markdown
Example of a bivariate series in `"pd.DataFrame"` representation.
This series has two variables, named `"a"` and `"b"`. Both are observed at the same four time points 0, 1, 2, 3.
###Code
get_examples(mtype="pd.DataFrame", as_scitype="Series")[1]
###Output
_____no_output_____
###Markdown
Section 1.1.2: Time series - the `"pd.Series"` mtype
In the `"pd.Series"` mtype, time series are represented by an in-memory container `obj: pandas.Series` as follows.
* structure convention: `obj.index` must be monotonic, and one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex`.
* variables: there is a single variable, corresponding to the values of `obj`. Only univariate series can be represented.
* variable names: by default, there is no column name. If needed, a variable name can be provided as `obj.name`.
* time points: entries of `obj` correspond to different, distinct time points
* time index: `obj.index` is interpreted as a time index.
* capabilities: cannot represent multivariate series; can represent unequally spaced series Example of a univariate series in `"pd.Series"` mtype representation.
The single variable has name `"a"`, and is observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="pd.Series", as_scitype="Series")[0]
###Output
_____no_output_____
###Markdown
Section 1.1.3: Time series - the `"np.ndarray"` mtype
In the `"np.ndarray"` mtype, time series are represented by an in-memory container `obj: np.ndarray` as follows.
* structure convention: `obj` must be 2D, i.e., `obj.shape` must have length 2. This is also true for univariate time series.
* variables: variables correspond to columns of `obj`.
* variable names: the `"np.ndarray"` mtype cannot represent variable names.
* time points: the rows of `obj` correspond to different, distinct time points.
* time index: The time index is implicit and by-convention. The `i`-th row (for an integer `i`) is interpreted as an observation at the time point `i`.
* capabilities: can represent multivariate series; cannot represent unequally spaced series Example of a univariate series in `"np.ndarray"` mtype representation.
There is a single (unnamed) variable, it is observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="np.ndarray", as_scitype="Series")[0]
###Output
_____no_output_____
###Markdown
Example of a bivariate series in `"np.ndarray"` mtype representation.
There are two (unnamed) variables, they are both observed at four time points 0, 1, 2, 3.
###Code
get_examples(mtype="np.ndarray", as_scitype="Series")[1]
###Output
_____no_output_____
###Markdown
Section 1.2: Time series panels - the `"Panel"` scitype
The major representations of time series panels in `sktime` are:
* `"pd-multiindex"` - a `pandas.DataFrame`, with row multi-index (`instances`, `timepoints`), cols = variables
* `"numpy3D"` - a 3D `np.ndarray`, with axis 0 = instances, axis 1 = variables, axis 2 = time points
* `"df-list"` - a `list` of `pandas.DataFrame`, with list index = instances, data frame rows = time points, data frame cols = variables
These representations are considered primary representations in `sktime` and are core to internal computations.
There are further, minor representations of time series panels in `sktime`:
* `"nested_univ"` - a `pandas.DataFrame`, with `pandas.Series` in cells. data frame rows = instances, data frame cols = variables, and series axis = time points
* `"numpyflat"` - a 2D `np.ndarray` with rows = instances, and columns indexed by a pair index of (variables, time points). This format is only being converted to and cannot be converted from (since number of variables and time points may be ambiguous).
* `"pd-wide"` - a `pandas.DataFrame` in wide format: has column multi-index (variables, time points), rows = instances; the "variables" index can be omitted for univariate time series
* `"pd-long"` - a `pandas.DataFrame` in long format: has cols `instances`, `timepoints`, `variable`, `value`; entries in `value` are indexed by tuples of values in (`instances`, `timepoints`, `variable`).
The minor representations are currently not fully consolidated in-code and are not discussed further below. Contributions are appreciated. Section 1.2.1: Time series panels - the `"pd-multiindex"` mtype
In the `"pd-multiindex"` mtype, time series panels are represented by an in-memory container `obj: pandas.DataFrame` as follows.
* structure convention: `obj.index` must be a pair multi-index of type `(RangeIndex, t)`, where `t` is one of `Int64Index`, `RangeIndex`, `DatetimeIndex`, `PeriodIndex` and monotonic. `obj.index` must have names `("instances", "timepoints")`.
* instances: rows with the same `"instances"` index correspond to the same instance; rows with different `"instances"` index correspond to different instances.
* instance index: the first element of pairs in `obj.index` is interpreted as an instance index.
* variables: columns of `obj` correspond to different variables
* variable names: column names `obj.columns`
* time points: rows of `obj` with the same `"timepoints"` index correspond to the same time point; rows of `obj` with different `"timepoints"` index correspond to different time points.
* time index: the second element of pairs in `obj.index` is interpreted as a time index.
* capabilities: can represent panels of multivariate series; can represent unequally spaced series; can represent panels of unequally supported series; cannot represent panels of series with different sets of variables. Example of a panel of multivariate series in `"pd-multiindex"` mtype representation.
The panel contains three multivariate series, with instance indices 0, 1, 2. All series have two variables with names `"var_0"`, `"var_1"`. All series are observed at three time points 0, 1, 2.
###Code
get_examples(mtype="pd-multiindex", as_scitype="Panel")[0]
###Output
_____no_output_____
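###Markdown
The same structure can also be hand-constructed (a minimal sketch, structurally equivalent to the example above but not taken from the example registry):
###Code
import pandas as pd

# "pd-multiindex" panel with 2 instances, 2 variables, 3 time points each
idx = pd.MultiIndex.from_product([[0, 1], [0, 1, 2]], names=["instances", "timepoints"])
X_manual = pd.DataFrame(
    {
        "var_0": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
        "var_1": [7.0, 8.0, 9.0, 10.0, 11.0, 12.0],
    },
    index=idx,
)
X_manual
###Output
_____no_output_____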
###Markdown
Section 1.2.2: Time series panels - the `"numpy3D"` mtype
In the `"numpy3D"` mtype, time series panels are represented by an in-memory container `obj: np.ndarray` as follows.
* structure convention: `obj` must be 3D, i.e., `obj.shape` must have length 3.
* instances: instances correspond to axis 0 elements of `obj`.
* instance index: the instance index is implicit and by-convention. The `i`-th element of axis 0 (for an integer `i`) is interpreted as indicative of observing instance `i`.
* variables: variables correspond to axis 1 elements of `obj`.
* variable names: the `"numpy3D"` mtype cannot represent variable names.
* time points: time points correspond to axis 2 elements of `obj`.
* time index: the time index is implicit and by-convention. The `i`-th element of axis 2 (for an integer `i`) is interpreted as an observation at the time point `i`.
* capabilities: can represent panels of multivariate series; cannot represent unequally spaced series; cannot represent panels of unequally supported series; cannot represent panels of series with different sets of variables. Example of a panel of multivariate series in `"numpy3D"` mtype representation.
The panel contains three multivariate series, with instance indices 0, 1, 2. All series have two variables (unnamed). All series are observed at three time points 0, 1, 2.
###Code
get_examples(mtype="numpy3D", as_scitype="Panel")[0]
###Output
_____no_output_____
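###Markdown
A hand-constructed equivalent (a minimal sketch illustrating the instances x variables x time points axis convention):
###Code
import numpy as np

# "numpy3D" panel with 2 instances, 2 variables, 3 time points
X_3d = np.arange(12, dtype=float).reshape(2, 2, 3)
X_3d.shape
###Output
_____no_output_____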
###Markdown
Section 1.2.3: Time series panels - the `"df-list"` mtype
In the `"df-list"` mtype, time series panels are represented by an in-memory container `obj: List[pandas.DataFrame]` as follows.
* structure convention: `obj` must be a list of `pandas.DataFrames`. Individual list elements of `obj` must follow the `"pd.DataFrame"` mtype convention for the `"Series"` scitype.
* instances: instances correspond to different list elements of `obj`.
* instance index: the instance index of an instance is the list index at which it is located in `obj`. That is, the data at `obj[i]` correspond to observations of the instance with index `i`.
* variables: columns of `obj[i]` correspond to different variables available for instance `i`.
* variable names: column names `obj[i].columns` are the names of variables available for instance `i`.
* time points: rows of `obj[i]` correspond to different, distinct time points, at which instance `i` is observed.
* time index: `obj[i].index` is interpreted as the time index for instance `i`.
* capabilities: can represent panels of multivariate series; can represent unequally spaced series; can represent panels of unequally supported series; can represent panels of series with different sets of variables. Example of a panel of multivariate series in `"df-list"` mtype representation.
The panel contains three multivariate series, with instance indices 0, 1, 2. All series have two variables with names `"var_0"`, `"var_1"`. All series are observed at three time points 0, 1, 2.
###Code
get_examples(mtype="df-list", as_scitype="Panel")[0]
###Output
_____no_output_____
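###Markdown
As a further illustration (a hand-constructed sketch), the `"df-list"` mtype can also hold instances observed at different numbers of time points, which the `"numpy3D"` mtype cannot represent:
###Code
import pandas as pd

# "df-list" panel with two instances of unequal length
X_unequal = [
    pd.DataFrame({"var_0": [1.0, 2.0, 3.0]}),  # instance 0: three time points
    pd.DataFrame({"var_0": [4.0, 5.0]}),       # instance 1: two time points
]
X_unequal
###Output
_____no_output_____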
###Markdown
Section 2: validity checking and mtype conversion
`sktime`'s `datatypes` module provides users with generic functionality for:
* checking in-memory containers against mtype conventions, with informative error messages that help moving data to the right format
* converting different mtypes to each other, for a given scitype
In this section, this functionality and intended usage workflows are presented. Section 2.1: Preparing data, checking in-memory containers for validity
`sktime`'s `datatypes` module provides convenient functionality for users to check validity of their in-memory data containers, using the `check_is_mtype` and `check_raise` functions. Both functions provide generic validity checking functionality: `check_is_mtype` returns metadata and potential issues as return arguments, while `check_raise` directly produces informative error messages in case a container does not comply with a given `mtype`.
A recommended notebook workflow to ensure that a given data container is compliant with `sktime` `mtype` specification is as follows:
1. load the data in an in-memory data container
2. identify the `scitype`, e.g., is this supposed to be a time series (`Series`) or a panel of time series (`Panel`)
3. select the target `mtype` (see Section 1 for a list), and attempt to manually reformat the data to comply with the `mtype` specification if it is not already compliant
4. run `check_raise` on the data container, to check whether it complies with the `mtype` and `scitype`
5. if an error is raised, repeat 3 and 4 until no error is raised Section 2.1.1: validity checking, example 1 (simple mistake)
Suppose we have the following `numpy.ndarray` representing a univariate time series:
###Code
import numpy as np
y = np.array([1, 6, 3, 7, 2])
###Output
_____no_output_____
###Markdown
to check compatibility with sktime:
(instruction: uncomment and run the code to see the informative error message)
###Code
from sktime.datatypes import check_raise
# check_raise(y, mtype="np.ndarray")
###Output
_____no_output_____
###Markdown
this tells us that `sktime` uses 2D numpy arrays for time series, if the `np.ndarray` mtype is used. While most methods provide convenience functionality to do this coercion automatically, the "correct" format would be 2D as follows:
###Code
check_raise(y.reshape(-1, 1), mtype="np.ndarray")
###Output
_____no_output_____
###Markdown
For use in your own code, or to obtain additional metadata, the error message can be obtained programmatically using the `check_is_mtype` function:
###Code
from sktime.datatypes import check_is_mtype
check_is_mtype(y, mtype="np.ndarray", return_metadata=True)
###Output
_____no_output_____
###Markdown
and metadata is produced if the argument passes the validity check:
###Code
check_is_mtype(y.reshape(-1, 1), mtype="np.ndarray", return_metadata=True)
###Output
_____no_output_____
###Markdown
Note: if the name of the mtype is ambiguous and can refer to multiple scitypes, the additional argument `scitype` must be provided. This should not be the case for any common in-memory containers; we mention this for completeness.
###Code
check_is_mtype(y, mtype="np.ndarray", scitype="Series")
###Output
_____no_output_____
###Markdown
Section 2.1.2: validity checking, example 2 (non-obvious mistake)
Suppose we have converted our data into a multi-index panel, i.e., we want to have a `Panel` of mtype `pd-multiindex`.
###Code
import pandas as pd
cols = ["instances", "time points"] + [f"var_{i}" for i in range(2)]
X = pd.concat(
[
pd.DataFrame([[0, 0, 1, 4], [0, 1, 2, 5], [0, 2, 3, 6]], columns=cols),
pd.DataFrame([[1, 0, 1, 4], [1, 1, 2, 55], [1, 2, 3, 6]], columns=cols),
pd.DataFrame([[2, 0, 1, 42], [2, 1, 2, 5], [2, 2, 3, 6]], columns=cols),
]
).set_index(["instances", "time points"])
###Output
_____no_output_____
###Markdown
It is not obvious whether `X` satisfies the `pd-multiindex` specification, so let's check:
(instruction: uncomment and run the code to see the informative error message)
###Code
from sktime.datatypes import check_raise
# check_raise(X, mtype="pd-multiindex")
###Output
_____no_output_____
###Markdown
The informative error message highlights a typo in one of the multi-index columns, so we do this:
###Code
X.index.names = ["instances", "timepoints"]
###Output
_____no_output_____
###Markdown
Now the validity check passes:
###Code
check_raise(X, mtype="pd-multiindex")
###Output
_____no_output_____
###Markdown
Section 2.1.3: inferring the mtype
`sktime` also provides functionality to infer the mtype of an in-memory data container, which is useful in case one is sure that the container is compliant but one has forgotten the exact string, or in a case where one would like to know whether an in-memory container is already in some supported, compliant format. For this, only the scitype needs to be specified:
###Code
from sktime.datatypes import mtype
mtype(X, as_scitype="Panel")
###Output
_____no_output_____
###Markdown
Section 2.2: conversion between mtypes
`sktime`'s `datatypes` module also offers unified conversion functionality between mtypes. This is useful for users as well as for method developers.
The `convert` function requires specifying the mtype to convert from, and the mtype to convert to. The `convert_to` function only requires specifying the mtype to convert to, automatically inferring the mtype of the input if it can be inferred. `convert_to` should be used if the input can have multiple mtypes. Section 2.2.1: simple conversion
Example: converting a `numpy3D` panel of time series to `pd-multiindex` mtype:
###Code
from sktime.datatypes import get_examples
X = get_examples(mtype="numpy3D", as_scitype="Panel")[0]
X
from sktime.datatypes import convert
convert(X, from_type="numpy3D", to_type="pd-multiindex")
from sktime.datatypes import convert_to
convert_to(X, to_type="pd-multiindex")
###Output
_____no_output_____
###Markdown
Section 2.2.2: advanced conversion features
`convert_to` also allows specifying multiple output types. The `to_type` argument can be a list of mtypes. In that case, the input is passed through unchanged if its mtype is on the list; if the mtype of the input is not on the list, it is converted to the mtype which is the first element of the list.
Example: converting a panel of time series to either `"pd-multiindex"` or `"numpy3D"`. If the input is `"numpy3D"`, it remains unchanged. If the input is `"df-list"`, it is converted to `"pd-multiindex"`.
###Code
from sktime.datatypes import get_examples
X = get_examples(mtype="numpy3D", as_scitype="Panel")[0]
X
from sktime.datatypes import convert_to
convert_to(X, to_type=["pd-multiindex", "numpy3D"])
X = get_examples(mtype="df-list", as_scitype="Panel")[0]
X
convert_to(X, to_type=["pd-multiindex", "numpy3D"])
###Output
_____no_output_____
###Markdown
Section 2.2.3: inspecting implemented conversions
Currently, conversions are work in progress, and not all possible conversions are available - contributions are welcome.
To see which conversions are currently implemented for a scitype, use the `_conversions_defined` developer method from the `datatypes._convert` module. This produces a table with a "1" if conversion from the mtype in the row to the mtype in the column is implemented.
###Code
from sktime.datatypes._convert import _conversions_defined
_conversions_defined(scitype="Panel")
###Output
_____no_output_____
###Markdown
Section 3: loading data sets
`sktime`'s `datasets` module allows loading datasets for testing and benchmarking. This includes:
* example data sets that ship directly with `sktime`
* downloaders for data sets from common repositories
All data retrieved in this way are in `sktime` compatible in-memory and/or file formats.
Currently, no systematic tagging and registry retrieval for the available data sets is implemented - contributions to this would be very welcome. Section 3.1: forecasting data sets
`sktime`'s `datasets` module currently allows loading the following forecasting example data sets:
| dataset name | loader function | properties |
|----------|:-------------:|------:|
| Box/Jenkins airline data | `load_airline` | univariate |
| Lynx sales data | `load_lynx` | univariate |
| Shampoo sales data | `load_shampoo_sales` | univariate |
| Pharmaceutical Benefit Scheme data | `load_PBS_dataset` | univariate |
| Longley US macroeconomic data | `load_longley` | multivariate |
| MTS consumption/income data | `load_uschange` | multivariate |
`sktime` currently has no connectors to forecasting data repositories - contributions are much appreciated.
Forecasting data sets are all of `Series` scitype; they can be univariate or multivariate.
Loaders for univariate data have no arguments, and always return the data in the `"pd.Series"` mtype:
###Code
from sktime.datasets import load_airline
load_airline()
###Output
_____no_output_____
###Markdown
Loaders for multivariate data can be called in two ways:
* without an argument, in which case a multivariate series of `"pd.DataFrame"` mtype is returned:
###Code
from sktime.datasets import load_longley
load_longley()
###Output
_____no_output_____
###Markdown
* with an argument `y_name` that must coincide with one of the column/variable names, in which case a pair of series `y`, `X` is returned, with `y` of `"pd.Series"` mtype, and `X` of `"pd.DataFrame"` mtype - this is convenient for univariate forecasting with exogenous variables.
###Code
y, X = load_longley(y_name="TOTEMP")
y
X
###Output
_____no_output_____
###Markdown
Section 3.2: time series classification data sets
`sktime`'s `datasets` module currently allows loading the following time series classification example data sets:
| dataset name | loader function | properties |
|----------|:-------------:|------:|
| Appliance power consumption data | `load_acsf1` | univariate, equal length/index |
| Arrowhead shape data | `load_arrow_head` | univariate, equal length/index |
| Gunpoint motion data | `load_gunpoint` | univariate, equal length/index |
| Italy power demand data | `load_italy_power_demand` | univariate, equal length/index |
| Japanese vowels data | `load_japanese_vowels` | univariate, equal length/index |
| OSUleaf leaf shape data | `load_osuleaf` | univariate, equal length/index |
| Basic motions data | `load_basic_motions` | multivariate, equal length/index |
Currently, there are no unequal length or unequal index time series classification example data directly in `sktime`.
`sktime` also provides a full interface to the UCR/UEA time series data set archive, via the `load_UCR_UEA_dataset` function.
The UCR/UEA archive also contains time series classification data sets which are multivariate, or unequal length/index (in either combination).
Section 3.2.2: time series classification data sets in `sktime`
Time series classification data sets consist of a panel of time series of `Panel` scitype, together with classification labels, one per time series.
If a loader is invoked with minimal arguments, the data are returned as `"nested_univ"` mtype, with labels and series to classify in the same `pd.DataFrame`. Using the `return_X_y=True` argument, the data are returned separated into features `X` and labels `y`, with `X` a `Panel` of `nested_univ` mtype, and `y` a `sklearn`-compatible numpy vector of labels:
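For illustration, the combined form can be requested explicitly (a minimal sketch based on the description above; the exact column layout of the combined frame may differ by `sktime` version):
###Code
from sktime.datasets import load_arrow_head

# explicit return_X_y=False: series and class labels come back together in one nested DataFrame
load_arrow_head(return_X_y=False)
###Output
_____no_output_____
###Markdown
With `return_X_y=True`, features and labels are returned separately: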
###Code
from sktime.datasets import load_arrow_head
X, y = load_arrow_head(return_X_y=True)
X
y
###Output
_____no_output_____
###Markdown
The panel can be converted from `"nested_univ"` mtype to other mtype formats, using `datatypes.convert` or `convert_to` (see above):
###Code
from sktime.datatypes import convert_to
convert_to(X, to_type="pd-multiindex")
###Output
_____no_output_____
###Markdown
Data set loaders can be invoked with the `split` parameter to obtain reproducible training and test sets for comparison across studies. If `split="train"`, a pre-defined training set is retrieved; if `split="test"`, a pre-defined test set is retrieved.
###Code
X_train, y_train = load_arrow_head(return_X_y=True, split="train")
X_test, y_test = load_arrow_head(return_X_y=True, split="test")
# this retrieves training and test X/y for reproducible use in studies
###Output
_____no_output_____
###Markdown
Section 3.2.3: time series classification data sets from the UCR/UEA time series classification repository
The `load_UCR_UEA_dataset` utility will download datasets from the UCR/UEA time series classification repository and make them available as in-memory datasets, with the same syntax as `sktime` native data set loaders.
Datasets are indexed by unique string identifiers, which can be inspected on the [repository itself](https://www.timeseriesclassification.com/), or via the register in the `datasets.tsc_dataset_names` module, by property:
###Code
from sktime.datasets.tsc_dataset_names import univariate
###Output
_____no_output_____
###Markdown
The imported variables are all lists of strings which contain the unique string identifiers of datasets with certain properties, as follows:
| register name | uni-/multivariate | equal/unequal length | with/without missing values |
|----------|:-------------:|------:|------:|
| `univariate` | only univariate | both included | both included |
| `multivariate` | only multivariate | both included | both included |
| `univariate_equal_length` | only univariate | only equal length | both included |
| `univariate_variable_length` | only univariate | only unequal length | both included |
| `univariate_missing_values` | only univariate | both included | only with missing values |
| `multivariate_equal_length` | only multivariate | only equal length | both included |
| `multivariate_unequal_length` | only multivariate | only unequal length | both included |
Lookup and retrieval using these lists is, admittedly, a bit inconvenient - contributions to `sktime` to write lookup functions such as `all_estimators` or `all_tags`, based on capability or property tags attached to datasets, would be very much appreciated.
An example list is displayed below:
###Code
univariate
###Output
_____no_output_____
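###Markdown
As an illustration of the manual lookup described above (a minimal sketch using only the register names listed in the table), the registers can be combined with ordinary set operations:
###Code
from sktime.datasets.tsc_dataset_names import (
    univariate_equal_length,
    univariate_missing_values,
)

# univariate datasets that are equal length *and* contain missing values
sorted(set(univariate_equal_length) & set(univariate_missing_values))
###Output
_____no_output_____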
###Markdown
The loader function `load_UCR_UEA_dataset` behaves exactly as `sktime` data loaders, with an additional argument `name` that should be set to one of the unique identifying strings for the UCR/UEA datasets, for instance:
###Code
from sktime.datasets import load_UCR_UEA_dataset
X, y = load_UCR_UEA_dataset(name="Yoga", return_X_y=True)
###Output
_____no_output_____ |
edx/DEV236x_Introduction_to_Python_Unit_1/MOD03_1-4.3_Intro_Python.ipynb | ###Markdown
1-4.3 Intro Python Conditionals - **`if`, `else`, `pass`** - Conditionals using Boolean String Methods - Comparison operators - **String comparisons**----- > Student will be able to - **control code flow with `if`... `else` conditional logic** - using Boolean string methods (`.isupper(), .isalpha(), startswith()...`) - using comparison (`>, =, <=, ==, !=`) - **using Strings in comparisons** Concept String Comparisons- Strings can be equal `==` or unequal `!=`- Strings can be greater than `>` or less than `<` - alphabetically `"A"` is less than `"B"`- lower case `"a"` is greater than upper case `"A"` Examples
###Code
# review and run code
"hello" < "Hello"
# review and run code
"Aardvark" > "Zebra"
# review and run code
'student' != 'Student'
# review and run code
print("'student' >= 'Student' is", 'student' >= 'Student')
print("'student' != 'Student' is", 'student' != 'Student')
# review and run code
"Hello " + "World!" == "Hello World!"
###Output
_____no_output_____
###Markdown
Task 1 String Comparisons
###Code
msg = "Hello"
# [ ] print the True/False results of testing if msg string equals "Hello" string
print(msg + ' equals "Hello" string is', msg == "Hello")
greeting = "Hello"
# [ ] get input for variable named msg, and ask user to 'Say "Hello"'
# [ ] print the results of testing if msg string equals greeting string
msg = input('Say "Hello": ')
print("Your input is equals", greeting, 'is', msg == greeting)
###Output
Say "Hello": Privet
Your input is equals Hello is False
###Markdown
Concept Conditionals: String comparisons with `if`[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/d66365b5-03fa-4d0d-a455-5adba8b8fb1b/Unit1_Section4.3-string-compare-if.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/d66365b5-03fa-4d0d-a455-5adba8b8fb1b/Unit1_Section4.3-string-compare-if.vtt","srclang":"en","kind":"subtitles","label":"english"}]) Examples
###Code
# [ ] review and run code
msg = "Save the notebook"
if msg.lower() == "save the notebook":
print("message as expected")
else:
print("message not as expected")
# [ ] review and run code
msg = "Save the notebook"
prediction = "save the notebook"
if msg.lower() == prediction.lower():
print("message as expected")
else:
print("message not as expected")
###Output
message as expected
###Markdown
Task 2 Conditionals: comparison operators with if
###Code
# [ ] get input for a variable, answer, and ask user 'What is 8 + 13? : '
# [ ] print messages for correct answer "21" or incorrect answer using if/else
# note: input returns a "string"
answer = input("What is 8 + 13? : ")
if answer == '21':
print("Your answer is coorect!")
else:
print("Your answer is incorrect.")
###Output
What is 8 + 13? : 22
Your answer is incorrect.
###Markdown
Task 3 Program: True False Quiz Function Call the tf_quiz function with 2 arguments - T/F question string - answer key string like "T" Return a string: "correct" or "incorrect"[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/3805cc48-f5c9-4ec8-86ad-1e1db45788e4/Unit1_Section4.3-TF-quiz.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/3805cc48-f5c9-4ec8-86ad-1e1db45788e4/Unit1_Section4.3-TF-quiz.vtt","srclang":"en","kind":"subtitles","label":"english"}]) Define and use `tf_quiz()` function - **`tf_quiz()`** has **2 parameters** which are both string arguments - **`question`**: a string containing a T/F question like "Should save your notebook after edit?(T/F): " - **`correct_ans`**: a string indicating the *correct answer*, either **"T"** or **"F"** - **`tf_quiz()`** returns a string: "correct" or "incorrect" - Test tf_quiz(): **create a T/F question** (*or several!*) to **call tf_quiz()**
###Code
# [ ] Create the program, run tests
def tf_quiz(question, correct_ans):
msg = "(T/F) " + question + ' : '
answer = input(msg)
if answer.lower() == correct_ans.lower():
return 'correct'
else:
return 'incorrect'
quiz_eval = tf_quiz("There are two suns on Tatooine", 'T')
print("your answer is", quiz_eval)
###Output
(T/F) There are two suns on Tatooine : t
your answer is correct
|
extra/clustering.ipynb | ###Markdown
Read in parquet files from pre-processing
###Code
# imports needed below (assumed; no import cell is shown in this notebook)
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# read the pre-processed dataframes
templates = pd.read_parquet('data/processed_dfs/templates.parquet' )
sentences = pd.read_parquet('data/processed_dfs/sentences.parquet')
mentions = pd.read_parquet('data/processed_dfs/mentions.parquet')
umls = pd.read_parquet('data/processed_dfs/umls.parquet')
sentences.head()
mentions.head()
templates.head()
###Output
_____no_output_____
###Markdown
To make templates:

1. Make an empty data frame with the fields to hold template info
2. For each sentence:
   * Get the predicates for that sentence
   * Trim the frameset after the '.'
   * Get the mentions
   * Get mention type
   * Append umls cui to end of mention (just take the first one)
   * Order the predicates and mentions by begin offset
   * Combine into a string separated by spaces
   * Write the template and semantic template to the dataframe
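The cell below is an illustrative sketch of this procedure, not the preprocessing code that produced the parquet files above; it assumes a hypothetical `predicates` dataframe (with columns `sent_id`, `frameset`, `begin`) that is not loaded in this notebook, alongside the `mentions` columns used elsewhere:
###Code
# illustrative sketch only -- `predicates` is an assumed dataframe, not loaded in this notebook
import pandas as pd

def build_template_for_sentence(sent_id, predicates, mentions):
    # predicates: trim the frameset after the '.'
    preds = predicates[predicates.sent_id == sent_id].copy()
    preds['token'] = preds['frameset'].str.split('.').str[0]

    # mentions: append the (first) umls cui to the mention type
    ments = mentions[mentions.sent_id == sent_id].copy()
    ments['token'] = ments['mention_type'] + '-' + ments['cui'].astype(str)

    # order predicates and mentions by begin offset and join into one template string
    tokens = pd.concat([preds[['begin', 'token']], ments[['begin', 'token']]])
    return ' '.join(tokens.sort_values('begin')['token'])
###Output
_____no_output_____
###Markdown
Returning to the pre-built templates loaded above: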
###Code
print(len(templates))
# templates = templates.drop_duplicates('sem_template')
# print(len(templates))
def get_vectors(df):
tf = TfidfVectorizer()
return tf.fit_transform(df['sem_template'])
# Only use unique templates
vectors = get_vectors(templates)
vecd = vectors.todense()
print(vectors.shape)
cluster_sizes = [70, 80, 90, 100, 110, 120, 125, 130, 140, 150, 200]
for n_cluster in cluster_sizes:
km = KMeans( init='k-means++', max_iter=100, n_init=1,
n_clusters=n_cluster, verbose=False)
km.fit(vectors)
predictions = km.predict(vectors)
sil_score = silhouette_score(vectors, predictions, metric='euclidean')
print(f"Silhouette score for n_clusters={n_cluster}:")
print(sil_score)
km = KMeans( init='k-means++', max_iter=100, n_init=1,
n_clusters=120, verbose=False)
km.fit(vectors)
predictions = km.predict(vectors)
sil_score = silhouette_score(vectors, predictions, metric='euclidean')
# print(km.cluster_centers_.shape)
# order_centroids = km.cluster_centers_.argsort()[:, ::-1]
# terms = tf.get_feature_names()
# for i in range(50):
# print("Cluster %d:" % i, end='')
# for ind in order_centroids[i, :15]:
# print(' %s' % terms[ind], end='')
# print()
predictions = km.predict(vectors)
silhouette_score(vectors, predictions, metric='euclidean')
templates['cluster'] = predictions
templates.head()
sentences.shape
###Output
_____no_output_____
###Markdown
Add cluster labels to sentences and mentions (entities)
###Code
sentences = sentences.merge(templates[['sent_id', 'cluster']], on='sent_id')
mentions = mentions.merge(templates[['sent_id', 'cluster']], on='sent_id')
sentences.head()
mentions.head()
###Output
_____no_output_____
###Markdown
Get the size of each cluster
###Code
pdf = pd.DataFrame(predictions, columns=['cluster'])
cluster_counts = pdf.groupby('cluster').size().reset_index(name='count')
cluster_counts['count'].plot(kind='bar')
cluster_counts['frequency'] = cluster_counts['count'] / cluster_counts['count'].sum()
cluster_counts.head()
###Output
_____no_output_____
###Markdown
Get the distribution of CUIs in each cluster How many clusters on average does a CUI appear in
###Code
cui_clust_freq = mentions.groupby(['cui', 'cluster']).size().reset_index(name='cluster_count')
cui_clust_freq.sort_values('cluster_count', ascending=False).head(10)
num_clusters_per_cui = cui_clust_freq.groupby('cui').size().reset_index(name='num_clusters')
# avg_num_clusters = .agg({'num_clusters': 'mean'})
num_clusters_per_cui.sort_values('num_clusters', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Max and average number of clusters that CUIs appear in
###Code
print("Max number of clusters that a cui appears in")
print(num_clusters_per_cui.agg({'num_clusters': 'max'}))
print('Average number of clusters that cuis appear in:')
print(num_clusters_per_cui.agg({'num_clusters': 'mean'}))
max_clusters = num_clusters_per_cui[num_clusters_per_cui['num_clusters'] == 23]
max_clusters
###Output
_____no_output_____
###Markdown
The preferred text of cuis that occur in the most number of clusters
###Code
mentions[mentions['cui'].isin(max_clusters['cui'])]['preferred_text'].unique()
###Output
_____no_output_____
###Markdown
Average number of unique CUIs in a cluster
###Code
num_cuis_in_cluster_freq = cui_clust_freq[['cui', 'cluster']] \
.groupby('cluster') \
.size() \
.reset_index(name="num_cuis_in_cluster")
num_cuis_in_cluster_freq.sort_values('num_cuis_in_cluster', ascending=False)
num_cuis_in_cluster_freq.agg({'num_cuis_in_cluster': 'mean'})
###Output
_____no_output_____
###Markdown
Get the cluster label frequency by sentence position
###Code
cluster_label_by_sentence_pos = pd.crosstab(templates['cluster']
,templates['sentence_number']
).apply(lambda x: x / x.sum(), axis=0)
cluster_label_by_sentence_pos
###Output
_____no_output_____
###Markdown
Get the number of documents in each cluster
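A sketch of the count described by this heading, using the cluster labels merged into `mentions` above:
###Code
# number of distinct documents represented in each cluster
docs_per_cluster = (
    mentions.groupby('cluster')['doc_id']
    .nunique()
    .reset_index(name='num_docs')
    .sort_values('num_docs', ascending=False)
)
docs_per_cluster.head()
###Output
_____no_output_____
###Markdown
Exploring one cluster in more detail: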
###Code
mentions[mentions['cluster'] == 1]
umls[umls['xmi_id'].isin([17309, 11768, 11337, 4456, 15539, 16616, 10061, 13422]) ]
sentences[sentences['sent_id'] == 'f918cc4a-2f8b-4c5e-a904-3de84efe714b']
notes = pd.read_parquet('data/note-events.parquet', engine='fastparquet')
notes[notes['ROW_ID'] == 333908]['TEXT'].iloc[0][1368:1372]
###Output
_____no_output_____
###Markdown
Generating Notes Get all the entities for the document
###Code
doc_ids = templates['doc_id'].unique()
notes = notes[notes['ROW_ID'].isin(doc_ids)]
notes = notes.reset_index(drop=True)
# notes = notes.drop(['CHARTDATE','CHARTTIME','STORETIME','CGID','ISERROR'],axis=1)
doc = notes.sample(n=1)
doc_id = doc['ROW_ID'].iloc[0]
doc_id
###Output
_____no_output_____
###Markdown
Drop templates that contain entities not in the document
###Code
ents_in_doc = mentions[mentions['doc_id'] == doc['ROW_ID'].iloc[0]]
ments_in_doc = ents_in_doc.mention_type.unique()
# print(ments_in_doc)
ents_in_doc.head()
# get metions where mention_type is in doc entities types
print(len(mentions))
doc_ments = mentions[mentions.cui.isin(ents_in_doc.cui.unique())]
# print(len(doc_ments))
doc_ments.head()
# get templates that have the corresponding sentence ids from doc_ments
template_candidates = templates[templates.sent_id.isin(doc_ments.sent_id)]
template_candidates.head()
###Output
_____no_output_____
###Markdown
Choose a cluster based on cluster frequency for that sentence position
###Code
candidate_cluster_labels = template_candidates.cluster.sort_values().unique()
candidate_clusters = cluster_label_by_sentence_pos.iloc[candidate_cluster_labels]
sent_pos = 0
# remove cluster labels not present in template candidates
selected_cluster = candidate_clusters.sample(
n=1,
weights=candidate_clusters.loc[:,sent_pos]
).iloc[0].name
selected_cluster
# templates_in_cluster = template_candidates[template_candidates['cluster'] == selected_cluster.iloc[0].index]
cluster_templates = template_candidates[template_candidates.cluster == selected_cluster]
cluster_templates.head()
###Output
_____no_output_____
###Markdown
Choose a template from the cluster based on frequency for that sentence position
###Code
# templates_at_pos = cluster_templates[cluster_templates.sentence_number == sent_pos]
template = cluster_templates.sample(n=1)
template
# sentences[sentences.sent_id == 'deef8a81-b222-4d1f-aa3f-7dfc160cb428'].iloc[0].text
###Output
_____no_output_____
###Markdown
Fill template blank Choosing text: select text to fill the template blank based on the frequency of strings for the CUI associated with the mention
###Code
# get mentions in this template
template_id = template.iloc[0]['sent_id']
ments_in_temp = mentions[mentions.sent_id == template_id]
ments_in_temp
# Get the sentence for that template
raw_sentence = sentences[sentences.sent_id == template_id]
raw_sentence.iloc[0].text
# Select entities from entities in the document that match that entity type
#
ments_in_temp
# ments_in_temp.drop(ments_in_temp.loc[482].name, axis=0)
concepts = umls[umls.cui == ments_in_temp.iloc[0].cui]
concepts.head()
# ents_in_doc
# txt_counts.sample(n=1, weights=txt_counts.cnt).iloc[0].text
def template_filler(template, sentences, entities, all_mentions):
# print(template.sem_template)
num_start = len(entities)
template_id = template.iloc[0]['sent_id']
ments_in_temp = all_mentions[all_mentions.sent_id == template_id]
raw_sentence = sentences[sentences.sent_id == template_id]
# print(f'raw sent df size: {len(raw_sentence)}')
# print(template_id)
sent_begin = raw_sentence.iloc[0].begin
sent_end = raw_sentence.iloc[0].end
raw_text = raw_sentence.iloc[0].text
replacements = []
# rows_to_drop = []
# print('Mention types in template')
# print(ments_in_temp.mention_type.unique())
# print('types in entities')
# print(entities.mention_type.unique())
for i, row in ments_in_temp.iterrows():
ents_subset = entities[entities.mention_type == row.mention_type]
if len(ents_subset) == 0:
print('Empty list of doc entities')
print(entities.mention_type)
print(row.mention_type)
break
rand_ent = ents_subset.sample(n=1)
entities = entities[entities['id'] != rand_ent.iloc[0]['id']]
# rows_to_drop.append(rand_ent.iloc[0].name)
ent_cui = rand_ent.iloc[0].cui
# print(ent_cui)
span_text = get_text_for_mention(ent_cui, all_mentions)
replacements.append({
'text' : span_text,
'begin' : row.begin - sent_begin,
'end' : row.end - sent_begin,
})
new_sentence = ''
for i, r in enumerate(replacements):
if i == 0:
new_sentence += raw_text[0 : r['begin'] ]
else:
new_sentence += raw_text[replacements[i-1]['end'] : r['begin']]
new_sentence += r['text']
    # append any text remaining after the last replacement
    if len(replacements) > 0:
new_sentence += raw_text[replacements[-1]['end'] : ]
# clean up
num_end = len(entities)
# print(f"Dropped {num_start - num_end} rows")
return new_sentence, entities
# Find all the text associated with the cui of the mention in the template
# choose a text span based on frequency
def get_text_for_mention(cui, mentions):
txt_counts = mentions[mentions.cui == cui].groupby('text').size().reset_index(name='cnt')
return txt_counts.sample(n=1, weights=txt_counts.cnt).iloc[0].text
###Output
_____no_output_____
###Markdown
*** Write a full note ***
###Code
# Select document to write note for
# doc = notes.sample(n=1)
# doc_id = doc['ROW_ID'].iloc[0]
doc_id = 374185
# Get all the entities in the chosen document
ents_in_doc = mentions[mentions['doc_id'] == doc_id]
new_doc_sentences = []
sent_pos = 0
while len(ents_in_doc) > 0:
# print(f"Sentence position: {sent_pos}")
# print(f"Length of remaining entities: {len(ents_in_doc)}")
# Get list of possible mentions based on CUIs found in the document
mentions_pool = mentions[(mentions.cui.isin(ents_in_doc.cui.unique()))
& (mentions.mention_type.isin(ents_in_doc.mention_type.unique()))]
# Get template pool based on mentions pool
# TODO: Need to only choose templates where all the mentions are in `ents_in_doc`
template_candidates = templates[templates.sent_id.isin(mentions_pool.sent_id)]
# ts = len(template_candidates.sent_id.unique())
# ms = len(mentions_pool.sent_id.unique())
# print(ts, ms)
def all_ents_present(row, doc_ents, ments_pool):
# Get mentions in this template
all_temp_ments = ments_pool[ments_pool['sent_id'] == row['sent_id']]
available_mentions = all_temp_ments[all_temp_ments['mention_type'].isin(doc_ents['mention_type'])]
return (len(available_mentions) > 0)
mask = template_candidates.apply(all_ents_present,
args=(ents_in_doc, mentions_pool),
axis=1)
template_candidates = template_candidates[mask]
# print(f'num templates: {len(template_candidates)}')
#If there are no more possible templates then break
if len(template_candidates) == 0:
break
# Get candidate clusters based on template pool
# Remove the cluster labels that aren't present in template bank
candidate_cluster_labels = template_candidates.cluster.sort_values().unique()
candidate_clusters = cluster_label_by_sentence_pos.iloc[candidate_cluster_labels]
# print(f"Num clusters: {len(candidate_clusters)}")
# Select cluster based on frequency at sentence position
selected_cluster = None
try:
selected_cluster = candidate_clusters.sample(
n=1,
weights=candidate_clusters.loc[:,sent_pos]
).iloc[0].name
except:
# It's possible the clusters we chose don't appear at that position
# so we can choose randomly
# print('choosing random cluster')
selected_cluster = candidate_clusters.sample(n=1).iloc[0].name
# print('selected cluster:')
# print(selected_cluster)
cluster_templates = template_candidates[template_candidates.cluster == selected_cluster]
# Choose template from cluster at random
template = cluster_templates.sample(n=1)
template_id = template.iloc[0]['sent_id']
# Get mentions in the template
ments_in_temp = mentions[mentions.sent_id == template_id]
# Write the sentence and update entities found in the document !!!
t, ents_in_doc = template_filler(template, sentences, ents_in_doc, mentions_pool)
new_doc_sentences.append(t)
sent_pos += 1
'\n'.join(new_doc_sentences)
notes[notes.ROW_ID == 374185].iloc[0].TEXT
###Output
_____no_output_____
###Markdown
Write until all mentions have been used
###Code
mentions.groupby('doc_id').size().reset_index(name='cnt').sort_values('cnt').head(10)
mentions[mentions.doc_id == 476781]
###Output
_____no_output_____ |
5. Transfer Learning - Kaggle/protein-advanced-transfer-learning.ipynb | ###Markdown
Human Protein Multi Label Image Classification - Transfer Learning & RegularizationHow a CNN learns ([source](https://developer.nvidia.com/discover/convolutional-neural-network)):Layer visualization ([source](https://medium.com/analytics-vidhya/deep-learning-visualization-and-interpretation-of-neural-networks-2f3f82f501c5)):Transfer learning ([source](https://mc.ai/transfer-learning-with-deep-learning-machine-learning-techniques/)):This is a starter notebook for the competition [Zero to GANs - Human Protein Classification](https://www.kaggle.com/c/jovian-pytorch-z2g). It incorporates transfer learning, and other techniques from https://jovian.ml/aakashns/05b-cifar10-resnet
###Code
import os
import torch
import pandas as pd
import numpy as np
from torch.utils.data import Dataset, random_split, DataLoader
from PIL import Image
import torchvision.models as models
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
import torchvision.transforms as T
from sklearn.metrics import f1_score
import torch.nn.functional as F
import torch.nn as nn
from torchvision.utils import make_grid
%matplotlib inline
###Output
_____no_output_____
###Markdown
Preparing the Data
###Code
DATA_DIR = '../input/jovian-pytorch-z2g/Human protein atlas'
TRAIN_DIR = DATA_DIR + '/train'
TEST_DIR = DATA_DIR + '/test'
TRAIN_CSV = DATA_DIR + '/train.csv'
TEST_CSV = '../input/jovian-pytorch-z2g/submission.csv'
data_df = pd.read_csv(TRAIN_CSV)
data_df.head()
labels = {
0: 'Mitochondria',
1: 'Nuclear bodies',
2: 'Nucleoli',
3: 'Golgi apparatus',
4: 'Nucleoplasm',
5: 'Nucleoli fibrillar center',
6: 'Cytosol',
7: 'Plasma membrane',
8: 'Centrosome',
9: 'Nuclear speckles'
}
def encode_label(label):
target = torch.zeros(10)
for l in str(label).split(' '):
target[int(l)] = 1.
return target
def decode_target(target, text_labels=False, threshold=0.5):
result = []
for i, x in enumerate(target):
if (x >= threshold):
if text_labels:
result.append(labels[i] + "(" + str(i) + ")")
else:
result.append(str(i))
return ' '.join(result)
class HumanProteinDataset(Dataset):
def __init__(self, df, root_dir, transform=None):
self.df = df
self.transform = transform
self.root_dir = root_dir
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
row = self.df.loc[idx]
img_id, img_label = row['Image'], row['Label']
img_fname = self.root_dir + "/" + str(img_id) + ".png"
img = Image.open(img_fname)
if self.transform:
img = self.transform(img)
return img, encode_label(img_label)
###Output
_____no_output_____
###Markdown
Data augmentations
###Code
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
train_tfms = T.Compose([
T.RandomCrop(512, padding=8, padding_mode='reflect'),
T.RandomResizedCrop(256, scale=(0.5,0.9), ratio=(1, 1)),
T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),
T.RandomHorizontalFlip(),
T.RandomRotation(10),
T.ToTensor(),
T.Normalize(*imagenet_stats,inplace=True),
T.RandomErasing(inplace=True)
])
valid_tfms = T.Compose([
T.Resize(256),
T.ToTensor(),
T.Normalize(*imagenet_stats)
])
np.random.seed(42)
msk = np.random.rand(len(data_df)) < 0.9
train_df = data_df[msk].reset_index()
val_df = data_df[~msk].reset_index()
train_ds = HumanProteinDataset(train_df, TRAIN_DIR, transform=train_tfms)
val_ds = HumanProteinDataset(val_df, TRAIN_DIR, transform=valid_tfms)
len(train_ds), len(val_ds)
def show_sample(img, target, invert=True):
if invert:
plt.imshow(1 - img.permute((1, 2, 0)))
else:
plt.imshow(img.permute(1, 2, 0))
print('Labels:', decode_target(target, text_labels=True))
show_sample(*train_ds[1541])
###Output
Labels: Cytosol(6) Plasma membrane(7)
###Markdown
DataLoaders
###Code
batch_size = 64
train_dl = DataLoader(train_ds, batch_size, shuffle=True,
num_workers=3, pin_memory=True)
val_dl = DataLoader(val_ds, batch_size*2,
num_workers=2, pin_memory=True)
def show_batch(dl, invert=True):
for images, labels in dl:
fig, ax = plt.subplots(figsize=(16, 8))
ax.set_xticks([]); ax.set_yticks([])
data = 1-images if invert else images
ax.imshow(make_grid(data, nrow=16).permute(1, 2, 0))
break
show_batch(train_dl, invert=True)
###Output
_____no_output_____
###Markdown
Model - Transfer Learning
###Code
def F_score(output, label, threshold=0.5, beta=1):
prob = output > threshold
label = label > threshold
TP = (prob & label).sum(1).float()
TN = ((~prob) & (~label)).sum(1).float()
FP = (prob & (~label)).sum(1).float()
FN = ((~prob) & label).sum(1).float()
precision = torch.mean(TP / (TP + FP + 1e-12))
recall = torch.mean(TP / (TP + FN + 1e-12))
F2 = (1 + beta**2) * precision * recall / (beta**2 * precision + recall + 1e-12)
return F2.mean(0)
class MultilabelImageClassificationBase(nn.Module):
def training_step(self, batch):
images, targets = batch
out = self(images)
loss = F.binary_cross_entropy(out, targets)
return loss
def validation_step(self, batch):
images, targets = batch
out = self(images) # Generate predictions
loss = F.binary_cross_entropy(out, targets) # Calculate loss
score = F_score(out, targets)
return {'val_loss': loss.detach(), 'val_score': score.detach() }
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
batch_scores = [x['val_score'] for x in outputs]
epoch_score = torch.stack(batch_scores).mean() # Combine accuracies
return {'val_loss': epoch_loss.item(), 'val_score': epoch_score.item()}
def epoch_end(self, epoch, result):
print("Epoch [{}], last_lr: {:.4f}, train_loss: {:.4f}, val_loss: {:.4f}, val_score: {:.4f}".format(
epoch, result['lrs'][-1], result['train_loss'], result['val_loss'], result['val_score']))
###Output
_____no_output_____
###Markdown
[Learn about ResNets.](https://towardsdatascience.com/an-overview-of-resnet-and-its-variants-5281e2f56035)Check out torchvision models: https://pytorch.org/docs/stable/torchvision/models.html
###Code
resnet34 = models.resnet34()
resnet34
class ProteinResnet(MultilabelImageClassificationBase):
def __init__(self):
super().__init__()
# Use a pretrained model
self.network = models.resnet34(pretrained=True)
# Replace last layer
num_ftrs = self.network.fc.in_features
self.network.fc = nn.Linear(num_ftrs, 10)
def forward(self, xb):
return torch.sigmoid(self.network(xb))
def freeze(self):
# To freeze the residual layers
for param in self.network.parameters():
            param.requires_grad = False
for param in self.network.fc.parameters():
            param.requires_grad = True
def unfreeze(self):
# Unfreeze all layers
for param in self.network.parameters():
            param.requires_grad = True
def get_default_device():
"""Pick GPU if available, else CPU"""
if torch.cuda.is_available():
return torch.device('cuda')
else:
return torch.device('cpu')
def to_device(data, device):
"""Move tensor(s) to chosen device"""
if isinstance(data, (list,tuple)):
return [to_device(x, device) for x in data]
return data.to(device, non_blocking=True)
class DeviceDataLoader():
"""Wrap a dataloader to move data to a device"""
def __init__(self, dl, device):
self.dl = dl
self.device = device
def __iter__(self):
"""Yield a batch of data after moving it to device"""
for b in self.dl:
yield to_device(b, self.device)
def __len__(self):
"""Number of batches"""
return len(self.dl)
device = get_default_device()
device
train_dl = DeviceDataLoader(train_dl, device)
val_dl = DeviceDataLoader(val_dl, device)
###Output
_____no_output_____
###Markdown
Training
###Code
@torch.no_grad()
def evaluate(model, val_loader):
model.eval()
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def get_lr(optimizer):
for param_group in optimizer.param_groups:
return param_group['lr']
def fit_one_cycle(epochs, max_lr, model, train_loader, val_loader,
weight_decay=0, grad_clip=None, opt_func=torch.optim.SGD):
torch.cuda.empty_cache()
history = []
    # Set up custom optimizer with weight decay
optimizer = opt_func(model.parameters(), max_lr, weight_decay=weight_decay)
# Set up one-cycle learning rate scheduler
sched = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, epochs=epochs,
steps_per_epoch=len(train_loader))
for epoch in range(epochs):
# Training Phase
model.train()
train_losses = []
lrs = []
for batch in tqdm(train_loader):
loss = model.training_step(batch)
train_losses.append(loss)
loss.backward()
# Gradient clipping
if grad_clip:
nn.utils.clip_grad_value_(model.parameters(), grad_clip)
optimizer.step()
optimizer.zero_grad()
# Record & update learning rate
lrs.append(get_lr(optimizer))
sched.step()
# Validation phase
result = evaluate(model, val_loader)
result['train_loss'] = torch.stack(train_losses).mean().item()
result['lrs'] = lrs
model.epoch_end(epoch, result)
history.append(result)
return history
model = to_device(ProteinResnet(), device)
history = [evaluate(model, val_dl)]
history
###Output
_____no_output_____
###Markdown
First, freeze the ResNet layers and train some epochs. This only trains the final layer to start classifying the images.
###Code
model.freeze()
epochs = 17
max_lr = 0.01
grad_clip = 0.1
weight_decay = 1e-4
opt_func = torch.optim.Adam
%%time
history += fit_one_cycle(epochs, max_lr, model, train_dl, val_dl,
grad_clip=grad_clip,
weight_decay=weight_decay,
opt_func=opt_func)
###Output
_____no_output_____
###Markdown
Now, unfreeze and train some more.
###Code
model.unfreeze()
%%time
history += fit_one_cycle(epochs, 0.001, model, train_dl, val_dl,
grad_clip=grad_clip,
weight_decay=weight_decay,
opt_func=opt_func)
train_time='22:00'
def plot_scores(history):
scores = [x['val_score'] for x in history]
plt.plot(scores, '-x')
plt.xlabel('epoch')
plt.ylabel('score')
plt.title('F1 score vs. No. of epochs');
plot_scores(history)
def plot_losses(history):
train_losses = [x.get('train_loss') for x in history]
val_losses = [x['val_loss'] for x in history]
plt.plot(train_losses, '-bx')
plt.plot(val_losses, '-rx')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['Training', 'Validation'])
plt.title('Loss vs. No. of epochs');
plot_losses(history)
def plot_lrs(history):
lrs = np.concatenate([x.get('lrs', []) for x in history])
plt.plot(lrs)
plt.xlabel('Batch no.')
plt.ylabel('Learning rate')
plt.title('Learning Rate vs. Batch no.');
plot_lrs(history)
###Output
_____no_output_____
###Markdown
Making predictions and submission
###Code
def predict_single(image):
xb = image.unsqueeze(0)
xb = to_device(xb, device)
preds = model(xb)
prediction = preds[0]
print("Prediction: ", prediction)
show_sample(image, prediction)
test_df = pd.read_csv(TEST_CSV)
test_dataset = HumanProteinDataset(test_df, TEST_DIR, transform=valid_tfms)
img, target = test_dataset[0]
img.shape
predict_single(test_dataset[100][0])
predict_single(test_dataset[74][0])
test_dl = DeviceDataLoader(DataLoader(test_dataset, batch_size, num_workers=3, pin_memory=True), device)
@torch.no_grad()
def predict_dl(dl, model):
torch.cuda.empty_cache()
batch_probs = []
for xb, _ in tqdm(dl):
probs = model(xb)
batch_probs.append(probs.cpu().detach())
batch_probs = torch.cat(batch_probs)
return [decode_target(x) for x in batch_probs]
test_preds = predict_dl(test_dl, model)
submission_df = pd.read_csv(TEST_CSV)
submission_df.Label = test_preds
submission_df.sample(20)
sub_fname = 'submission.csv'
submission_df.to_csv(sub_fname, index=False)
###Output
_____no_output_____
###Markdown
Save and Commit
###Code
weights_fname = 'protein-resnet.pth'
torch.save(model.state_dict(), weights_fname)
!pip install jovian --upgrade --quiet
import jovian
jovian.reset()
jovian.log_hyperparams(arch='resnet34',
epochs=2*epochs,
lr=max_lr,
scheduler='one-cycle',
weight_decay=weight_decay,
grad_clip=grad_clip,
opt=opt_func.__name__)
jovian.log_metrics(val_loss=history[-1]['val_loss'],
val_score=history[-1]['val_score'],
train_loss=history[-1]['train_loss'],
time=train_time)
project_name='protein-advanced'
jovian.commit(project=project_name, environment=None, outputs=[weights_fname])
###Output
_____no_output_____ |
notebooks/binom_dist_plot.ipynb | ###Markdown
Plots the pmfs of binomial distributions with varying probability of success parameter
###Code
import os
try:
import jax
except:
%pip install jax jaxlib
import jax
import jax.numpy as jnp
from jax.scipy.stats import nbinom
try:
import matplotlib.pyplot as plt
except:
%pip install matplotlib
import matplotlib.pyplot as plt
try:
import seaborn as sns
except:
%pip install seaborn
import seaborn as sns
try:
from scipy.stats import binom
except:
%pip install scipy
from scipy.stats import binom
dev_mode = "DEV_MODE" in os.environ
if dev_mode:
import sys
sys.path.append("scripts")
import pyprobml_utils as pml
from latexify import latexify
latexify(width_scale_factor=2, fig_height=1.5)
N = 10
thetas = [0.25, 0.5, 0.75, 0.9]
x = jnp.arange(0, N + 1)
def make_graph(data):
plt.figure()
x = data["x"]
n = data["n"]
theta = data["theta"]
probs = binom.pmf(x, n, theta)
title = r"$\theta=" + str(theta) + "$"
plt.bar(x, probs, align="center")
plt.xlim([min(x) - 0.5, max(x) + 0.5])
plt.ylim([0, 0.4])
plt.xticks(x)
plt.xlabel("$x$")
plt.ylabel("$p(x)$")
plt.title(title)
sns.despine()
if dev_mode:
pml.savefig("binomDistTheta" + str(int(theta * 100)) + "_latexified.pdf")
for theta in thetas:
data = {"x": x, "n": N, "theta": theta}
make_graph(data)
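# Illustrative cross-check (an addition, not part of the original plotting code): for a
# single point, the closed-form pmf P(X=k) = C(n, k) * theta^k * (1 - theta)^(n - k)
# should agree with scipy's binom.pmf used above. Assumes Python >= 3.8 for math.comb.
from math import comb
n_check, k_check, theta_check = 10, 3, 0.5
manual_pmf = comb(n_check, k_check) * theta_check**k_check * (1 - theta_check) ** (n_check - k_check)
assert abs(manual_pmf - float(binom.pmf(k_check, n_check, theta_check))) < 1e-9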
###Output
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
###Markdown
DemoYou can see different examples of binomial distributions by changing the theta in the following demo.
###Code
from ipywidgets import interact
@interact(theta=(0.1, 0.9))
def generate_random(theta):
n = 10
data = {"x": jnp.arange(0, n + 1), "n": n, "theta": theta}
make_graph(data)
###Output
_____no_output_____
###Markdown
Plots the pmfs of binomial distributions with varying probability of success parameter
###Code
import os
try:
import jax
except:
%pip install jax jaxlib
import jax
import jax.numpy as jnp
from jax.scipy.stats import nbinom
try:
import matplotlib.pyplot as plt
except:
%pip install matplotlib
import matplotlib.pyplot as plt
try:
import seaborn as sns
except:
%pip install seaborn
import seaborn as sns
try:
from scipy.stats import binom
except:
%pip install scipy
from scipy.stats import binom
dev_mode = "DEV_MODE" in os.environ
if dev_mode:
import sys
sys.path.append("scripts")
from plot_utils import latexify, savefig
latexify(width_scale_factor=2, fig_height=1.5)
N = 10
thetas = [0.25, 0.5, 0.75, 0.9]
x = jnp.arange(0, N + 1)
def make_graph(data):
plt.figure()
x = data["x"]
n = data["n"]
theta = data["theta"]
probs = binom.pmf(x, n, theta)
title = r"$\theta=" + str(theta) + "$"
plt.bar(x, probs, align="center")
plt.xlim([min(x) - 0.5, max(x) + 0.5])
plt.ylim([0, 0.4])
plt.xticks(x)
plt.xlabel("$x$")
plt.ylabel("$p(x)$")
plt.title(title)
sns.despine()
if dev_mode:
savefig("binomDistTheta" + str(int(theta * 100)) + "_latexified.pdf")
for theta in thetas:
data = {"x": x, "n": N, "theta": theta}
make_graph(data)
###Output
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
###Markdown
DemoYou can see different examples of binomial distributions by changing the theta in the following demo.
###Code
from ipywidgets import interact
@interact(theta=(0.1, 0.9))
def generate_random(theta):
n = 10
data = {"x": jnp.arange(0, n + 1), "n": n, "theta": theta}
make_graph(data)
###Output
_____no_output_____ |
week_2/word-vector-representation/Operations_on_word_vectors_v2a.ipynb | ###Markdown
Operations on word vectorsWelcome to your first assignment of this week! Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings. **After this assignment you will be able to:**- Load pre-trained word vectors, and measure similarity using cosine similarity- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______. - Modify word embeddings to reduce their gender bias Updates If you were working on the notebook before this update...* The current notebook is version "2a".* You can find your original work saved in the notebook with the previous version name ("v2") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* cosine_similarity * Additional hints.* complete_analogy * Replaces the list of input words with a set, and sets it outside the for loop (to follow best practices in coding).* Spelling, grammar and wording corrections. Let's get started! Run the following cell to load the packages you will need.
###Code
import numpy as np
from w2v_utils import *
###Output
Using TensorFlow backend.
###Markdown
Load the word vectors* For this assignment, we will use 50-dimensional GloVe vectors to represent words. * Run the following cell to load the `word_to_vec_map`.
###Code
words, word_to_vec_map = read_glove_vecs('../../readonly/glove.6B.50d.txt')
###Output
_____no_output_____
###Markdown
You've loaded:- `words`: set of words in the vocabulary.- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation. Embedding vectors versus one-hot vectors* Recall from the lesson videos that one-hot vectors do not do a good job of capturing the level of similarity between words (every one-hot vector has the same Euclidean distance from any other one-hot vector).* Embedding vectors such as GloVe vectors provide much more useful information about the meaning of individual words. * Lets now see how you can use GloVe vectors to measure the similarity between two words. 1 - Cosine similarityTo measure the similarity between two words, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows: $$\text{CosineSimilarity(u, v)} = \frac {u \cdot v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$* $u \cdot v$ is the dot product (or inner product) of two vectors* $||u||_2$ is the norm (or length) of the vector $u$* $\theta$ is the angle between $u$ and $v$. * The cosine similarity depends on the angle between $u$ and $v$. * If $u$ and $v$ are very similar, their cosine similarity will be close to 1. * If they are dissimilar, the cosine similarity will take a smaller value. **Figure 1**: The cosine of the angle between two vectors is a measure their similarity**Exercise**: Implement the function `cosine_similarity()` to evaluate the similarity between word vectors.**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$ Additional Hints* You may find `np.dot`, `np.sum`, or `np.sqrt` useful depending upon the implementation that you choose.
###Code
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
"""
Cosine similarity reflects the degree of similarity between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
distance = 0.0
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = u.dot(v)
# Compute the L2 norm of u (≈1 line)
norm_u = np.linalg.norm(u)
# Compute the L2 norm of v (≈1 line)
norm_v = np.linalg.norm(v)
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = dot / (norm_u * norm_v)
### END CODE HERE ###
return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
###Output
cosine_similarity(father, mother) = 0.890903844289
cosine_similarity(ball, crocodile) = 0.274392462614
cosine_similarity(france - paris, rome - italy) = -0.675147930817
###Markdown
**Expected Output**: **cosine_similarity(father, mother)** = 0.890903844289 **cosine_similarity(ball, crocodile)** = 0.274392462614 **cosine_similarity(france - paris, rome - italy)** = -0.675147930817 Try different words!* After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! * Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave. 2 - Word analogy task* In the word analogy task, we complete the sentence: "*a* is to *b* as *c* is to **____**". * An example is: '*man* is to *woman* as *king* is to *queen*' . * We are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$* We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity. **Exercise**: Complete the code below to be able to perform word analogies!
###Code
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
"""
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
"""
# convert words to lowercase
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings e_a, e_b and e_c (≈1-3 lines)
e_a, e_b, e_c = word_to_vec_map[word_a], word_to_vec_map[word_b], word_to_vec_map[word_c]
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# to avoid best_word being one of the input words, skip the input words
# place the input words in a set for faster searching than a list
# We will re-use this set of input words inside the for-loop
input_words_set = {word_a, word_b, word_c}
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, skip the input words
if w in input_words_set:
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = cosine_similarity(e_b - e_a, word_to_vec_map[w] - e_c)
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if cosine_sim > max_cosine_sim:
max_cosine_sim = cosine_sim
best_word = w
### END CODE HERE ###
return best_word
###Output
_____no_output_____
###Markdown
Run the cell below to test your code; this may take 1-2 minutes.
###Code
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
###Output
italy -> italian :: spain -> spanish
india -> delhi :: japan -> tokyo
man -> woman :: boy -> girl
small -> smaller :: large -> larger
###Markdown
**Expected Output**: **italy -> italian** :: spain -> spanish **india -> delhi** :: japan -> tokyo **man -> woman ** :: boy -> girl **small -> smaller ** :: large -> larger * Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. * Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: * For example, you can try small->smaller as big->?. Congratulations!You've come to the end of the graded portion of the assignment. Here are the main points you should remember:- Cosine similarity is a good way to compare the similarity between pairs of word vectors. - Note that L2 (Euclidean) distance also works.- For NLP applications, using a pre-trained set of word vectors is often a good way to get started.- Even though you have finished the graded portions, we recommend you take a look at the rest of this notebook to learn about debiasing word vectors.Congratulations on finishing the graded portions of this notebook! 3 - Debiasing word vectors (OPTIONAL/UNGRADED) In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being an expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded. Lets first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word *woman*, and $e_{man}$ corresponds to the word vector corresponding to the word *man*. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
###Code
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
###Output
[-0.087144 0.2182 -0.40986 -0.03922 -0.1032 0.94165
-0.06042 0.32988 0.46144 -0.35962 0.31102 -0.86824
0.96006 0.01073 0.24337 0.08193 -1.02722 -0.21122
0.695044 -0.00222 0.29106 0.5053 -0.099454 0.40445
0.30181 0.1355 -0.0606 -0.07131 -0.19245 -0.06115
-0.3204 0.07165 -0.13337 -0.25068714 -0.14293 -0.224957
-0.149 0.048882 0.12191 -0.27362 -0.165476 -0.20426
0.54376 -0.271425 -0.10245 -0.32108 0.2516 -0.33455
-0.04371 0.01258 ]
###Markdown
Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
###Code
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
###Output
List of names and their similarities with constructed vector:
john -0.23163356146
marie 0.315597935396
sophie 0.318687898594
ronaldo -0.312447968503
priya 0.17632041839
rahul -0.169154710392
danielle 0.243932992163
reza -0.079304296722
katy 0.283106865957
yasmin 0.233138577679
###Markdown
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable. But let's try with some other words.
###Code
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
###Output
Other words and their similarities:
lipstick 0.276919162564
guns -0.18884855679
science -0.0608290654093
arts 0.00818931238588
literature 0.0647250443346
warrior -0.209201646411
doctor 0.118952894109
tree -0.0708939917548
receptionist 0.330779417506
technology -0.131937324476
fashion 0.0356389462577
teacher 0.179209234318
engineer -0.0803928049452
pilot 0.00107644989919
computer -0.103303588739
singer 0.185005181365
###Markdown
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch! We'll see below how to reduce the bias of these vectors, using an algorithm due to [Boliukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing. 3.1 - Neutralize bias for non-gender specific words The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49 dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$. Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a 2D screen, we illustrate it using a 1 dimensional axis below. **Figure 2**: The word vector for "receptionist" represented before and after applying the neutralize operation. **Exercise**: Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$: $$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$$$e^{debiased} = e - e^{bias\_component}\tag{3}$$If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.<!-- **Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:$$u = u_B + u_{\perp}$$where : $u_B = $ and $ u_{\perp} = u - u_B $!-->
###Code
def neutralize(word, g, word_to_vec_map):
"""
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
"""
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
    e = word_to_vec_map[word]
# Compute e_biascomponent using the formula given above. (≈ 1 line)
    e_biascomponent = (np.dot(e, g) / np.linalg.norm(g)**2) * g
# Neutralize e by subtracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
    e_debiased = e - e_biascomponent
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
###Output
_____no_output_____
###Markdown
**Expected Output**: The second result is essentially 0, up to numerical rounding (on the order of $10^{-17}$). **cosine similarity between receptionist and g, before neutralizing:** : 0.330779417506 **cosine similarity between receptionist and g, after neutralizing:** : -3.26732746085e-17 3.2 - Equalization algorithm for gender-specific wordsNext, lets see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralizing to "babysit" we can reduce the gender-stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this. The key idea behind equalization is to make sure that a particular pair of words are equi-distant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized steps are now the same distance from $e_{receptionist}^{debiased}$, or from any other work that has been neutralized. In pictures, this is how equalization works: The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are: $$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$ $$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}\tag{5}$$ $$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}\tag{7}$$ $$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}\tag{8}$$$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {||(e_{w1} - \mu_{\perp}) - \mu_B||} \tag{9}$$$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {||(e_{w2} - \mu_{\perp}) - \mu_B||} \tag{10}$$$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$**Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
###Code
def equalize(pair, bias_axis, word_to_vec_map):
"""
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
"""
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
    w1, w2 = pair
    e_w1, e_w2 = word_to_vec_map[w1], word_to_vec_map[w2]
    # Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
    mu = (e_w1 + e_w2) / 2
    # Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
    mu_B = (np.dot(mu, bias_axis) / np.linalg.norm(bias_axis)**2) * bias_axis
    mu_orth = mu - mu_B
    # Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
    e_w1B = (np.dot(e_w1, bias_axis) / np.linalg.norm(bias_axis)**2) * bias_axis
    e_w2B = (np.dot(e_w2, bias_axis) / np.linalg.norm(bias_axis)**2) * bias_axis
    # Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
    corrected_e_w1B = np.sqrt(np.abs(1 - np.linalg.norm(mu_orth)**2)) * (e_w1B - mu_B) / np.linalg.norm(e_w1 - mu_orth - mu_B)
    corrected_e_w2B = np.sqrt(np.abs(1 - np.linalg.norm(mu_orth)**2)) * (e_w2B - mu_B) / np.linalg.norm(e_w2 - mu_orth - mu_B)
    # Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
    e1 = corrected_e_w1B + mu_orth
    e2 = corrected_e_w2B + mu_orth
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
###Output
_____no_output_____ |
02_input_output.ipynb | ###Markdown
Input/Output Hello World! Input and Output are probably the most important part of Computer Programming. This is generally how anyone is able to interact with a computer and its software. Here we will focus on text-based input and output, but know that this is not limited to the keyboard. Input and Output also include the mouse, any type of camera and/or webcam, and so on. OutputTo denote this in pseudo-code, we will just write:``` output('Hello World')```Alternatively, you can write something like this to get your point across (this is actually what Python uses):``` print('Hello World')```
###Code
print('Hello World')
###Output
Hello World
###Markdown
InputTo denote this in pseudo-code, we will just write:```input()```This is actually what Python uses. We will likely want to keep the input somewhere, so we can assign the input to a variable like so:``` i = input()```Below is an example of taking user-input and displaying it with Output (the example shown is also how it is done in Python):
###Code
print('WHAT is your name?')
user_input = input()
print(f'Hello, {user_input}')
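# Illustrative extension (an assumption, not in the original notebook): input() always
# returns a string, so numeric input is usually converted explicitly, for example:
# age = int(input('How old are you? '))
# print(f'Next year you will be {age + 1}')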
###Output
WHAT is your name?
|
tutorials/astropy-coordinates/1-Coordinates-Intro.ipynb | ###Markdown
Astronomical Coordinates 1: Getting Started with astropy.coordinates AuthorsAdrian Price-Whelan Learning Goals* Create `astropy.coordinates.SkyCoord` objects using coordinate data and object names* Use SkyCoord objects to become familiar with object oriented programming (OOP)* Use a `SkyCoord` object to query the *Gaia* archive using `astroquery`* Output coordinate data in different string representations* Demonstrate working with 3D sky coordinates (including distance information for objects) Keywordscoordinates, OOP, astroquery, gaia SummaryAstronomers use a wide variety of coordinate systems and formats to represent sky coordinates of celestial objects. For example, you may have seen terms like "right ascension" and "declination" or "galactic latitude and longitude," and you may have seen angular coordinate components represented as "0h39m15.9s," "00:39:15.9," or 9.81625º. The subpackage `astropy.coordinates` provides tools for representing the coordinates of objects and transforming them between different systems. In this tutorial, we will explore how the `astropy.coordinates` package can be used to work with astronomical coordinates. You may find it helpful to keep [the Astropy documentation for the coordinates package](http://docs.astropy.org/en/stable/coordinates/index.html) open alongside this tutorial for reference or additional reading. In the text below, you may also see some links that look like ([docs](http://docs.astropy.org/en/stable/coordinates/index.html)). These links will take you to parts of the documentation that are directly relevant to the cells from which they link. *Note: This is the 1st tutorial in a series of tutorials about astropy.coordinates.*- [Next tutorial: Astronomical Coordinates 2: Transforming Coordinate Systems and Representations](2-Coordinates-Transforms) ImportsWe start by importing some packages that we will need below:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord, Distance
from astropy.io import fits
from astropy.table import QTable
from astropy.utils.data import download_file
from astroquery.gaia import Gaia
Gaia.ROW_LIMIT = 10000 # Set the row limit for returned data
###Output
_____no_output_____
###Markdown
Representing On-sky Positions with `astropy.coordinates`In Astropy, the most common way of representing and working with sky coordinates is to use the `SkyCoord` object ([docs](https://docs.astropy.org/en/stable/coordinates/skycoord.html)). A `SkyCoord` can be created directly from angles or arrays of angles with associated units, as demonstrated below. To get started, let's assume that we want to create a `SkyCoord` object for the center of the open star cluster NGC 188 so that later we can query and retrieve stars that might be members of the cluster. Let's also assume, for now, that we already know the sky coordinates of the cluster to be (12.11, 85.26) degrees in the ICRS coordinate frame. The ICRS — sometimes referred to as "equatorial" or "J2000" coordinates ([more info about the ICRS](https://arxiv.org/abs/astro-ph/0602086)) — is currently the most common astronomical coordinate frame for stellar or extragalactic astronomy, and is the default coordinate frame for `SkyCoord`. Since we already know the ICRS position of NGC 188 (see above), we can create a `SkyCoord` object for the cluster by passing the data in to the `SkyCoord` initializer:
###Code
ngc188_center = SkyCoord(12.11*u.deg, 85.26*u.deg)
ngc188_center
###Output
_____no_output_____
###Markdown
Even though the default frame is ICRS, it is generally recommended to explicitly specify the frame your coordinates are in. In this case, this would be an equivalent way of creating our `SkyCoord` object for NGC 188:
###Code
ngc188_center = SkyCoord(12.11*u.deg, 85.26*u.deg, frame='icrs')
ngc188_center
###Output
_____no_output_____
###Markdown
As we will see later on in this series, there are many other supported coordinate frames, so it helps to get into the habit of passing in the name of a coordinate frame.In the above initializations, we passed in `astropy.units.Quantity` objects with angular units to specify the angular components of our sky coordinates. The `SkyCoord` initializer will also accept string-formatted coordinates either as separate strings for Right Ascension (RA) and Declination (Dec) or a single string. For example, if we have sexagesimal sky coordinate data: In this case, the representation of the data includes specifications of the units (the "hms" for "hour minute second", and the "dms" for "degrees minute second"
###Code
SkyCoord('00h48m26.4s', '85d15m36s', frame='icrs')
###Output
_____no_output_____
###Markdown
Some string representations do not explicitly define units, so it is sometimes necessary to specify the units of the string coordinate data explicitly if they are not implicitly included:
###Code
SkyCoord('00:48:26.4 85:15:36', unit=(u.hour, u.deg),
frame='icrs')
###Output
_____no_output_____
###Markdown
For more information and examples on initializing `SkyCoord` objects, [see this documentation](http://docs.astropy.org/en/latest/coordinates/skycoord.html). For the `SkyCoord` initializations demonstrated above, we assumed that we already had the coordinate component values ready. If you do not know the coordinate values and the object you are interested in is in [SESAME](http://cdsweb.u-strasbg.fr/cgi-bin/Sesame), you can also automatically look up and load coordinate values from the name of the object using the `SkyCoord.from_name()` class method1 ([docs](http://docs.astropy.org/en/latest/coordinates/index.htmlconvenience-methods)). Note, however, that this requires an internet connection. It is safe to skip this cell if you are not connected to the internet because we already defined the object `ngc188_center` in the cells above. 1If you do not know what a class method is, think of it like an alternative constructor for a `SkyCoord` object — calling `SkyCoord.from_name()` with a name gives you a new `SkyCoord` object. For more detailed background on what class methods are and when they're useful, see [this page](https://julien.danjou.info/blog/2013/guide-python-static-class-abstract-methods).
###Code
ngc188_center = SkyCoord.from_name('NGC 188')
ngc188_center
###Output
_____no_output_____
###Markdown
The `SkyCoord` object that we defined now has various ways of accessing the coordinate information contained within it. All `SkyCoord` objects have attributes that allow you to retrieve the coordinate component data, but the component names will change depending on the coordinate frame of the `SkyCoord` you have. In our examples we have created a `SkyCoord` in the ICRS frame, so the component names are lower-case abbreviations of Right Ascension, `.ra`, and Declination, `.dec`:
###Code
ngc188_center.ra, ngc188_center.dec
###Output
_____no_output_____
###Markdown
The `SkyCoord` component attributes (here ``ra`` and ``dec``) return specialized `Quantity`-like objects that make working with angular data easier. While `Quantity` ([docs](http://docs.astropy.org/en/stable/units/index.html)) is a general class that represents numerical values and physical units of any kind, `astropy.coordinates` defines subclasses of `Quantity` that are specifically designed for working with angles, such as the `Angle` ([docs](http://docs.astropy.org/en/stable/api/astropy.coordinates.Angle.html)) class. The `Angle` class then has additional, more specialized subclasses `Latitude` ([docs](http://docs.astropy.org/en/stable/api/astropy.coordinates.Latitude.html)) and `Longitude` ([docs](http://docs.astropy.org/en/stable/api/astropy.coordinates.Longitude.html)). These objects store angles, provide useful attributes to quickly convert to common angular units, and enable formatting the numerical values in various formats. For example, in a Jupyter notebook, these objects know how to represent themselves using LaTeX:
###Code
ngc188_center.ra
ngc188_center.dec
type(ngc188_center.ra), type(ngc188_center.dec)
###Output
_____no_output_____
###Markdown
With these objects, we can retrieve the coordinate components in different units using the `Quantity.to()` method:
###Code
(ngc188_center.ra.to(u.hourangle),
ngc188_center.ra.to(u.radian),
ngc188_center.ra.to(u.degree))
###Output
_____no_output_____
###Markdown
Or using the shorthand attributes, which return only the component values:
###Code
(ngc188_center.ra.hour,
ngc188_center.ra.radian,
ngc188_center.ra.degree)
###Output
_____no_output_____
###Markdown
We can also format the values into strings with specified units ([docs](http://docs.astropy.org/en/latest/coordinates/formatting.html)), for example:
###Code
ngc188_center.ra.to_string(unit=u.hourangle, sep=':', pad=True)
###Output
_____no_output_____
###Markdown
Querying the *Gaia* Archive to Retrieve Coordinates of Stars in NGC 188 Now that we have a `SkyCoord` object for the center of NGC 188, we can use this object with the `astroquery` package to query many different astronomical databases (see a full list of [available services in the astroquery documentation](https://astroquery.readthedocs.io/en/latest/available-services)). Here, we will use the `SkyCoord` object `ngc188_center` to select sources from the *Gaia* Data Release 2 catalog around the position of the center of NGC 188 to look for stars that might be members of the star cluster. To do this, we will use the `astroquery.gaia` subpackage ([docs](https://astroquery.readthedocs.io/en/latest/gaia/gaia.html)).This requires an internet connection, but if it fails, the catalog file is included in the repository so you can load it locally (skip the next cell if you do not have an internet connection):
###Code
job = Gaia.cone_search_async(ngc188_center, radius=0.5*u.deg)
ngc188_table = job.get_results()
# only keep stars brighter than G=19 magnitude
ngc188_table = ngc188_table[ngc188_table['phot_g_mean_mag'] < 19*u.mag]
cols = [
'source_id',
'ra',
'dec',
'parallax',
'parallax_error',
'pmra',
'pmdec',
'radial_velocity',
'phot_g_mean_mag',
'phot_bp_mean_mag',
'phot_rp_mean_mag'
]
ngc188_table[cols].write('gaia_results.fits', overwrite=True)
###Output
_____no_output_____
###Markdown
The above cell may not work if you do not have an internet connection, so we have included the results table along with the notebook:
###Code
ngc188_table = QTable.read('gaia_results.fits')
len(ngc188_table)
###Output
_____no_output_____
###Markdown
The returned `astropy.table` `Table` object now contains about 5000 stars from *Gaia* DR2 around the coordinate position of the center of NGC 188. Let's now construct a `SkyCoord` object with the results table. In the *Gaia* data archive, the ICRS coordinates of a source are given as column names `"ra"` and `"dec"`:
###Code
ngc188_table['ra']
ngc188_table['dec']
###Output
_____no_output_____
###Markdown
Note that, because the *Gaia* archive provides data tables with associated units, and we read this table using the `QTable` object ([docs](http://docs.astropy.org/en/latest/table/mixin_columns.htmlquantity-and-qtable)), the above table columns are represented as `Quantity` objects with units of degrees. Note also that these columns contain many (>5000!) coordinate values. We can pass these directly in to `SkyCoord` to get a single `SkyCoord` object to represent all of these coordinates:
###Code
ngc188_gaia_coords = SkyCoord(ngc188_table['ra'], ngc188_table['dec'])
ngc188_gaia_coords
###Output
_____no_output_____
###Markdown
Exercises Create a `SkyCoord` for the center of the open cluster the Pleiades (either by looking up the coordinates and passing them in, or by using the convenience method we learned about above):
###Code
pleiades_center = SkyCoord.from_name('Pleiades')
###Output
_____no_output_____
###Markdown
Using only a single method/function call on the `SkyCoord` object representing the center of NGC 188, print a string with the RA/Dec in the form 'HH:MM:SS.S DD:MM:SS.S'. Check your answer against [SIMBAD](http://simbad.u-strasbg.fr/simbad/), which will show you sexagesimal coordinates for the object.(Hint: `SkyCoord.to_string()` might be useful)
###Code
ngc188_center.to_string(style="hmsdms", sep=":", precision=1)
###Output
_____no_output_____
###Markdown
Using a single method/function call on the `SkyCoord` object containing the results of our *Gaia* query, compute the angular separation between each resulting star and the coordinates of the cluster center for NGC 188.(Hint: `SkyCoord.separation()` might be useful)
###Code
ngc188_gaia_coords.separation(ngc188_center)
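# Illustrative follow-up (the name `sep2d` and the 0.2 degree threshold are assumptions,
# not part of the original exercise): the angular separations can be used directly as a
# boolean mask, e.g. to count sources within 0.2 degrees of the cluster center.
sep2d = ngc188_gaia_coords.separation(ngc188_center)
(sep2d < 0.2*u.deg).sum()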
###Output
_____no_output_____
###Markdown
More Than Just Sky Positions: Including Distance Information in `SkyCoord`So far, we have used `SkyCoord` to represent angular sky positions (i.e., `ra` and `dec` only). It is sometimes useful to include distance information with the sky coordinates of a source, thereby specifying the full 3D position of an object. To pass in distance information, `SkyCoord` accepts the keyword argument "`distance`". So, if we knew that the distance to NGC 188 is 1.96 kpc, we could also pass in a distance (as a `Quantity` object) using this argument:
###Code
ngc188_center_3d = SkyCoord(12.11*u.deg, 85.26*u.deg,
distance=1.96*u.kpc)
###Output
_____no_output_____
###Markdown
With the table of *Gaia* data we retrieved above for stars around NGC 188, `ngc188_table`, we also have parallax measurements for each star. For a precisely-measured parallax $\varpi$, the distance $d$ to a star can be obtained approximately as $d \approx 1/\varpi$. This only really works if the parallax error is small relative to the parallax ([see discussion in this paper](https://arxiv.org/abs/1507.02105)), so if we want to use these parallaxes to get distances we first have to filter out stars that have low signal-to-noise parallaxes:
###Code
parallax_snr = ngc188_table['parallax'] / ngc188_table['parallax_error']
ngc188_table_3d = ngc188_table[parallax_snr > 10]
len(ngc188_table_3d)
###Output
_____no_output_____
###Markdown
The above selection on `parallax_snr` keeps stars that have a ~10-sigma parallax measurement, but this is an arbitrary selection threshold that you may want to tune or remove in your own use cases. This selection removed over half of the stars in our original table, but for the remaining stars we can be confident that converting the parallax measurements to distances is mostly safe.The default way of passing in a distance to a `SkyCoord` object, as above, is to pass in a `Quantity` with a unit of length. However, `astropy.coordinates` also provides a specialized object, `Distance`, for handling common transformations of different distance representations ([docs](http://docs.astropy.org/en/latest/coordinates/index.htmldistance)). Among other things, this class supports passing in a parallax value:
###Code
Distance(parallax=1*u.mas)
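# Illustrative check (an addition to the original cell): a parallax of 1 mas corresponds
# to a distance of roughly 1 kpc, and halving the parallax doubles the distance.
Distance(parallax=1*u.mas).to(u.kpc), Distance(parallax=0.5*u.mas).to(u.kpc)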
###Output
_____no_output_____
###Markdown
The catalog of stars we queried from *Gaia* contains parallax information in milliarcsecond units, so we can create a `Distance` object directly from these values:
###Code
gaia_dist = Distance(parallax=ngc188_table_3d['parallax'].filled(np.nan))
###Output
_____no_output_____
###Markdown
We can then create a `SkyCoord` object to represent the 3D positions of all of the *Gaia* stars by passing in this distance object to the `SkyCoord` initializer:
###Code
ngc188_coords_3d = SkyCoord(ra=ngc188_table_3d['ra'],
dec=ngc188_table_3d['dec'],
distance=gaia_dist)
ngc188_coords_3d
###Output
_____no_output_____
###Markdown
Let's now use `matplotlib` to plot the sky positions of all of these sources, colored by distance to emphasize the cluster stars:
###Code
fig, ax = plt.subplots(figsize=(6.5, 5.2),
constrained_layout=True)
cs = ax.scatter(ngc188_coords_3d.ra.degree,
ngc188_coords_3d.dec.degree,
c=ngc188_coords_3d.distance.kpc,
s=5, vmin=1.5, vmax=2.5, cmap='twilight')
cb = fig.colorbar(cs)
cb.set_label(f'distance [{u.kpc:latex_inline}]')
ax.set_xlabel('RA [deg]')
ax.set_ylabel('Dec [deg]')
ax.set_title('Gaia DR2 sources near NGC 188', fontsize=18)
###Output
_____no_output_____
###Markdown
Now that we have 3D position information for both the cluster center, and for the stars we queried from *Gaia*, we can compute the 3D separation (distance) between all of the *Gaia* sources and the cluster center:
###Code
sep3d = ngc188_coords_3d.separation_3d(ngc188_center_3d)
sep3d
###Output
_____no_output_____
###Markdown
ExercisesUsing the 3D separation values, define a boolean mask to select candidate members of the cluster. Select all stars within 50 pc of the cluster center. How many candidate members of NGC 188 do we have, based on their 3D positions?
###Code
ngc188_3d_mask = sep3d < 50*u.pc
ngc188_3d_mask.sum()
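# Illustrative follow-up (the name `ngc188_candidate_coords` is an assumption, not part
# of the original answer): the boolean mask can be applied back to the SkyCoord object
# to keep only the candidate member coordinates.
ngc188_candidate_coords = ngc188_coords_3d[ngc188_3d_mask]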
###Output
_____no_output_____
###Markdown
Astronomical Coordinates 1: Getting Started with astropy.coordinates AuthorsAdrian Price-Whelan Learning Goals* Create `astropy.coordinates.SkyCoord` objects using coordinate data and object names* Use SkyCoord objects to become familiar with object oriented programming (OOP)* Use a `SkyCoord` object to query the *Gaia* archive using `astroquery`* Output coordinate data in different string representations* Demonstrate working with 3D sky coordinates (including distance information for objects) Keywordscoordinates, OOP, astroquery, gaia SummaryAstronomers use a wide variety of coordinate systems and formats to represent sky coordinates of celestial objects. For example, you may have seen terms like "right ascension" and "declination" or "galactic latitude and longitude," and you may have seen angular coordinate components represented as "0h39m15.9s," "00:39:15.9," or 9.81625º. The subpackage `astropy.coordinates` provides tools for representing the coordinates of objects and transforming them between different systems. In this tutorial, we will explore how the `astropy.coordinates` package can be used to work with astronomical coordinates. You may find it helpful to keep [the Astropy documentation for the coordinates package](http://docs.astropy.org/en/stable/coordinates/index.html) open alongside this tutorial for reference or additional reading. In the text below, you may also see some links that look like ([docs](http://docs.astropy.org/en/stable/coordinates/index.html)). These links will take you to parts of the documentation that are directly relevant to the cells from which they link. *Note: This is the 1st tutorial in a series of tutorials about astropy.coordinates.*- [Next tutorial: Astronomical Coordinates 2: Transforming Coordinate Systems and Representations](2-Coordinates-Transforms) ImportsWe start by importing some packages that we will need below:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord, Distance
from astropy.io import fits
from astropy.table import QTable
from astropy.utils.data import download_file
from astroquery.gaia import Gaia
Gaia.ROW_LIMIT = 10000 # Set the row limit for returned data
###Output
_____no_output_____
###Markdown
Representing On-sky Positions with `astropy.coordinates`In Astropy, the most common way of representing and working with sky coordinates is to use the `SkyCoord` object ([docs](https://docs.astropy.org/en/stable/coordinates/skycoord.html)). A `SkyCoord` can be created directly from angles or arrays of angles with associated units, as demonstrated below. To get started, let's assume that we want to create a `SkyCoord` object for the center of the open star cluster NGC 188 so that later we can query and retrieve stars that might be members of the cluster. Let's also assume, for now, that we already know the sky coordinates of the cluster to be (12.11, 85.26) degrees in the ICRS coordinate frame. The ICRS — sometimes referred to as "equatorial" or "J2000" coordinates ([more info about the ICRS](https://arxiv.org/abs/astro-ph/0602086)) — is currently the most common astronomical coordinate frame for stellar or extragalactic astronomy, and is the default coordinate frame for `SkyCoord`. Since we already know the ICRS position of NGC 188 (see above), we can create a `SkyCoord` object for the cluster by passing the data in to the `SkyCoord` initializer:
###Code
ngc188_center = SkyCoord(12.11*u.deg, 85.26*u.deg)
ngc188_center
###Output
_____no_output_____
###Markdown
Even though the default frame is ICRS, it is generally recommended to explicitly specify the frame your coordinates are in. In this case, this would be an equivalent way of creating our `SkyCoord` object for NGC 188:
###Code
ngc188_center = SkyCoord(12.11*u.deg, 85.26*u.deg, frame='icrs')
ngc188_center
###Output
_____no_output_____
###Markdown
As we will see later on in this series, there are many other supported coordinate frames, so it helps to get into the habit of passing in the name of a coordinate frame.In the above initializations, we passed in `astropy.units.Quantity` objects with angular units to specify the angular components of our sky coordinates. The `SkyCoord` initializer will also accept string-formatted coordinates either as separate strings for Right Ascension (RA) and Declination (Dec) or a single string. For example, if we have sexagesimal sky coordinate data: In this case, the representation of the data includes specifications of the units (the "hms" for "hour minute second", and the "dms" for "degrees minute second"
###Code
SkyCoord('00h48m26.4s', '85d15m36s', frame='icrs')
###Output
_____no_output_____
###Markdown
Some string representations do not explicitly define units, so it is sometimes necessary to specify the units of the string coordinate data explicitly if they are not implicitly included:
###Code
SkyCoord('00:48:26.4 85:15:36', unit=(u.hour, u.deg),
frame='icrs')
###Output
_____no_output_____
###Markdown
For more information and examples on initializing `SkyCoord` objects, [see this documentation](http://docs.astropy.org/en/latest/coordinates/skycoord.html). For the `SkyCoord` initializations demonstrated above, we assumed that we already had the coordinate component values ready. If you do not know the coordinate values and the object you are interested in is in [SESAME](http://cdsweb.u-strasbg.fr/cgi-bin/Sesame), you can also automatically look up and load coordinate values from the name of the object using the `SkyCoord.from_name()` class method1 ([docs](http://docs.astropy.org/en/latest/coordinates/index.htmlconvenience-methods)). Note, however, that this requires an internet connection. It is safe to skip this cell if you are not connected to the internet because we already defined the object `ngc188_center` in the cells above. 1If you do not know what a class method is, think of it like an alternative constructor for a `SkyCoord` object — calling `SkyCoord.from_name()` with a name gives you a new `SkyCoord` object. For more detailed background on what class methods are and when they're useful, see [this page](https://julien.danjou.info/blog/2013/guide-python-static-class-abstract-methods).
###Code
ngc188_center = SkyCoord.from_name('NGC 188')
ngc188_center
###Output
_____no_output_____
###Markdown
The `SkyCoord` object that we defined now has various ways of accessing the coordinate information contained within it. All `SkyCoord` objects have attributes that allow you to retrieve the coordinate component data, but the component names will change depending on the coordinate frame of the `SkyCoord` you have. In our examples we have created a `SkyCoord` in the ICRS frame, so the component names are lower-case abbreviations of Right Ascension, `.ra`, and Declination, `.dec`:
###Code
ngc188_center.ra, ngc188_center.dec
###Output
_____no_output_____
###Markdown
The `SkyCoord` component attributes (here ``ra`` and ``dec``) return specialized `Quantity`-like objects that make working with angular data easier. While `Quantity` ([docs](http://docs.astropy.org/en/stable/units/index.html)) is a general class that represents numerical values and physical units of any kind, `astropy.coordinates` defines subclasses of `Quantity` that are specifically designed for working with angles, such as the `Angle` ([docs](http://docs.astropy.org/en/stable/api/astropy.coordinates.Angle.html)) class. The `Angle` class then has additional, more specialized subclasses `Latitude` ([docs](http://docs.astropy.org/en/stable/api/astropy.coordinates.Latitude.html)) and `Longitude` ([docs](http://docs.astropy.org/en/stable/api/astropy.coordinates.Longitude.html)). These objects store angles, provide useful attributes to quickly convert to common angular units, and enable formatting the numerical values in various formats. For example, in a Jupyter notebook, these objects know how to represent themselves using LaTeX:
###Code
ngc188_center.ra
ngc188_center.dec
type(ngc188_center.ra), type(ngc188_center.dec)
###Output
_____no_output_____
###Markdown
With these objects, we can retrieve the coordinate components in different units using the `Quantity.to()` method:
###Code
(ngc188_center.ra.to(u.hourangle),
ngc188_center.ra.to(u.radian),
ngc188_center.ra.to(u.degree))
###Output
_____no_output_____
###Markdown
Or using the shorthand attributes, which return only the component values:
###Code
(ngc188_center.ra.hour,
ngc188_center.ra.radian,
ngc188_center.ra.degree)
###Output
_____no_output_____
###Markdown
We can also format the values into strings with specified units ([docs](http://docs.astropy.org/en/latest/coordinates/formatting.html)), for example:
###Code
ngc188_center.ra.to_string(unit=u.hourangle, sep=':', pad=True)
###Output
_____no_output_____
###Markdown
Querying the *Gaia* Archive to Retrieve Coordinates of Stars in NGC 188 Now that we have a `SkyCoord` object for the center of NGC 188, we can use this object with the `astroquery` package to query many different astronomical databases (see a full list of [available services in the astroquery documentation](https://astroquery.readthedocs.io/en/latest/available-services)). Here, we will use the `SkyCoord` object `ngc188_center` to select sources from the *Gaia* Data Release 2 catalog around the position of the center of NGC 188 to look for stars that might be members of the star cluster. To do this, we will use the `astroquery.gaia` subpackage ([docs](https://astroquery.readthedocs.io/en/latest/gaia/gaia.html)).This requires an internet connection, but if it fails, the catalog file is included in the repository so you can load it locally (skip the next cell if you do not have an internet connection):
###Code
job = Gaia.cone_search_async(ngc188_center, radius=0.5*u.deg)
ngc188_table = job.get_results()
# only keep stars brighter than G=19 magnitude
ngc188_table = ngc188_table[ngc188_table['phot_g_mean_mag'] < 19*u.mag]
cols = [
'source_id',
'ra',
'dec',
'parallax',
'parallax_error',
'pmra',
'pmdec',
'radial_velocity',
'phot_g_mean_mag',
'phot_bp_mean_mag',
'phot_rp_mean_mag'
]
ngc188_table[cols].write('gaia_results.fits', overwrite=True)
###Output
_____no_output_____
###Markdown
The above cell may not work if you do not have an internet connection, so we have included the results table along with the notebook:
###Code
ngc188_table = QTable.read('gaia_results.fits')
len(ngc188_table)
###Output
_____no_output_____
###Markdown
The returned `astropy.table` `Table` object now contains about 5000 stars from *Gaia* DR2 around the coordinate position of the center of NGC 188. Let's now construct a `SkyCoord` object with the results table. In the *Gaia* data archive, the ICRS coordinates of a source are given as column names `"ra"` and `"dec"`:
###Code
ngc188_table['ra']
ngc188_table['dec']
###Output
_____no_output_____
###Markdown
Note that, because the *Gaia* archive provides data tables with associated units, and we read this table using the `QTable` object ([docs](http://docs.astropy.org/en/latest/table/mixin_columns.html#quantity-and-qtable)), the above table columns are represented as `Quantity` objects with units of degrees. Note also that these columns contain many (>5000!) coordinate values. We can pass these directly in to `SkyCoord` to get a single `SkyCoord` object to represent all of these coordinates:
###Code
ngc188_gaia_coords = SkyCoord(ngc188_table['ra'], ngc188_table['dec'])
ngc188_gaia_coords
###Output
_____no_output_____
###Markdown
Exercises Create a `SkyCoord` for the center of the open cluster the Pleiades (either by looking up the coordinates and passing them in, or by using the convenience method we learned about above):
###Code
ngc188_center = SkyCoord.from_name('NGC 188')
###Output
_____no_output_____
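###Markdown
The cell above re-creates the NGC 188 center for reference. For the Pleiades asked about in the exercise, the same name-lookup convenience method can be used (a minimal illustration; the lookup requires an internet connection):
###Code
# Hypothetical solution to the exercise: resolve the Pleiades by name
pleiades_center = SkyCoord.from_name('Pleiades')
pleiades_center
###Output
_____no_output_____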
###Markdown
Using only a single method/function call on the `SkyCoord` object representing the center of NGC 188, print a string with the RA/Dec in the form 'HH:MM:SS.S DD:MM:SS.S'. Check your answer against [SIMBAD](http://simbad.u-strasbg.fr/simbad/), which will show you sexagesimal coordinates for the object.(Hint: `SkyCoord.to_string()` might be useful)
###Code
ngc188_center.to_string(style="hmsdms", sep=":", precision=1)
###Output
_____no_output_____
###Markdown
Using a single method/function call on the `SkyCoord` object containing the results of our *Gaia* query, compute the angular separation between each resulting star and the coordinates of the cluster center for NGC 188.(Hint: `SkyCoord.separation()` might be useful)
###Code
ngc188_gaia_coords.separation(ngc188_center)
###Output
_____no_output_____
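###Markdown
Because `separation()` returns an `Angle` array, it can be compared directly against a `Quantity`. As a quick illustration (the 0.25 degree radius here is an arbitrary choice, not part of the original analysis), we can count how many of the queried sources lie close to the cluster center:
###Code
# Illustrative only: number of sources within 0.25 deg of the cluster center
sep = ngc188_gaia_coords.separation(ngc188_center)
(sep < 0.25*u.deg).sum()
###Output
_____no_output_____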
###Markdown
More Than Just Sky Positions: Including Distance Information in `SkyCoord`So far, we have used `SkyCoord` to represent angular sky positions (i.e., `ra` and `dec` only). It is sometimes useful to include distance information with the sky coordinates of a source, thereby specifying the full 3D position of an object. To pass in distance information, `SkyCoord` accepts the keyword argument "`distance`". So, if we knew that the distance to NGC 188 is 1.96 kpc, we could also pass in a distance (as a `Quantity` object) using this argument:
###Code
ngc188_center_3d = SkyCoord(12.11*u.deg, 85.26*u.deg,
distance=1.96*u.kpc)
###Output
_____no_output_____
###Markdown
With the table of *Gaia* data we retrieved above for stars around NGC 188, `ngc188_table`, we also have parallax measurements for each star. For a precisely-measured parallax $\varpi$, the distance $d$ to a star can be obtained approximately as $d \approx 1/\varpi$. This only really works if the parallax error is small relative to the parallax ([see discussion in this paper](https://arxiv.org/abs/1507.02105)), so if we want to use these parallaxes to get distances we first have to filter out stars that have low signal-to-noise parallaxes:
###Code
parallax_snr = ngc188_table['parallax'] / ngc188_table['parallax_error']
ngc188_table_3d = ngc188_table[parallax_snr > 10]
len(ngc188_table_3d)
###Output
_____no_output_____
###Markdown
The above selection on `parallax_snr` keeps stars that have a ~10-sigma parallax measurement, but this is an arbitrary selection threshold that you may want to tune or remove in your own use cases. This selection removed over half of the stars in our original table, but for the remaining stars we can be confident that converting the parallax measurements to distances is mostly safe.The default way of passing in a distance to a `SkyCoord` object, as above, is to pass in a `Quantity` with a unit of length. However, `astropy.coordinates` also provides a specialized object, `Distance`, for handling common transformations of different distance representations ([docs](http://docs.astropy.org/en/latest/coordinates/index.html#distance)). Among other things, this class supports passing in a parallax value:
###Code
Distance(parallax=1*u.mas)
###Output
_____no_output_____
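###Markdown
To make the reciprocal relation $d \approx 1/\varpi$ concrete: halving the parallax doubles the inferred distance (illustrative value only):
###Code
# 0.5 mas of parallax corresponds to a distance of about 2 kpc
Distance(parallax=0.5*u.mas).to(u.kpc)
###Output
_____no_output_____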
###Markdown
The catalog of stars we queried from *Gaia* contains parallax information in milliarcsecond units, so we can create a `Distance` object directly from these values:
###Code
gaia_dist = Distance(parallax=ngc188_table_3d['parallax'].filled(np.nan))
###Output
_____no_output_____
###Markdown
We can then create a `SkyCoord` object to represent the 3D positions of all of the *Gaia* stars by passing in this distance object to the `SkyCoord` initializer:
###Code
ngc188_coords_3d = SkyCoord(ra=ngc188_table_3d['ra'],
dec=ngc188_table_3d['dec'],
distance=gaia_dist)
ngc188_coords_3d
###Output
_____no_output_____
###Markdown
Let's now use `matplotlib` to plot the sky positions of all of these sources, colored by distance to emphasize the cluster stars:
###Code
fig, ax = plt.subplots(figsize=(6.5, 5.2),
constrained_layout=True)
cs = ax.scatter(ngc188_coords_3d.ra.degree,
ngc188_coords_3d.dec.degree,
c=ngc188_coords_3d.distance.kpc,
s=5, vmin=1.5, vmax=2.5, cmap='twilight')
cb = fig.colorbar(cs)
cb.set_label(f'distance [{u.kpc:latex_inline}]')
ax.set_xlabel('RA [deg]')
ax.set_ylabel('Dec [deg]')
ax.set_title('Gaia DR2 sources near NGC 188', fontsize=18)
###Output
_____no_output_____
###Markdown
Now that we have 3D position information for both the cluster center, and for the stars we queried from *Gaia*, we can compute the 3D separation (distance) between all of the *Gaia* sources and the cluster center:
###Code
sep3d = ngc188_coords_3d.separation_3d(ngc188_center_3d)
sep3d
###Output
_____no_output_____
###Markdown
ExercisesUsing the 3D separation values, define a boolean mask to select candidate members of the cluster. Select all stars within 50 pc of the cluster center. How many candidate members of NGC 188 do we have, based on their 3D positions?
###Code
ngc188_3d_mask = sep3d < 50*u.pc
ngc188_3d_mask.sum()
###Output
_____no_output_____ |
ipynb/releases/ReleaseNotes_v16.10.ipynb | ###Markdown
Target Connectivity Board specific settings Boards specific settings can be collected into a JSON platform description file:
###Code
!ls -la $LISA_HOME/libs/utils/platforms/
!cat $LISA_HOME/libs/utils/platforms/hikey.json
###Output
{
// HiKey boards have two SMP clusters
// Even being an SMP platform, being a two cluster system
// we can still load the devlib's bl module to get access
// to all the per-cluster functions it exposes
"board" : {
"cores" : [
"A53_0", "A53_0", "A53_0", "A53_0",
"A53_1", "A53_1", "A53_1", "A53_1"
],
"big_core" : "A53_1",
"modules" : [ "bl", "cpufreq" ]
},
// Energy Model related functions requires cluster
// to be named "big.LITTLE". Thus, for the time being,
// let's use this naming also for HiKey. This is going
// to be updated once we introduce proper SMP and
// multi-cluster support.
"nrg_model": {
"little": {
"cluster": {
"nrg_max": 112
},
"cpu": {
"nrg_max": 670, "cap_max": 1024
}
},
"big": {
"cluster": {
"nrg_max": 112
},
"cpu": {
"nrg_max": 670,
"cap_max": 1024
}
}
}
}
// vim: set tabstop=4:
###Markdown
Single configuration dictionary
###Code
# Check which Android devices are available
!adb devices
ADB_DEVICE = '00b43d0b08a8a4b8'
# ADB_DEVICE = '607A87C400055E6E'
# Unified configuration dictionary
my_conf = {
# Target platform
"platform" : 'android',
# Location of external tools (adb, fastboot, systrace, etc)
# These properties can be used to override the environment variables:
# ANDROID_HOME and CATAPULT_HOME
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
# Boards specific settings can be collected into a JSON
# platform description file, to be placed under:
# LISA_HOME/libs/utils/platforms
"board" : 'hikey',
# If you have multiple Android device connected, here
# we can specify which one to target
"device" : ADB_DEVICE,
# Folder where all the results will be collected
"results_dir" : "ReleaseNotes_v16.09",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
target = te.target
###Output
2016-09-23 18:26:53,404 INFO : Target - Using base path: /home/derkling/Code/lisa
2016-09-23 18:26:53,404 INFO : Target - Loading custom (inline) target configuration
2016-09-23 18:26:53,404 INFO : Target - External tools using:
2016-09-23 18:26:53,405 INFO : Target - ANDROID_HOME: /opt/android-sdk-linux
2016-09-23 18:26:53,405 INFO : Target - CATAPULT_HOME: /home/derkling/Code/lisa/tools/catapult
2016-09-23 18:26:53,405 INFO : Platform - Loading board:
2016-09-23 18:26:53,405 INFO : Platform - /home/derkling/Code/lisa/libs/utils/platforms/hikey.json
2016-09-23 18:26:53,406 INFO : Target - Devlib modules to load: [u'bl', u'cpufreq']
2016-09-23 18:26:53,406 INFO : Target - Connecting Android target [607A87C400055E6E]
2016-09-23 18:26:53,406 INFO : Target - Connection settings:
2016-09-23 18:26:53,406 INFO : Target - {'device': '607A87C400055E6E'}
2016-09-23 18:26:53,951 INFO : Target - Initializing target workdir:
2016-09-23 18:26:53,952 INFO : Target - /data/local/tmp/devlib-target
2016-09-23 18:26:54,746 INFO : Target - Topology:
2016-09-23 18:26:54,747 INFO : Target - [[0, 1, 2, 3], [4, 5, 6, 7]]
2016-09-23 18:26:54,946 INFO : Platform - Loading default EM:
2016-09-23 18:26:54,946 INFO : Platform - /home/derkling/Code/lisa/libs/utils/platforms/hikey.json
2016-09-23 18:26:54,947 WARNING : TestEnv - Wipe previous contents of the results folder:
2016-09-23 18:26:54,947 WARNING : TestEnv - /home/derkling/Code/lisa/results/ReleaseNotes_v16.09
2016-09-23 18:26:54,960 INFO : AEP - AEP configuration
2016-09-23 18:26:54,960 INFO : AEP - {'instrument': 'aep', 'channel_map': {'LITTLE': 'LITTLE'}, 'conf': {'resistor_values': [0.033], 'device_entry': '/dev/ttyACM0'}}
2016-09-23 18:26:54,967 INFO : AEP - Channels selected for energy sampling:
[CHAN(LITTLE_current), CHAN(LITTLE_power), CHAN(LITTLE_voltage)]
2016-09-23 18:26:54,967 INFO : TestEnv - Set results folder to:
2016-09-23 18:26:54,968 INFO : TestEnv - /home/derkling/Code/lisa/results/ReleaseNotes_v16.09
2016-09-23 18:26:54,968 INFO : TestEnv - Experiment results available also in:
2016-09-23 18:26:54,968 INFO : TestEnv - /home/derkling/Code/lisa/results_latest
###Markdown
Energy Meters Support - Simple unified interface for multiple acquisition boards - exposes two simple methods: **reset()** and **report()** - reporting **energy** consumption - reports additional info supported by the specific probe, e.g. collected samples, stats on currents and voltages, etc.
###Code
from time import sleep
def sample_energy(energy_meter, time_s):
# Reset the configured energy counters
energy_meter.reset()
# Run the workload you want to measure
#
# In this simple example we just wait some time while the
# energy counters accumulate power samples
sleep(time_s)
# Read and report the measured energy (since last reset)
return energy_meter.report(te.res_dir)
###Output
_____no_output_____
###Markdown
- Channels mapping support - allows giving a custom name to each channel used ARM Energy Probe (AEP) Requirements: 1. the **caiman binary tool** must be available in PATH https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#arm-energy-probe-aep 2. the **ttyACMx device** created once you plug in the AEP device
###Code
!ls -la /dev/ttyACM*
ACM_DEVICE = '/dev/ttyACM1'
###Output
_____no_output_____
###Markdown
Direct usage
###Code
# Energy Meters Configuration for ARM Energy Probe
aep_conf = {
'conf' : {
# Value of the shunt resistor [Ohm] for each channel
'resistor_values' : [0.010],
# Device entry assigned to the probe on the host
'device_entry' : ACM_DEVICE,
},
'channel_map' : {
'BAT' : 'CH0'
}
}
from energy import AEP
ape_em = AEP(target, aep_conf, '/tmp')
nrg_report = sample_energy(ape_em, 2)
print nrg_report
!cat $nrg_report.report_file
###Output
{
"BAT": 0.00341733929419313
}
###Markdown
Usage via TestEnv
###Code
my_conf = {
# Configure the energy meter to use
"emeter" : {
# Require usage of an AEP meter
"instrument" : "aep",
# Configuration parameters require by the AEP device
"conf" : {
# Value of the shunt resistor in Ohm
'resistor_values' : [0.099],
# Device entry assigned to the probe on the host
'device_entry' : ACM_DEVICE,
},
# Map AEP's channels to logical names (used to generate reports)
'channel_map' : {
'BAT' : 'CH0'
}
},
# Other target configurations
"platform" : 'android',
"board" : 'hikey',
"device" : ADB_DEVICE,
"results_dir" : "ReleaseNotes_v16.09",
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
for i in xrange(1,11):
nrg_report = sample_energy(te.emeter, 1)
nrg_bat = float(nrg_report.channels['BAT'])
print "Sample {:2d}: {:.3f}".format(i, nrg_bat)
###Output
_____no_output_____
###Markdown
BayLibre's ACME board (ACME) Requirements: 1. the **iio-capture tool** must be available in PATH https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#iiocapture---baylibre-acme-cape 2. the ACME CAPE should be reachable by network
###Code
!ping -c1 baylibre-acme.local | grep '64 bytes'
###Output
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=1.60 ms
###Markdown
Direct usage
###Code
# Energy Meters Configuration for BayLibre's ACME
acme_conf = {
"conf" : {
#'iio-capture' : '/usr/bin/iio-capture',
#'ip_address' : 'baylibre-acme.local',
},
"channel_map" : {
"Device0" : 0,
"Device1" : 1,
},
}
from energy import ACME
acme_em = ACME(target, acme_conf, '/tmp')
nrg_report = sample_energy(acme_em, 2)
print nrg_report
!cat $nrg_report.report_file
###Output
{
"Device0": 0.0,
"Device1": 654.82
}
###Markdown
Usage via TestEnv
###Code
my_conf = {
# Configure the energy meter to use
"emeter" : {
# Require usage of an AEP meter
"instrument" : "acme",
"conf" : {
#'iio-capture' : '/usr/bin/iio-capture',
#'ip_address' : 'baylibre-acme.local',
},
'channel_map' : {
'Device0' : 0,
'Device1' : 1,
},
},
# Other target configurations
"platform" : 'android',
"board" : 'hikey',
"device" : ADB_DEVICE,
"results_dir" : "ReleaseNotes_v16.09",
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
target = te.target
for i in xrange(1,11):
nrg_report = sample_energy(te.emeter, 1)
nrg_bat = float(nrg_report.channels['Device1'])
print "Sample {:2d}: {:.3f}".format(i, nrg_bat)
###Output
Sample 1: 342.110
Sample 2: 325.660
Sample 3: 324.120
Sample 4: 369.300
Sample 5: 331.140
Sample 6: 315.130
Sample 7: 326.100
Sample 8: 345.180
Sample 9: 328.730
Sample 10: 328.510
###Markdown
Android Integration A new Android library has been added which provides APIs to: - simplify the interaction with a device - execute interesting workloads and benchmarks - make it easy to integrate new workloads and benchmarks Not intended to replace WA, but instead to provide a Python-based programming interface to **automate reproducible experiments** on an Android device. System control APIs
###Code
from android import System
print "Supported functions:"
for f in dir(System):
if "__" in f:
continue
print " ", f
###Output
Supported functions:
back
force_stop
gfxinfo_get
gfxinfo_reset
home
hswipe
list_packages
menu
monkey
packages_info
set_airplane_mode
start_action
start_activity
start_app
tap
vswipe
###Markdown
Capturing the main useful actions, for example: - ensure we set AIRPLANE_MODE before measuring scheduler energy - provide simple support for input actions (relative swipes)
###Code
# logging.getLogger().setLevel(logging.DEBUG)
# Example (use tab to complete)
System.
System.menu(target)
System.back(target)
youtube_apk = System.list_packages(target, 'YouTube')
if youtube_apk:
System.start_app(target, youtube_apk[0])
logging.getLogger().setLevel(logging.INFO)
###Output
_____no_output_____
###Markdown
Screen control APIs
###Code
from android import Screen
print "Supported functions:"
for f in dir(Screen):
if "__" in f:
continue
print " ", f
#logging.getLogger().setLevel(logging.DEBUG)
# Example (use TAB to complete)
Screen.
Screen.set_brightness(target, auto=False, percent=100)
Screen.set_orientation(target, auto=False, portrait=False)
# logging.getLogger().setLevel(logging.INFO)
###Output
_____no_output_____
###Markdown
Workloads Execution A simple workload class makes it easy to add a wrapper for the execution of a specific Android application.**NOTE:** To keep things simple, LISA does not provide APK installation support.*All the exposed APIs assume that the required packages are already installed in the target. Whenever a package is missing, LISA reports that and it's up to the user to install it before using it.*A wrapper class usually requires specifying:- a package name which will be used to verify if the APK is available in the target- a run method which usually exploits the other Android APIs to define a **reproducible execution** of the specified workload A reproducible experiment should take care of:- setting up wireless **connection status**- setting up **screen orientation and backlight** level- properly collecting **energy measurements** across the relevant part of the experiment- possibly collecting **frames statistics** whenever available Example of YouTube integration Here is an example wrapper which allows playing a YouTube video for a specified number of seconds:https://github.com/ARM-software/lisa/blob/master/libs/utils/android/workloads/youtube.py Example usage of the Workload API
###Code
# logging.getLogger().setLevel(logging.DEBUG)
from android import Workload
# Get the list of available workloads
wloads = Workload(te)
wloads.availables(target)
yt = Workload.get(te, name='YouTube')
# Playback big bug bunny for 15s starting from 1:20s
video_id = 'XSGBVzeBUbk'
video_url = "https://youtu.be/{}?t={}s".format(video_id, 80)
# Play video and measure energy consumption
results = yt.run(te.res_dir,
video_url, video_duration_s=16,
collect='energy')
results
framestats = results[0]
!cat $framestats
###Output
Applications Graphics Acceleration Info:
Uptime: 135646296 Realtime: 248599085
** Graphics info for pid 23414 [com.google.android.youtube] **
Stats since: 135590611534135ns
Total frames rendered: 342
Janky frames: 59 (17.25%)
50th percentile: 8ms
90th percentile: 21ms
95th percentile: 40ms
99th percentile: 125ms
Number Missed Vsync: 21
Number High input latency: 0
Number Slow UI thread: 27
Number Slow bitmap uploads: 2
Number Slow issue draw commands: 28
HISTOGRAM: 5ms=70 6ms=32 7ms=35 8ms=38 9ms=31 10ms=21 11ms=21 12ms=14 13ms=10 14ms=7 15ms=3 16ms=4 17ms=3 18ms=3 19ms=7 20ms=4 21ms=7 22ms=0 23ms=3 24ms=1 25ms=2 26ms=2 27ms=1 28ms=0 29ms=2 30ms=0 31ms=0 32ms=0 34ms=2 36ms=0 38ms=0 40ms=2 42ms=0 44ms=0 46ms=2 48ms=1 53ms=2 57ms=2 61ms=0 65ms=1 69ms=0 73ms=0 77ms=2 81ms=0 85ms=2 89ms=0 93ms=0 97ms=0 101ms=0 105ms=0 109ms=0 113ms=0 117ms=0 121ms=1 125ms=1 129ms=0 133ms=0 150ms=2 200ms=0 250ms=1 300ms=0 350ms=0 400ms=0 450ms=0 500ms=0 550ms=0 600ms=0 650ms=0 700ms=0 750ms=0 800ms=0 850ms=0 900ms=0 950ms=0 1000ms=0 1050ms=0 1100ms=0 1150ms=0 1200ms=0 1250ms=0 1300ms=0 1350ms=0 1400ms=0 1450ms=0 1500ms=0 1550ms=0 1600ms=0 1650ms=0 1700ms=0 1750ms=0 1800ms=0 1850ms=0 1900ms=0 1950ms=0 2000ms=0 2050ms=0 2100ms=0 2150ms=0 2200ms=0 2250ms=0 2300ms=0 2350ms=0 2400ms=0 2450ms=0 2500ms=0 2550ms=0 2600ms=0 2650ms=0 2700ms=0 2750ms=0 2800ms=0 2850ms=0 2900ms=0 2950ms=0 3000ms=0 3050ms=0 3100ms=0 3150ms=0 3200ms=0 3250ms=0 3300ms=0 3350ms=0 3400ms=0 3450ms=0 3500ms=0 3550ms=0 3600ms=0 3650ms=0 3700ms=0 3750ms=0 3800ms=0 3850ms=0 3900ms=0 3950ms=0 4000ms=0 4050ms=0 4100ms=0 4150ms=0 4200ms=0 4250ms=0 4300ms=0 4350ms=0 4400ms=0 4450ms=0 4500ms=0 4550ms=0 4600ms=0 4650ms=0 4700ms=0 4750ms=0 4800ms=0 4850ms=0 4900ms=0 4950ms=0
Caches:
Current memory usage / total memory usage (bytes):
TextureCache 3484348 / 58720256
LayerCache 0 / 33554432 (numLayers = 0)
Layers total 0 (numLayers = 0)
RenderBufferCache 0 / 8388608
GradientCache 16384 / 1048576
PathCache 0 / 16777216
TessellationCache 2232 / 1048576
TextDropShadowCache 0 / 6291456
PatchCache 64 / 131072
FontRenderer A8 1048576 / 1048576
FontRenderer RGBA 0 / 0
FontRenderer total 1048576 / 1048576
Other:
FboCache 0 / 0
Total memory usage:
4551604 bytes, 4.34 MB
Pipeline=FrameBuilder
Profile data in ms:
com.google.android.youtube/com.google.android.apps.youtube.app.WatchWhileActivity/android.view.ViewRootImpl@74c8306 (visibility=0)
View hierarchy:
com.google.android.youtube/com.google.android.apps.youtube.app.WatchWhileActivity/android.view.ViewRootImpl@74c8306
275 views, 271.73 kB of display lists
Total ViewRootImpl: 1
Total Views: 275
Total DisplayList: 271.73 kB
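###Markdown
As described above, such a wrapper boils down to a package name plus a run() method. The following cell is only a simplified, illustrative sketch of that structure (the class name is made up and this is not the actual LISA implementation, which lives in the linked youtube.py); it only reuses the emeter reset()/report() calls already shown in the Energy Meters section:
###Code
from time import sleep

class YouTubeSketch(object):
    """Illustrative wrapper skeleton: a package name plus a run() method."""
    # Package name used to verify the APK is installed on the target
    package = 'com.google.android.youtube'

    def __init__(self, test_env):
        self.te = test_env

    def run(self, out_dir, video_url, video_duration_s):
        # The real wrapper would also set screen/airplane-mode state and start
        # playback via an Android intent before sampling energy.
        self.te.emeter.reset()
        sleep(video_duration_s)
        return self.te.emeter.report(out_dir)
###Output
_____no_output_____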
###Markdown
Benchmarks Android benchmarks can be integrated as standalone Notebooks, like for example what we provide for PCMark: https://github.com/ARM-software/lisa/blob/master/ipynb/android/benchmarks/Android_PCMark.ipynb Alternatively, we are adding other benchmarks as predefined Android workloads. UiBench support Here is an example of the UiBench workload which can run a specified number of tests:
###Code
from android import Workload
ui = Workload.get(te, name='UiBench')
# Run the UiBench GlTextureView test and measure energy consumption
results = ui.run(te.res_dir,
ui.test_GlTextureView,
duration_s=5,
collect='energy')
results
framestats = results[0]
!cat $framestats
###Output
Applications Graphics Acceleration Info:
Uptime: 135665445 Realtime: 248618234
** Graphics info for pid 23836 [com.android.test.uibench] **
Stats since: 135653517039151ns
Total frames rendered: 620
Janky frames: 580 (93.55%)
50th percentile: 19ms
90th percentile: 21ms
95th percentile: 22ms
99th percentile: 26ms
Number Missed Vsync: 2
Number High input latency: 0
Number Slow UI thread: 3
Number Slow bitmap uploads: 1
Number Slow issue draw commands: 574
HISTOGRAM: 5ms=11 6ms=2 7ms=3 8ms=1 9ms=4 10ms=2 11ms=1 12ms=1 13ms=3 14ms=3 15ms=5 16ms=8 17ms=45 18ms=109 19ms=192 20ms=147 21ms=49 22ms=14 23ms=8 24ms=3 25ms=1 26ms=3 27ms=0 28ms=1 29ms=2 30ms=1 31ms=0 32ms=0 34ms=0 36ms=0 38ms=0 40ms=0 42ms=0 44ms=0 46ms=0 48ms=1 53ms=0 57ms=0 61ms=0 65ms=0 69ms=0 73ms=0 77ms=0 81ms=0 85ms=0 89ms=0 93ms=0 97ms=0 101ms=0 105ms=0 109ms=0 113ms=0 117ms=0 121ms=0 125ms=0 129ms=0 133ms=0 150ms=0 200ms=0 250ms=0 300ms=0 350ms=0 400ms=0 450ms=0 500ms=0 550ms=0 600ms=0 650ms=0 700ms=0 750ms=0 800ms=0 850ms=0 900ms=0 950ms=0 1000ms=0 1050ms=0 1100ms=0 1150ms=0 1200ms=0 1250ms=0 1300ms=0 1350ms=0 1400ms=0 1450ms=0 1500ms=0 1550ms=0 1600ms=0 1650ms=0 1700ms=0 1750ms=0 1800ms=0 1850ms=0 1900ms=0 1950ms=0 2000ms=0 2050ms=0 2100ms=0 2150ms=0 2200ms=0 2250ms=0 2300ms=0 2350ms=0 2400ms=0 2450ms=0 2500ms=0 2550ms=0 2600ms=0 2650ms=0 2700ms=0 2750ms=0 2800ms=0 2850ms=0 2900ms=0 2950ms=0 3000ms=0 3050ms=0 3100ms=0 3150ms=0 3200ms=0 3250ms=0 3300ms=0 3350ms=0 3400ms=0 3450ms=0 3500ms=0 3550ms=0 3600ms=0 3650ms=0 3700ms=0 3750ms=0 3800ms=0 3850ms=0 3900ms=0 3950ms=0 4000ms=0 4050ms=0 4100ms=0 4150ms=0 4200ms=0 4250ms=0 4300ms=0 4350ms=0 4400ms=0 4450ms=0 4500ms=0 4550ms=0 4600ms=0 4650ms=0 4700ms=0 4750ms=0 4800ms=0 4850ms=0 4900ms=0 4950ms=0
Caches:
Current memory usage / total memory usage (bytes):
TextureCache 0 / 58720256
LayerCache 0 / 33554432 (numLayers = 0)
Layer size 1080x1584; isTextureLayer()=1; texid=4 fbo=0; refs=1
Layers total 6842880 (numLayers = 1)
RenderBufferCache 0 / 8388608
GradientCache 0 / 1048576
PathCache 0 / 16777216
TessellationCache 0 / 1048576
TextDropShadowCache 0 / 6291456
PatchCache 0 / 131072
FontRenderer A8 1048576 / 1048576
FontRenderer RGBA 0 / 0
FontRenderer total 1048576 / 1048576
Other:
FboCache 0 / 0
Total memory usage:
7891456 bytes, 7.53 MB
Pipeline=FrameBuilder
Profile data in ms:
com.android.test.uibench/com.android.test.uibench.MainActivity/android.view.ViewRootImpl@abf726f (visibility=8)
com.android.test.uibench/com.android.test.uibench.GlTextureViewActivity/android.view.ViewRootImpl@31bc075 (visibility=0)
View hierarchy:
com.android.test.uibench/com.android.test.uibench.MainActivity/android.view.ViewRootImpl@abf726f
24 views, 23.25 kB of display lists
com.android.test.uibench/com.android.test.uibench.GlTextureViewActivity/android.view.ViewRootImpl@31bc075
14 views, 17.86 kB of display lists
Total ViewRootImpl: 2
Total Views: 38
Total DisplayList: 41.11 kB
###Markdown
Improved Trace Analysis support The Trace module is a wrapper around the TRAPpy library which has been updated to:- support parsing of the **systrace** file format, which requires catapult locally installed: https://github.com/catapult-project/catapult - support parsing and DataFrame generation for **custom events** Create an example trace **NOTE:** the cells in this section are required just to create a trace file to be used by the following sections
###Code
# The following exanples uses an HiKey board
ADB_DEVICE = '607A87C400055E6E'
# logging.getLogger().setLevel(logging.DEBUG)
# Unified configuration dictionary
my_conf = {
# Tools required
"tools" : ['rt-app', 'trace-cmd'],
# RTApp calibration
#"modules" : ['cpufreq'],
"rtapp-calib" : {
"0": 254, "1": 252, "2": 252, "3": 251,
"4": 251, "5": 252, "6": 251, "7": 251
},
# FTrace configuration
"ftrace" : {
# Events to trace
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_wakeup_tracking",
"sched_stat_wait",
"sched_overutilized",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_tune_config",
"sched_tune_filter",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff",
"cpu_capacity",
"cpu_frequency",
"cpu_idle",
"walt_update_task_ravg",
"walt_update_history",
"walt_migration_update_sum",
],
# # Kernel functions to profile
# "functions" : [
# "pick_next_task_fair",
# "select_task_rq_fair",
# "enqueue_task_fair",
# "update_curr_fair",
# "dequeue_task_fair",
# ],
# Per-CPU buffer configuration
"buffsize" : 10 * 1024,
},
# Target platform
"platform" : 'android',
"board" : 'hikey',
"device" : ADB_DEVICE,
"results_dir" : "ReleaseNotes_v16.09",
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
target = te.target
from wlgen import RTA,Ramp
# Let's run a simple RAMP task
rta = RTA(target, 'ramp')
rta.conf(
kind='profile',
params = {
'ramp' : Ramp().get()
}
);
te.ftrace.start()
target.execute("echo 'my_marker: label=START' > /sys/kernel/debug/tracing/trace_marker",
as_root=True)
rta.run(out_dir=te.res_dir)
target.execute("echo 'my_marker: label=STOP' > /sys/kernel/debug/tracing/trace_marker",
as_root=True)
te.ftrace.stop()
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
###Output
2016-09-23 18:58:00,939 INFO : WlGen - Workload execution START:
2016-09-23 18:58:00,939 INFO : WlGen - /data/local/tmp/bin/rt-app /data/local/tmp/devlib-target/ramp_00.json 2>&1
###Markdown
DataFrame namespace
###Code
from trace import Trace
events_to_parse = my_conf['ftrace']['events']
events_to_parse += ['my_marker']
trace = Trace(te.platform, trace_file, events=events_to_parse)
trace.available_events
# Use TAB to complete
trace.data_frame.
rt_tasks = trace.data_frame.rt_tasks()
rt_tasks.head()
lat_df = trace.data_frame.latency_df('ramp')
lat_df.head()
custom_df = trace.data_frame.trace_event('my_marker')
custom_df
ctxsw_df = trace.data_frame.trace_event('sched_switch')
ctxsw_df.head()
###Output
_____no_output_____
###Markdown
Analysis namespace
###Code
# Use TAB to complete
trace.analysis.
trace.analysis.tasks.plotTasks(tasks='ramp',
signals=['util_avg', 'boosted_util',
'sched_overutilized', 'residencies'])
lat_data = trace.analysis.latency.plotLatency('ramp')
lat_data.T
trace.analysis.frequency.plotClusterFrequencies()
trace.analysis.frequency.plotClusterFrequencyResidency(pct=True, active=True)
trace.analysis.frequency.plotClusterFrequencyResidency(pct=True, active=False)
rtapp_df = trace.data_frame.rtapp_tasks()
rtapp_df
for task in rtapp_df.index.tolist():
trace.analysis.perf.plotRTAppPerf(task)
ramp_df = trace.data_frame.rtapp_samples('ramp')
ramp_df.head()
rt_tasks = trace.data_frame.rt_tasks()
rt_tasks.head()
###Output
_____no_output_____ |
docs/Jupyter/streamlines.ipynb | ###Markdown
Streamlines Preparing Notebook
###Code
# This tutorial demonstrates VCS streamline support.
# We show randomly seeded and evenly spaced streamlines.
#
import warnings
warnings.filterwarnings('ignore')
import vcs
import cdms2
# Download the sample data if needed
# vcs.download_sample_data_files()
# read clt.nc
f=cdms2.open(vcs.sample_data+"/clt.nc")
# read two variables
u = f("u")
v = f("v")
# initialize vcs
x=vcs.init(bg=True)
###Output
_____no_output_____
###Markdown
Controlling Streamline Graphic Methods
###Code
# create the streamline graphics method
gm = x.createstreamline()
# we set parameters for randomly seeded streamlines
gm.evenlyspaced = False # only available on releases after 2.10 or on the nightly packages.
# streamlines are colored by vector magnitude
gm.coloredbyvector = True
# We want 10 glyphs(arrows) per streamline
gm.numberofglyphs = 10
gm.filledglyph = True
# we place 400 random seeds in a circle that covers the data. This means fewer seeds will be inside the data.
# The number of seeds inside the data will result in streamlines.
gm.numberofseeds = 400
# use the robinson projection for the data.
p = x.createprojection()
p.type = 'robinson'
gm.projection = p
# we plot randomly seeded streamlines
x.plot(u, v, gm, bg=1)
# we plot evenly spaced streamlines
x.clear()
gm.evenlyspaced = True # only available only on releases > 2.10 or on the nightly packages
# We want the streamline to be about one cell apart from each other
gm.separatingdistance = 1
# The seed for the first streamline. All other seeds are generated automatically
gm.startseed = [0, 0, 0]
# create an evenly spaced streamline plot
x.plot(u, v, gm, bg=1)
# we plot randomly seeded streamlines with a red color map
x.clear()
#create a red colormap with low values mapped to low opacity
cmap = x.createcolormap()
for i in range(256):
cmap.setcolorcell(i,100.,0,0,i/2.55)
x.setcolormap(cmap)
gm.evenlyspaced=False # attribute available only on releases > 2.10 or on the nightly packages
x.plot(u, v, gm, bg=1)
###Output
_____no_output_____ |
EmployeeSQL/employee_sql.ipynb | ###Markdown
Engineering Steps 1. Create a connection to the Postgres server. 2. Use the Postgres server to retrieve the three tables required. 3. Clean the data frames by dropping the unwanted columns. 4. Merge the three data frames to get one dataframe with all the required information. 5. Group the data by department name and create a new data frame with it. 6. Create bar graphs comparing average department salaries.
###Code
# Imports
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import os as os
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, inspect
# Hide warning messages in notebook.
import warnings
warnings.filterwarnings('ignore')
# Uncomment lines 2-4 and comment line 5 if you want to enter the Postgres password manually.
#import getpass
#p = getpass.getpass()
#p="GoWild51"
from password import password as pwd
# This method will check for the existance of the output folder for the graphs.
# This method will not create a multi level folder structure.
# If needed, it will create the output folder.
# output_folder (String) - The name of the output folder.
def create_output_folders(output_folder):
try:
if not os.path.exists(output_folder):
os.makedirs(output_folder)
print (f"Creating folder {output_folder}")
else:
print (f"Folder \"{output_folder}\" already exists.")
except Exception as ex:
print(f"{ex}")
print (f"Folder \"{output_folder}\" not created.")
# Connect to the Postres database on the local machine.
engine = create_engine(f'postgresql://postgres:{pwd}@localhost:5432/EmployeeSQL')
connection = engine.connect()
# Check which tables are in the db.
inspector = inspect(engine)
inspector.get_table_names()
# Read each table and make a dataframe for each table.
df_departments = pd.read_sql_table("departments", con=engine)
df_dept_emp = pd.read_sql_table("dept_emp", con=engine)
df_salaries = pd.read_sql_table("salaries", con=engine)
# Drop unwanted columns from the tables.
df_dept_emp = df_dept_emp.drop(['from_date', 'to_date'], axis=1)
df_salaries = df_salaries.drop(['from_date', 'to_date'], axis=1)
print(f"{df_salaries.head(2)}")
print(f"{df_dept_emp.head(2)}")
print(f"{df_departments.head(2)}")
# Merge employee with their salaries.
df_merged = pd.merge(df_dept_emp, df_salaries, on="emp_no")
df_merged.head(2)
# Merge in department names.
df_merged = pd.merge(df_merged, df_departments, on="dept_no")
df_merged.head(2)
# Group by department name and get the salary mean for each.
merged_group = df_merged.groupby(["dept_name"])["salary"].mean()
# Convert to DataFrame
df = pd.DataFrame(merged_group)
df.reset_index(inplace=True)
# Extract the information needed for graphing.
deps = df['dept_name'].count()
dep_list = df['dept_name'].tolist()
dep_salary = df['salary'].tolist()
# Create the output folders if they do not exist.
create_output_folders('output')
# Make a horizontal bar chart comparing salaries per department
fig, ax = plt.subplots(figsize=(10,6))
ypos = range(1, deps+1)
ax.barh(ypos, dep_salary[::-1])
ax.set_xlabel("Dollars", fontsize=18)
ax.set_ylabel("Departments", fontsize=18)
ax.set_yticks(ypos)
ax.set_yticklabels(dep_list[::-1], fontsize=12)
ax.set_title("Average Salary by Department", fontsize=18)
fig.tight_layout()
plt.savefig('output/MeanSalaryPerDept_hor.png')
plt.show()
# Just for fun, make a vertical bar chart too.
y_pos = np.arange(deps)
fig, ax = plt.subplots(figsize=(10,6))
plt.bar(y_pos, dep_salary, align='center', alpha=0.5)
plt.xticks(y_pos, dep_list, rotation= 45, horizontalalignment='right', fontsize=12)
plt.ylabel('Dollars', fontsize=18)
plt.title('Average Salary by Department', fontsize=18)
plt.yticks(fontsize=12)
plt.savefig('output/MeanSalaryPerDept_vert.png')
plt.show()
###Output
_____no_output_____ |
Python/Mutable vs Immutable Objects.ipynb | ###Markdown
Introduction (Objects, Values, and Types)All the data in a Python code is represented by **objects** or by **relations** between objects. Every object has an *identity*, a *type*, and a *value*.1. **Identity**An object’s identity never changes once it has been created; you may think of it as the object’s address in memory.The *is* operator compares the identity of two objects; the *id()* function returns an integer representing its identity.
###Code
a = [1, 2, 3]
b = 3
print(f"The id of [[a]]: {id(a)}")
print(f"The id of [[b]]: {id(b)}")
a is b
###Output
The id of [[a]]: 1731029395400
The id of [[b]]: 140704644413520
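###Markdown
A short aside implied by the paragraph above: *is* compares identities, while *==* compares values, so two objects can be equal without being the same object (illustrative values):
###Code
x = [1, 2, 3]
y = [1, 2, 3]
print(x == y)  # True  - the two lists hold equal values
print(x is y)  # False - they are two distinct objects in memory
###Output
_____no_output_____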
###Markdown
2. **Type**An object’s type defines the **possible values** and **operations** (e.g. “does it have a length?”) that type supports.The *type()* function returns the type of an object. An object type is **unchangeable** like the identity
###Code
type(a)
###Output
_____no_output_____
###Markdown
3. **Value**The value of some objects can be changed or not. Objects whose value can change are said to be **mutable**; objects whose value is unchangeable once they are created are called **immutable**. The mutability of an object is determined by its type.Some objects contain references to other objects, these objects are called **containers**. Some examples of containers are a *tuple, list, and dictionary*. The value of an immutable container that contains a reference to a mutable object can be changed if that mutable object is changed. However, the container is still considered immutable because when we talk about the mutability of a container only the identities of the contained objects are implied. Mutable and Immutable Data Types in Python1. **Mutable** data types: *list, dictionary, set, bytearray and user-defined classes*.2. **Immutable** data types: *int, float, decimal, complex, bool, string, tuple, range, frozenset, bytes*
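A quick way to see the difference for two of the other pairs listed above is to attempt an in-place change on each (illustrative values):
###Code
# set (mutable) vs frozenset (immutable)
s = {1, 2}
s.add(3)               # fine: sets can be changed in place
print(s)
fs = frozenset({1, 2})
# fs.add(3)            # AttributeError: frozenset has no attribute 'add'

# bytearray (mutable) vs bytes (immutable)
ba = bytearray(b"abc")
ba[0] = ord("A")       # fine: bytearrays support item assignment
print(ba)
b = b"abc"
# b[0] = ord("A")      # TypeError: bytes do not support item assignment
###Output
_____no_output_____
###Markdown
The next cell contrasts the most common pair, lists and tuples, in more detail: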
###Code
# From both data types, we can access elements by index and we can iterate over them.
# The main difference is that a tuple cannot be changed once it’s defined.
numbers_list = [1, 2, 3] # a list
numbers_tuple = (1, 2, 3) # a tuple
# We can see that when we try to change the tuple we get an error,
# but we don’t have that problem with the list.
numbers_list[0] = 100
print(numbers_list)
try:
    numbers_tuple[0] = 100 # raises TypeError: tuples do not support item assignment
except TypeError as err:
    print(err)
old_id_list = id(numbers_list)
old_id_tuple = id(numbers_tuple)
numbers_list += [4, 5, 6] # expand a list
numbers_tuple += (4, 5, 6) # expand a tuple
# We can see that the list identity is unchanged, while the tuple identity has changed.
# This means that we extended our list in place, but created a completely new tuple.
# For repeated extension, lists are therefore cheaper than tuples, which must be rebuilt each time.
print(f"List Value: {numbers_list}, its ID: {old_id_list} --> {id(numbers_list)}")
print(f"Tuple Value: {numbers_tuple}, its ID: {old_id_tuple} --> {id(old_id_tuple)}")
# Second example with an int; other immutable types (string, bool, ...) behave the same way
a = 3 # Once it is initialized, its value cannot be changed
old_id = id(a)
a = a + 3 # create a new object to represent [a + 3]
print(f"Value: {a}, its ID: {old_id} --> {id(a)}")
###Output
Value: 6, its ID: 140704644413520 --> 140704644413616
###Markdown
Copying Mutable Objects by ReferenceWe can see that both variable names have the same identity, meaning that they reference the same object in computer memory.
###Code
values = [4, 5, 6]
values2 = values
print(f"[1] The value: [{values} -> {values2}], their corresponding IDs: {id(values)} -> {id(values2)}")
values2.append(7)
print(f"[2] The value: [{values} -> {values2}], their corresponding IDs: {id(values)} -> {id(values2)}")
###Output
[1] The value: [[4, 5, 6] -> [4, 5, 6]], their corresponding IDs: 1731038586120 -> 1731038586120
[2] The value: [[4, 5, 6, 7] -> [4, 5, 6, 7]], their corresponding IDs: 1731038586120 -> 1731038586120
###Markdown
So, when we change the values through the second variable, the values seen through the first one change as well. This happens only with mutable objects. Copying Immutable ObjectsEvery time we try to update the value of an immutable object, a new object is created instead. That is why updating the first string does not change the value of the second.
###Code
import copy
text = "Data Science"
_text = text # both of them refer to the same memory block, their IDs are identical
print(f"[1] The value: [{text} -> {_text}], their corresponding IDs: {id(text)} -> {id(_text)}")
_text_copy = copy.copy(text) # copying an immutable string returns the same object, so the IDs are still identical
print(f"[2] The value: [{text} -> {_text_copy}], their corresponding IDs: {id(text)} -> {id(_text_copy)}")
# The += creates a new string object, so the id of [text] changes.
# The original object is untouched; [_text] and [_text_copy] still point to it.
text += " with Python"
print(f"[3] The value: [{text} -> {_text_copy} --> {_text}],\n"
f"\t\ttheir corresponding IDs: {id(text)} -> {id(_text_copy)} -->{id(_text)}")
###Output
[1] The value: [Data Science -> Data Science], their corresponding IDs: 1731029037296 -> 1731029037296
[2] The value: [Data Science -> Data Science], their corresponding IDs: 1731029037296 -> 1731029037296
[3] The value: [Data Science with Python -> Data Science --> Data Science],
their corresponding IDs: 1731038459264 -> 1731029037296 -->1731029037296
###Markdown
Immutable Object Changing Its ValueAs we said before, the value of an immutable container (the tuple *person*) that contains a reference to a mutable object (the list *skills*) can change if that mutable object is changed. Let’s see an example of this.
###Code
skills = ["Programming", "Machine Learning", "Statistics"]
person = (129392130, skills) # an immutable object containing the mutable object [skills]
print(f"[1] The person: {person}, \n its type is: {type(person)}, its ID: {id(person)}")
skills[2] = "Maths"
print(f"[2] The person: {person}, \n its type is: {type(person)}, its ID: {id(person)}")
skills += ["Maths"]
print(f"[3] The person: {person}, \n its type is: {type(person)}, its ID: {id(person)}")
person[1] += ["Maths"] # Cannot compile because 'tuple' object does not support item assignment.
###Output
[1] The person: (129392130, ['Programming', 'Machine Learning', 'Statistics']),
its type is: <class 'tuple'>, its ID: 1731029475592
[2] The person: (129392130, ['Programming', 'Machine Learning', 'Maths']),
its type is: <class 'tuple'>, its ID: 1731029475592
[3] The person: (129392130, ['Programming', 'Machine Learning', 'Maths', 'Maths']),
its type is: <class 'tuple'>, its ID: 1731029475592
###Markdown
The object *person* is still considered immutable because when we talk about the mutability of a container only the identities of the contained objects are implied. However, if our immutable object contains only immutable objects, we cannot change their value; a new object is created instead whenever we try to update the value of an immutable object. Let’s see an example.
###Code
unique_identifier = 42
age = 24
skills = ("Python", "pandas", "scikit-learn")
info = (unique_identifier, age, skills)
print(id(unique_identifier))
print(id(age))
print(info)
unique_identifier = 50 # create a new object
age += 1 # create a new object
skills += ("machine learning", "deep learning") # create a new object
print(id(unique_identifier))
print(id(age))
print(info) # info is unchanged: the names above were rebound to new objects, while the tuple still references the original ones
###Output
140704644414768
140704644414192
(42, 24, ('Python', 'pandas', 'scikit-learn'))
140704644415024
140704644414224
(42, 24, ('Python', 'pandas', 'scikit-learn'))
###Markdown
Mutable objects as arguments to functions or classesWhen we pass a mutable object to a function or to a class initializer, any update made to it inside the function or the object is visible outside as well. To avoid this, we can call the *copy()* method on the mutable object to create a new object with identical values, and then pass that copy to the function or class.
###Code
from typing import List
def divide_and_average(var: List):
for i in range(len(var)):
var[i] /= 2
avg = sum(var)/len(var)
return avg
my_list = [1, 2, 3]
print(divide_and_average(my_list))
print(my_list) # We can see that the value of [my_list] has been updated
from typing import List
class myObject:
def __init__(self, var: List):
self.data = var
def divide_and_average(self):
for i in range(len(self.data)):
self.data[i] /= 2
avg = sum(self.data)/len(self.data)
return avg
my_list = [1, 2, 3]
_my_list = my_list.copy()
data = myObject(_my_list) # pass a copy of my_list, they are different objects
print(f"Their ids: {id(my_list)} --> {id(_my_list)}")
print(data.divide_and_average())
print(my_list) # We can see that the value of [my_list] has NOT been changed, because a copy was passed
###Output
Their ids: 1731029383624 --> 1731038610568
1.0
[1, 2, 3]
###Markdown
Default Arguments in Functions/ClassA common practice when defining a function is to assign default values to its arguments. This allows us to include new parameters without changing the downstream code, and it also allows us to call the function with fewer arguments, making it easier to use. Let's see, for example, a function that increases the value of the elements of a list. The code would look like:
###Code
def increase_values(var1=[1, 1], value=0):
value += 1
var1[0] += value
var1[1] += value
return var1, value
print(increase_values())
print(increase_values())
###Output
([2, 2], 1)
([3, 3], 1)
###Markdown
The first time, it prints ([2, 2], 1) as expected, but the second time it prints ([3, 3], 1). It means that the default argument of the function is changing every time we run it. When we run the script, Python evaluates the function **definition only once** and then creates **the default list and the default value**. Because lists are mutable, every time we call the function, we change its default argument. However, value is immutable, and it remains the same for all subsequent function calls. The next logical question is, how can we prevent this from happening? The short answer is to use **immutable types** as default arguments for functions. We could have used *None*, for instance:
###Code
def increase_values(var1=None, value=0):
if var1 is None:
var1 = [1, 1]
value += 1
var1[0] += value
var1[1] += value
return var1, value
print(increase_values())
print(increase_values())
###Output
([2, 2], 1)
([2, 2], 1)
###Markdown
**Important usage: cache.** Of course, the decision always depends on the use case; we may actually want the default value to carry over from one call to the next. Imagine that we need to perform a computationally expensive calculation, but we don't want to run the function twice with the same input, so we keep a cache of values instead. We could do the following:
###Code
def calculate(var1, var2, cache={}):
try:
value = cache[var1, var2]
except KeyError:
value = expensive_computation(var1, var2)
cache[var1, var2] = value
return value
###Output
_____no_output_____
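###Markdown
To see the cache in action, here is a minimal sketch. No expensive_computation is defined above, so the stand-in below is purely illustrative; any costly pure function of two arguments would do.
###Code
import time
def expensive_computation(var1, var2):
    # illustrative stand-in for a genuinely costly pure function
    time.sleep(0.5)
    return var1 ** var2
start = time.time()
calculate(2, 10) # first call: computed and stored in the cache
first = time.time() - start
start = time.time()
calculate(2, 10) # repeated call: answered from the cache, no recomputation
second = time.time() - start
print(f"first call: {first:.2f}s, second call: {second:.2f}s")
###Output
_____no_output_____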
###Markdown
When we run calculate for the first time, there will be nothing stored in the **cache** dictionary. As we execute the function more times, **cache** keeps changing, accumulating the new values. If we **repeat the arguments** at some point, they will already be in the **cache** dictionary and the stored value is returned. Notice that we are leveraging exception handling to avoid checking explicitly whether the combination of values already exists in memory. Our immutable objectsWe can define our own immutable class by reimplementing the "\__setattr\__" method. As soon as we try to assign a value to an attribute of such an object, a TypeError will be raised. Even within the class itself, assigning values to attributes goes through the "\__setattr\__" method; to bypass it, we need to use the super() object:
###Code
class MyImmutable:
def __init__(self, var1, var2):
super().__setattr__('var1', var1)
super().__setattr__('var2', var2)
def __setattr__(self, key, value):
raise TypeError('MyImmutable cannot be modified after instantiation')
def __str__(self):
return 'MyImmutable var1: {}, var2: {}'.format(self.var1, self.var2)
my_immutable = MyImmutable(1, 2)
print(my_immutable)
# If we instantiate the class and then try to assign a value to one of its attributes, an error appears:
try:
    my_immutable.var1 = 2
except TypeError as err:
    print(err)
###Output
_____no_output_____ |
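###Markdown
Worth noting: the standard library already offers ready-made immutability, for instance collections.namedtuple or a dataclass declared with frozen=True. A minimal sketch:
###Code
from dataclasses import dataclass, FrozenInstanceError
@dataclass(frozen=True)
class Point:
    x: int
    y: int
p = Point(1, 2)
print(p)
try:
    p.x = 10 # frozen dataclasses raise on attribute assignment
except FrozenInstanceError as err:
    print(err)
###Output
_____no_output_____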
study/Iterator.ipynb | ###Markdown
Sources```original (English) : https://nvie.com/posts/iterators-vs-generators/ translation (Korean) : https://mingrammer.com/translation-iterators-vs-generators/ ``` - Container- Iterable- Iterator- Generator- Generator expression- {list, set, dict} comprehension  ContainerAnything that can hold objects is a container: a data structure that holds elements and supports membership tests. It lives in memory and keeps all of its element values in memory.
###Code
ds = [list, set, tuple, dict, str]
from collections.abc import Container # collections.Container was removed in Python 3.10
for d in ds:
print(issubclass(d, Container))
class CT:
def __contains__(self, value):
return True
issubclass(CT, Container)
###Output
_____no_output_____
###Markdown
멤버쉽 테스트
###Code
assert 1 in [1, 2, 3] # lists
assert 1 in (1, 2, 3) # tuples
assert 1 in {1, 2, 3} # sets
###Output
_____no_output_____
###Markdown
A dictionary checks membership against its keys
###Code
d = {1: 'foo', 2: 'bar', 3: 'qux'}
assert 1 in d
###Output
_____no_output_____
###Markdown
For strings, membership checks whether a substring is contained
###Code
s = 'foobar'
assert 'b' in s
assert 'foo' in s # a string "contains" all of its substrings
assert 'ttttt' not in s # 'ttttt' is not a substring of 'foobar'
###Output
_____no_output_____
###Markdown
Common container behaviour: in (not in)
###Code
1 in [1, 2, 3]
'a' in {'a': 1, 'b': 2}
###Output
_____no_output_____
###Markdown
len
###Code
len({1, 2, 3, 4})
len({'a': 1})
###Output
_____no_output_____
###Markdown
max, min
###Code
max([1, 2, 3, 4])
max("한글!@!#123123afsdfsdaf")
ord('a'), ord('c')
###Output
_____no_output_____
###Markdown
SequenceA Container whose elements have an order
###Code
from collections import Sequence
ds = [list, set, tuple, dict, str]
for d in ds:
print(d, issubclass(d, Sequence))
###Output
<class 'list'> True
<class 'set'> False
<class 'tuple'> True
<class 'dict'> False
<class 'str'> True
###Markdown
Among Containers, list, tuple, and str are also Sequences. Common sequence behaviour: concatenation (+)
###Code
[1, 2] + [3, 4]
'ab' + 'cd'
{'a'} + {'c'} # raises TypeError: sets are not sequences and do not support +
###Output
_____no_output_____
###Markdown
Repetition ( * n)
###Code
[1, 2] * 10 # [1, 2] + [1, 2] + ... + [1, 2]
###Output
_____no_output_____
###Markdown
slice
###Code
[1, 2, 3, 4][1:]
###Output
_____no_output_____
###Markdown
The index method
###Code
[10, 9, 8, 7].index(9)
###Output
_____no_output_____
###Markdown
The count method
###Code
'aaaaaaaaaaa'.count('a')
###Output
_____no_output_____
###Markdown
Iterable (repeatable)Most containers are repeatable, i.e. iterable. This means the object implements the `__iter__` method internally.
###Code
x = [1,2,3]
x.__iter__
y = iter(x)
z = iter(x)
###Output
_____no_output_____
###Markdown
Using the `__iter__` method creates an iterator.
###Code
y.__iter__
###Output
_____no_output_____
###Markdown
We can see that they appear as a list object and a list_iterator
###Code
print(type(x))
print(type(y))
###Output
<class 'list'>
<class 'list_iterator'>
###Markdown
So what is the difference between the iterable object x and the iterator y?
###Code
import collections.abc as cols
import pprint
pprint.pprint(cols.Iterable.__dict__)
pprint.pprint(cols.Iterator.__dict__)
able = set(dir(cols.Iterable))
ator = set(dir(cols.Iterator))
print(ator-able)
###Output
{'__next__'}
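###Markdown
The practical consequence of `__next__` is easy to demonstrate: an iterator hands out its items one at a time via next() and is exhausted once it raises StopIteration, while the iterable can keep producing fresh iterators. A small sketch:
###Code
x = [1, 2, 3] # iterable
it = iter(x) # iterator over x
print(next(it)) # 1
print(next(it)) # 2
print(next(it)) # 3
try:
    next(it) # the iterator is now exhausted
except StopIteration:
    print("StopIteration: nothing left")
print(list(iter(x))) # a fresh iterator starts from the beginning again
###Output
_____no_output_____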
###Markdown
IteratorEvery object that has `__next__` is an iterator: a stateful helper object that produces the next value each time next is called. That is all for iterators... what I really want to talk about is generators! Generator- Every generator is an iterator. (The converse does not hold.)- Every generator is a lazy factory. (That is, it produces values on demand, one at a time.)
###Code
class fib:
def __init__(self):
self.prev = 0
self.curr = 1
def __iter__(self):
return self
def __next__(self):
value = self.curr
self.curr += self.prev
self.prev = value
return value
f = fib()
from itertools import islice
list(islice(f, 0, 10))
###Output
_____no_output_____
###Markdown
Every function in itertools returns an iterator
###Code
def fib():
prev, curr = 0, 1
while True:
yield curr
prev, curr = curr, prev + curr
f = fib()
list(islice(f, 0, 10))
numbers = range(6)
[x * x for x in numbers]
lazy_squares = (x * x for x in numbers)
lazy_squares
next(lazy_squares)
list(lazy_squares)
###Output
_____no_output_____
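###Markdown
The outline at the top also mentions set and dict comprehensions; they follow the same pattern as the list comprehension and the generator expression above, just with different brackets. A quick sketch:
###Code
numbers = range(6)
squares_list = [n * n for n in numbers] # list comprehension (eager)
squares_set = {n * n for n in numbers} # set comprehension
squares_dict = {n: n * n for n in numbers} # dict comprehension
squares_gen = (n * n for n in numbers) # generator expression (lazy)
print(squares_list, squares_set, squares_dict, sum(squares_gen))
###Output
_____no_output_____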
###Markdown
So why does Python distinguish between iterators and generators like this?
###Code
numbers = range(1000000)
iter_ = [x * x for x in numbers]
gener_ = (x * x for x in numbers)
import sys
print(sys.getsizeof(iter_))
print(sys.getsizeof(gener_))
###Output
8697464
88
###Markdown
A fully materialised iterable (like the list above) is appropriate for data that is already in memory, or small enough that loading all of it is no burden. When dealing with large data, however, it is more efficient, both for performance and for resource management, to generate and access values piece by piece, which is why we use a generator.
###Code
%%time
result = 0
for i in iter_:
result += i**2
print(result)
%%time
result = 0
for i in gener_:
result += i**2
print(result)
###Output
199999500000333333333333300000
Wall time: 498 ms
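###Markdown
One caveat worth a quick sketch: a generator can only be consumed once. After the timing loop above, gener_ is exhausted, so iterating it again produces nothing, while the list can be reused as often as we like.
###Code
print(sum(gener_)) # 0: the generator was exhausted by the loop above
print(sum(iter_) > 0) # True: the list can be iterated again
###Output
_____no_output_____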
###Markdown
Fun Python odds and ends 1. Why am I told to write code as functions? What's wrong with plain line-by-line scripting!
###Code
N = 10000000
%%time
result = 0
for i in range(N+1):
result += i**2
print(result)
def cal(n):
result = 0
for i in range(n+1):
result += i**2
print(result)
%%time
cal(N)
###Output
333333383333335000000
Wall time: 3.3 s
###Markdown
In Python, the same code always runs faster inside a function than as line-by-line top-level code, because local variable lookups are cheaper than global ones. 2. Do you really understand round?
###Code
a = round(0.5) # 0 (Python 3 rounds ties to the nearest even number)
b = round(1.5) # 2
c = round(2.5) # 2, not 3
d = round(3.5) # 4
print(a, b, c, d)
###Output
_____no_output_____ |
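###Markdown
The results may be surprising: Python 3's round() uses banker's rounding, so ties go to the nearest even number and round(0.5) == 0 while round(2.5) == 2. If the traditional round-half-up behaviour is needed, the decimal module is one option; a small sketch:
###Code
from decimal import Decimal, ROUND_HALF_UP
def round_half_up(x):
    # round-half-up using exact decimal arithmetic
    return int(Decimal(str(x)).quantize(Decimal("1"), rounding=ROUND_HALF_UP))
print([round_half_up(x) for x in (0.5, 1.5, 2.5, 3.5)]) # [1, 2, 3, 4]
###Output
_____no_output_____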
distribution.ipynb | ###Markdown
DistributionShow values in a dataset and how often they occur. The shape (or skew) of a distribution can be a memorable way of highlighting the lack of uniformity or equality in the data.
###Code
import pandas as pd
import numpy as np
#ggplot equivalent: plotnine
from plotnine import *
#scales package equivalent: mizani
from mizani.breaks import *
from mizani.formatters import *
import utils
###Output
_____no_output_____
###Markdown
HistogramThe standard way to show a statistical distribution - keep the gaps between columns small to highlight the 'shape' of the data.
###Code
df = pd.read_csv('data/histogram.csv')
df.head()
g = (ggplot(df,aes(x='factor(Bin)',y='CountV')) + #treat bin as factor not numerical
geom_col(fill='red') + #change bar colors
theme_minimal() + #minimal theme
labs(x='',y='') + #no labels
scale_y_continuous(limits=(0,140),breaks=range(0,141,10),labels=range(0,141,10)) #y-axis ticks
)
g
vals=[]
for i in range(df.shape[0]):
for j in range(df['CountV'][i]):
vals.append(df['Bin'][i])
df2 = pd.DataFrame({'idx':range(len(vals)), 'val':vals})
df2.head()
g = (ggplot(df2,aes(x='val')) + #treat bin as factor not numerical
geom_bar(stat='bin',bins=20, fill='red') + #use geom_bar for histogram
theme_minimal() + #minimal theme
labs(x='',y='') + #no labels
scale_y_continuous(limits=(0,140),breaks=range(0,141,10),labels=range(0,141,10)) #y-axis ticks
)
g
###Output
_____no_output_____
###Markdown
Dot plotA simple way of showing the range (min/median/max) of data across multiple categories.
###Code
df = pd.read_csv('data/dot-plot.csv')
df.head()
#aggregate to get min, median and max
agg_df = df.groupby('Sub-Category').agg({'Profit':[np.min,np.median,np.max]}).reset_index()
agg_df.columns = ['Sub-Category','min','median','max']
agg_df
#melt
agg_m = agg_df.melt(id_vars='Sub-Category')
agg_m.head()
g = (ggplot(agg_m,aes(x='Sub-Category',y='value',color='variable',group='Sub-Category')) +
geom_line(color='grey') + geom_point() + theme_minimal() + coord_flip() +
scale_y_continuous(limits=(agg_m.value.min(),14000),
breaks=range(-1000,14000,2000)) + #y-axis ticks
labs(color='', x='',y=''))
g
###Output
_____no_output_____
###Markdown
Dot strip plotGood for showing individual values in a distribution, can be a problem when too many dots have the same value.
###Code
df = pd.read_csv('data/dot-strip-plot.csv')
df.head()
#custom formatter
f = utils.k_format()
g = (ggplot(df, aes(x='Month of Order Date',y='Sales')) +
geom_point(alpha=0.5,color='red',size=3) + #points with decorations
theme_minimal() + coord_flip() +
scale_y_continuous(limits=(0,45000),breaks=range(0,45000,5000),
labels=f)
)
g
###Output
_____no_output_____
###Markdown
Barcode plotLike dot strip plots, good for displaying all the data in a table; they work best when highlighting individual values.
###Code
df = pd.read_csv('data/barcode-plot.csv')
df.head()
#custom formatter
f = utils.k_format()
g = (ggplot(df, aes(x='Sub-Category',y='Avg Sales')) +
geom_jitter(alpha=0.5,color='red',size=5,
position=position_dodge(0.8),
shape=2) + #jitter with shape selection
theme_minimal() + coord_flip() +
scale_y_continuous(limits=(0,2400),breaks=range(0,2400,200),
labels=f)
)
g
###Output
_____no_output_____
###Markdown
BoxplotSummarise multiple distributions by showing the median (centre) and range of the data.
###Code
df = pd.read_csv('data/boxplot.csv')
df.head()
g = (ggplot(df, aes(x='Segment',y='Profit')) +
geom_boxplot() +
geom_jitter(position=position_dodge(1),alpha=0.3,color='red') +
theme_minimal() +
scale_y_continuous(limits=(-6000,16000),
breaks=range(-6000,16000,2000),
labels=f)
)
g
###Output
_____no_output_____
###Markdown
Violin plotSimilar to a box plot but more effective with complex distributions (data that cannot be summarised with simple average).
###Code
df = pd.read_csv('data/violin-plot.csv')
df.head()
g = (ggplot(df, aes(x=1, y='Avg Salary')) +
geom_violin(fill='red',alpha=0.5) +
geom_boxplot(width=0.05,fill='black') +
theme_minimal()
)
g
###Output
_____no_output_____
###Markdown
Population pyramidA standard way for showing the age and sex breakdown of a population distribution; effectively, back to back histograms.
###Code
df = pd.read_csv('data/population-pyramid.csv')
df['people'] = df.apply(lambda x: -x['people'] if x['sex']=='female' else x['people'], 1)
df.head()
#custom formatter
f = utils.m_format()
g = (ggplot(df, aes(x='age',y='people', fill='sex')) + #baseplot
geom_col(width=3) + #type of plot
coord_flip() + #flip coordinates
theme_minimal() + #theme
scale_x_continuous(limits=(-5,95),
breaks=range(-5,95,5),
                        labels=lambda x: [i if i>=0 else '' for i in x]) + #customize x-axis labels
scale_y_continuous(breaks=range(int(-12*1e6),int(12*1e6),int(2*1e6)), #breaks
limits=(-12*1e6, 12*1e6), #limits
labels=f) #labels
)
g
###Output
_____no_output_____
###Markdown
Cumulative curveA good way of showing how unequal a distribution is: y axis is always cumulative frequency, x axis is always a measure.
###Code
df = pd.read_csv('data/cumulative-curve.csv')
df.head()
g = (ggplot(df, aes(x='Months since First Purchase', y='Running Sum of Sales',
color='Region', group='Region')) + geom_line() +
theme_minimal() + guides(color=False) +
scale_x_continuous(breaks=range(0,50,5),
labels=lambda x: [i if i%5==0 else '' for i in x]) + #show labels by modulus
scale_y_continuous(breaks=range(0,600000,50000),
labels=lambda x: [i if i%50000==0 else '' for i in x]) + #show labels by modulus
#annotate last point
geom_point(df[df['Months since First Purchase']==47],
aes(x='Months since First Purchase', y='Running Sum of Sales')) +
geom_text(df[df['Months since First Purchase']==47],
aes(x='Months since First Purchase', y='Running Sum of Sales',
label='Running Sum of Sales'),
nudge_x=7) #nudge it a bit to the right
)
g
###Output
_____no_output_____
###Markdown
Frequency polygonsFor displaying multiple distributions of data. Like a regular line chart, best limited to a maximum of 3 or 4 datasets.
###Code
df = pd.read_csv('data/frequency-polygons.csv')
df.head()
g = (ggplot(df, aes(x='Time', y='Frequency',
color='Type', group='Type')) +
geom_line() + geom_point() +
theme_minimal() + guides(color=False) +
scale_y_continuous(breaks=range(11),
labels=lambda x: [i for i in x]) +
scale_x_continuous(breaks=range(0,1200,100),
labels=currency_format(prefix='',big_mark=',',digits=0)) #currency format for nice numbers
)
g
###Output
_____no_output_____
###Markdown
BeeswarmUse to emphasize individual points in a distribution. Points can be sized to an additional variable. Best with medium-sized datasets.
###Code
df = pd.read_csv('data/boxplot.csv')
df.head()
g = (ggplot(df, aes(x='Segment',y='Profit',size='Region')) +
geom_jitter(position=position_jitter(),alpha=0.3,color='red') +
theme_minimal() +
scale_y_continuous(limits=(-6000,16000),
breaks=range(-6000,16000,2000),
labels=utils.k_format())
)
g
###Output
_____no_output_____
###Markdown
Visualizing distributions=========================Copyright 2015 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](http://creativecommons.org/licenses/by/4.0/)
###Code
from __future__ import print_function, division
import numpy as np
import thinkstats2
import nsfg
import thinkplot
%matplotlib inline
###Output
_____no_output_____
###Markdown
Let's load up the NSFG pregnancy data.
###Code
preg = nsfg.ReadFemPreg()
preg.shape
###Output
_____no_output_____
###Markdown
And select the rows corresponding to live births.
###Code
live = preg[preg.outcome == 1]
live.shape
###Output
_____no_output_____
###Markdown
We can use `describe` to generate summary statistics.
###Code
live.prglngth.describe()
###Output
_____no_output_____
###Markdown
But there is no substitute for looking at the whole distribution, not just a summary.One way to represent a distribution is a Probability Mass Function (PMF).`thinkstats2` provides a class named `Pmf` that represents a PMF.A Pmf object contains a Python dictionary that maps from each possible value to its probability (that is, how often it appears in the dataset).`Items` returns a sorted list of values and their probabilities:
###Code
pmf = thinkstats2.Pmf(live.prglngth)
for val, prob in pmf.Items():
print(val, prob)
###Output
0 0.00010931351115
4 0.00010931351115
9 0.00010931351115
13 0.00010931351115
17 0.0002186270223
18 0.00010931351115
19 0.00010931351115
20 0.00010931351115
21 0.0002186270223
22 0.00076519457805
23 0.00010931351115
24 0.00142107564495
25 0.00032794053345
26 0.00382597289025
27 0.00032794053345
28 0.0034980323568
29 0.00229558373415
30 0.0150852645387
31 0.00295146480105
32 0.0125710537822
33 0.00535636204635
34 0.006558810669
35 0.0339965019676
36 0.0350896370791
37 0.0497376475732
38 0.066353301268
39 0.513008307827
40 0.121993878443
41 0.064167031045
42 0.0358548316572
43 0.0161783996502
44 0.0050284215129
45 0.0010931351115
46 0.00010931351115
47 0.00010931351115
48 0.00076519457805
50 0.0002186270223
###Markdown
There are some values here that are certainly errors, and some that are suspect. For now we'll take them at face value. There are several ways to visualize Pmfs.`thinkplot` provides functions to plot Pmfs and other types from `thinkstats2`.`thinkplot.Pmf` renders a Pmf as histogram (bar chart).
###Code
thinkplot.PrePlot(1)
thinkplot.Hist(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)',
ylabel='PMF',
xlim=[0, 50],
legend=False)
###Output
_____no_output_____
###Markdown
`Pmf` renders the outline of the histogram.
###Code
thinkplot.PrePlot(1)
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)',
ylabel='PMF',
xlim=[0, 50])
###Output
_____no_output_____
###Markdown
`Pdf` tries to render the Pmf with a smooth curve.
###Code
thinkplot.PrePlot(1)
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)',
ylabel='PMF',
xlim=[0, 50])
###Output
_____no_output_____
###Markdown
I started with PMFs and histograms because they are familiar, but I think they are bad for exploration.For one thing, they don't hold up well when the number of values increases.
###Code
pmf_weight = thinkstats2.Pmf(live.totalwgt_lb)
thinkplot.PrePlot(1)
thinkplot.Hist(pmf_weight)
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PMF')
pmf_weight = thinkstats2.Pmf(live.totalwgt_lb)
thinkplot.PrePlot(1)
thinkplot.Pmf(pmf_weight)
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PMF')
pmf_weight = thinkstats2.Pmf(live.totalwgt_lb)
thinkplot.PrePlot(1)
thinkplot.Pdf(pmf_weight)
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PMF')
###Output
_____no_output_____
###Markdown
Sometimes you can make the visualization better by binning the data:
###Code
def bin_and_pmf(weights, num_bins):
bins = np.linspace(0, 15.5, num_bins)
indices = np.digitize(weights, bins)
values = bins[indices]
pmf_weight = thinkstats2.Pmf(values)
thinkplot.PrePlot(1)
thinkplot.Pdf(pmf_weight)
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PMF')
bin_and_pmf(live.totalwgt_lb.dropna(), 50)
###Output
_____no_output_____
###Markdown
Binning is simple enough, but it is still a nuisance.And it is fragile. If you have too many bins, the result is noisy. Too few, you obliterate features that might be important.And if the bin boundaries don't align well with data boundaries, you can create artifacts.
###Code
bin_and_pmf(live.totalwgt_lb.dropna(), 51)
###Output
_____no_output_____
###Markdown
There must be a better way!Indeed there is. In my opinion, cumulative distribution functions (CDFs) are a better choice for data exploration.You don't have to bin the data or make any other transformation.`thinkstats2` provides a function that makes CDFs, and `thinkplot` provides a function for plotting them.
###Code
data = [1, 2, 2, 5]
pmf = thinkstats2.Pmf(data)
pmf
cdf = thinkstats2.Cdf(data)
cdf
thinkplot.PrePlot(1)
thinkplot.Cdf(cdf)
thinkplot.Config(ylabel='CDF',
xlim=[0.5, 5.5])
###Output
_____no_output_____
###Markdown
Let's see what that looks like for real data.
###Code
cdf_weight = thinkstats2.Cdf(live.totalwgt_lb)
thinkplot.PrePlot(1)
thinkplot.Cdf(cdf_weight)
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='CDF')
###Output
_____no_output_____
###Markdown
A CDF is a map from each value to its cumulative probability.You can use it to compute percentiles:
###Code
cdf_weight.Percentile(50)
###Output
_____no_output_____
###Markdown
Or if you are given a value, you can compute its percentile rank.
###Code
cdf_weight.PercentileRank(8.3)
###Output
_____no_output_____
###Markdown
Looking at the CDF, it is easy to see the range of values, the central tendency and spread, as well as the overall shape of the distribution.If there are particular values that appear often, they are visible as vertical lines. If there are ranges where no values appear, they are visible as horizontal lines.And one of the best things about CDFs is that you can plot several of them on the same axes for comparison. For example, let's see if first babies are lighter than others.
###Code
firsts = live[live.birthord == 1]
others = live[live.birthord != 1]
len(firsts), len(others)
cdf_firsts = thinkstats2.Cdf(firsts.totalwgt_lb, label='firsts')
cdf_others = thinkstats2.Cdf(others.totalwgt_lb, label='others')
thinkplot.PrePlot(2)
thinkplot.Cdfs([cdf_firsts, cdf_others])
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='CDF',
legend=True)
###Output
_____no_output_____
###Markdown
Plotting the two distributions on the same axes, we can see that the distribution for others is shifted to the right; that is, toward higher values. And we can see that the shift is close to the same over the whole distribution.Let's see how well we can make this comparison with PMFs:
###Code
pmf_firsts = thinkstats2.Pmf(firsts.totalwgt_lb, label='firsts')
pmf_others = thinkstats2.Pmf(others.totalwgt_lb, label='others')
thinkplot.PrePlot(2)
thinkplot.Pdfs([pmf_firsts, pmf_others])
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PMF')
###Output
_____no_output_____
###Markdown
With PMFs it is hard to compare distributions. And if you plot more than two PMFs on the same axes, it is likely to be a mess.Reading CDFs takes some getting used to, but it is worth it! For data exploration and visualization, CDFs are better than PMFs in almost every way.But if you really have to generate a PMF, a good option is to estimate a smoothed PDF using Kernel Density Estimation (KDE).
###Code
pdf_firsts = thinkstats2.EstimatedPdf(firsts.totalwgt_lb.dropna(), label='firsts')
pdf_others = thinkstats2.EstimatedPdf(others.totalwgt_lb.dropna(), label='others')
thinkplot.PrePlot(2)
thinkplot.Pdfs([pdf_firsts, pdf_others])
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PDF')
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis in Python Supplementary material: Explaining the `Pmf` and `Cdf` objects. To support my class, I define classes to represent probability mass functions (PMFs) and cumulative distribution functions (CDFs). These classes are based on Pandas Series, and use methods defined by Pandas and NumPy.The primary interface they provide is:* PMF forward lookup: for a given value, return the corresponding probability mass.* CDF forward lookup: for a given value, return the cumulative probability.* CDF inverse lookup: for a given probability, return the corresponding value.This notebook explains my implementation of these methods.Allen B. Downey[MIT License](https://en.wikipedia.org/wiki/MIT_License)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import unittest
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='white')
def underride(d, **options):
"""Add key-value pairs to d only if key is not in d.
d: dictionary
options: keyword args to add to d
"""
for key, val in options.items():
d.setdefault(key, val)
return d
###Output
_____no_output_____
###Markdown
PMFThe `Pmf` class inherits from a Pandas `Series`, so all `Series` methods also work on `Pmf` objects.I override the constructor so it takes a sequence of any kind and computes the "value counts", which is a map from each unique value to the number of times it appears (if the `Pmf` is not normalized) or a probability mass (if the `Pmf` is normalized).
###Code
class Pmf(pd.Series):
def __init__(self, seq, name='Pmf', **options):
"""Make a PMF from a sequence.
seq: sequence of values
name: string
sort: boolean, whether to sort the values, default True
normalize: boolean, whether to normalize the Pmf, default True
dropna: boolean, whether to drop NaN, default True
"""
# get the sort flag
sort = options.pop('sort', True)
# normalize unless the caller said not to
underride(options, normalize=True)
# put the seq in a Series so we can use value_counts
series = pd.Series(seq, copy=False)
# make the counts
# by default value_counts sorts by frequency, which
# is not what we want
options['sort'] = False
counts = series.value_counts(**options)
# sort by value
if sort:
counts.sort_index(inplace=True)
# call Series.__init__
super().__init__(counts, name=name)
@property
def qs(self):
return self.index.values
@property
def ps(self):
return self.values
def __call__(self, qs):
"""Look up a value in the PMF."""
return self.get(qs, 0)
def normalize(self):
"""Normalize the PMF."""
self /= self.sum()
def bar(self, **options):
"""Plot the PMF as a bar plot."""
underride(options, label=self.name)
plt.bar(self.index, self.values, **options)
def plot(self, **options):
"""Plot the PMF with lines."""
underride(options, label=self.name)
plt.plot(self.index, self.values, **options)
###Output
_____no_output_____
###Markdown
Here's an example that makes a normalized `Pmf`, which is the default.
###Code
seq = [5, 3, 2, 2, 1]
pmf = Pmf(seq)
###Output
_____no_output_____
###Markdown
Forward lookup uses function call syntax, since the `Pmf` represents a mathematical function.
###Code
pmf(1)
pmf(2)
###Output
_____no_output_____
###Markdown
The PMF of a value that does not appear in the sequence is 0.
###Code
pmf(4)
###Output
_____no_output_____
###Markdown
`Pmf` provides a plot method that displays the `Pmf` as a line plot.
###Code
pmf.plot()
###Output
_____no_output_____
###Markdown
And a bar method that displays the `Pmf` as a bar plot.
###Code
pmf.bar()
###Output
_____no_output_____
###Markdown
`Pmf` makes the quantities available as a property, `qs`.
###Code
pmf.qs
###Output
_____no_output_____
###Markdown
And the probabilities available as a property, `ps`. Both return NumPy arrays.
###Code
pmf.ps
###Output
_____no_output_____
###Markdown
The following are unit tests for `Pmf` objects:
###Code
def run_tests():
unittest.main(argv=['first-arg-is-ignored'], exit=False)
class TestPmf(unittest.TestCase):
def test_pmf(self):
seq = [5, 3, 2, 2, 1]
pmf = Pmf(seq)
self.assertEqual(pmf(1), 0.2)
self.assertEqual(pmf(2), 0.4)
self.assertEqual(pmf(4), 0.0)
def test_pmf_normalize_false(self):
seq = [5, 3, 2, 2, 1]
pmf = Pmf(seq, normalize=False)
self.assertEqual(pmf(1), 1)
self.assertEqual(pmf(2), 2)
self.assertEqual(pmf(4), 0)
run_tests()
###Output
..
----------------------------------------------------------------------
Ran 2 tests in 0.007s
OK
###Markdown
CDFThe `Cdf` class also inherits from `Series`.The constructor takes a sequence of values. First it makes a sorted `Pmf`, then it computes the cumulative sum of the probabilities. If normalized, it divides the cumulative probabilities by the last cumulative probability, which is the total.
###Code
from scipy.interpolate import interp1d
class Cdf(pd.Series):
def __init__(self, seq, name='Cdf', **options):
"""Make a CDF from a sequence.
seq: sequence of values
name: string
sort: boolean, whether to sort the values, default True
normalize: boolean, whether to normalize the Cdf, default True
dropna: boolean, whether to drop NaN, default True
"""
# get the normalize option
normalize = options.pop('normalize', True)
# make the PMF and CDF
pmf = Pmf(seq, normalize=False, **options)
cdf = pmf.cumsum()
# normalizing the CDF, rather than the PMF,
# avoids floating-point errors and guarantees
        # that the last probability is 1.0
if normalize:
cdf /= cdf.values[-1]
super().__init__(cdf, name=name, copy=False)
@property
def qs(self):
return self.index.values
@property
def ps(self):
return self.values
@property
def forward(self):
return interp1d(self.qs, self.ps,
kind='previous',
assume_sorted=True,
bounds_error=False,
fill_value=(0,1))
@property
def inverse(self):
return interp1d(self.ps, self.qs,
kind='next',
assume_sorted=True,
bounds_error=False,
fill_value=(self.qs[0], np.nan))
def __call__(self, qs):
return self.forward(qs)
def percentile_rank(self, qs):
return self.forward(qs) * 100
def percentile(self, percentile_ranks):
return self.inverse(percentile_ranks / 100)
def step(self, **options):
"""Plot the CDF as a step function."""
underride(options, label=self.name, where='post')
plt.step(self.index, self.values, **options)
def plot(self, **options):
"""Plot the CDF as a line."""
underride(options, label=self.name)
plt.plot(self.index, self.values, **options)
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
cdf = Cdf([5, 3, 2, 2, 1])
###Output
_____no_output_____
###Markdown
The quantities are available as a property, `qs`:
###Code
cdf.qs
###Output
_____no_output_____
###Markdown
The cumulative probabilities are available as a property, `ps`:
###Code
cdf.ps
###Output
_____no_output_____
###Markdown
Cdf provides a method, `step`, that plots the CDF as a step function (which is technically what it is).
###Code
cdf.step()
###Output
_____no_output_____
###Markdown
It also provides `plot`, which plots the CDF as a line plot.
###Code
cdf.plot()
###Output
_____no_output_____
###Markdown
Here's an example that uses forward lookup to get the cumulative probabilities for a sequence of quantities.
###Code
qs = [0, 1, 1.5, 2, 2.5, 3, 4, 5, 6]
cdf(qs)
###Output
_____no_output_____
###Markdown
The function call syntax is equivalent to calling the `forward` method.
###Code
qs = [0, 1, 1.5, 2, 2.5, 3, 4, 5, 6]
cdf.forward(qs)
###Output
_____no_output_____
###Markdown
Here's an example that uses `inverse` to compute the quantities for range of probabilities.
###Code
ps = np.linspace(0, 1, 6)
cdf.inverse(ps)
###Output
_____no_output_____
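###Markdown
The percentile_rank and percentile methods defined above are thin wrappers around these forward and inverse lookups, rescaled to the 0-100 range. A quick check with the same cdf:
###Code
print(cdf.percentile_rank(2)) # 60.0: forward lookup expressed as a percentage
print(cdf.percentile(50)) # 2.0: inverse lookup from a percentile rank
###Output
_____no_output_____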
###Markdown
And here are some unit tests for `Cdf`.
###Code
class TestCdf(unittest.TestCase):
def test_cdf(self):
seq = [5, 3, 2, 2, 1]
cdf = Cdf(seq)
self.assertAlmostEqual(cdf(0), 0)
self.assertAlmostEqual(cdf(1), 0.2)
self.assertAlmostEqual(cdf(2), 0.6)
self.assertAlmostEqual(cdf(3), 0.8)
self.assertAlmostEqual(cdf(4), 0.8)
self.assertAlmostEqual(cdf(5), 1)
self.assertAlmostEqual(cdf(6), 1)
def test_cdf_inverse(self):
seq = [5, 3, 2, 2, 1]
cdf = Cdf(seq)
self.assertAlmostEqual(cdf.inverse(0), 1)
self.assertAlmostEqual(cdf.inverse(0.2), 1)
self.assertAlmostEqual(cdf.inverse(0.3), 2)
self.assertAlmostEqual(cdf.inverse(0.4), 2)
self.assertAlmostEqual(cdf.inverse(0.41), 2)
self.assertAlmostEqual(cdf.inverse(0.6), 2)
self.assertAlmostEqual(cdf.inverse(0.8), 3)
self.assertAlmostEqual(cdf.inverse(1), 5)
run_tests()
###Output
....
----------------------------------------------------------------------
Ran 4 tests in 0.007s
OK
###Markdown
INITIALISATION
###Code
from time import time
from pathlib import Path
from IPython.display import Image, display
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torch.utils import data
from torchvision import datasets, transforms
from torchvision.transforms.functional import to_pil_image, resize, to_tensor
from torchvision.transforms.functional import normalize
import os
import shutil
import random
from copy import deepcopy
from zipfile import ZipFile, ZIP_DEFLATED
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
SEEDS = [0,1,2,3,4]
FORCE = False
INTENS = 0.4
DEFAULTS = {
"w0": 0.2, # float >= 0, regularisation parameter
"w": 0.2, # float >= 0, harmonisation parameter
"lr_gen": 0.02, # float > 0, learning rate of global model
"lr_node": 0.02, # float > 0, learning rate of local models
"NN" : "base", # "base" or "conv", neural network architecture
"opt": optim.Adam, # any torch otpimizer
"gen_freq": 1, # int >= 1, number of global steps for 1 local step
"nbn": 1000, # int >= 1, number of nodes
"nbd": 60_000, # int >= 1, nbd/nbn must be in [1, 60_000], total data
"fracdish": 0, # float in [0,1]
"typ_dish": "zeros",# in ["honest", "zeros", "jokers", "one_evil",
# "byzantine", "trolls", "strats"]
"heter": 0, # int >= 0, heterogeneity of data repartition
"nb_epochs": 100 # int >= 1, number of training epochs
}
def defaults_help():
    ''' Structure of DEFAULTS dictionary :
"w0": 0.2, # float >= 0, regularisation parameter
"w": 0.2, # float >= 0, harmonisation parameter
"lr_gen": 0.02, # float > 0, learning rate of global model
"lr_node": 0.02, # float > 0, learning rate of local models
"NN" : "base", # "base" or "conv", neural network architecture
"opt": optim.Adam, # any torch otpimizer
"gen_freq": 1, # int >= 1, number of global steps for
1 local step
"nbn": 1000, # int >= 1, number of nodes
"nbd": 60_000, # int >= 1, total data
- nbd/nbn must be in [1, 60_000]
"fracdish": 0, # float in [0,1]
"typ_dish": "zeros",# in ["honest", "zeros", "jokers", "one_evil",
"byzantine", "trolls", "strats"]
"heter": 0, # int >= 0, heterogeneity of data repartition
"nb_epochs": 100, # int >= 1, number of training epochs
'''
None
METRICS = ({"lab":"fit", "ord": "Training Loss", "f_name": "loss"},
{"lab":"gen", "ord": "Training Loss", "f_name": "loss"},
{"lab":"reg", "ord": "Training Loss", "f_name": "loss"},
{"lab":"acc", "ord": "Accuracy", "f_name": "acc"},
{"lab":"l2_dist", "ord": "l2 norm", "f_name": "l2dist"},
{"lab":"l2_norm", "ord": "l2 norm", "f_name": "l2dist"},
{"lab":"grad_sp", "ord": "Scalar Product", "f_name": "grad"},
{"lab":"grad_norm", "ord": "Scalar Product", "f_name": "grad"}
)
os.chdir("/content")
os.makedirs("distribution", exist_ok=True)
os.chdir("/content/distribution")
###Output
_____no_output_____
###Markdown
DATA functions
###Code
# data import and management
def load_mnist(img_size=32):
''' return data and labels for train and test mnist dataset '''
#---------------- train data -------------------
mnist_train = datasets.MNIST('data', train=True, download=True)
data_train = mnist_train.data
labels_train = [mnist_train[i][1] for i in range(len(data_train))]
pics = []
for pic in data_train:
pic = to_pil_image(pic)
if img_size != 28:
pic = resize(pic, img_size) # Resize image if needed
pic = to_tensor(pic) # Tensor conversion normalizes in [0,1]
pics.append(pic)
data_train = torch.stack(pics)
#------------------ test data -----------------------
mnist_test = datasets.MNIST('data', train=False, download=True)
data_test = mnist_test.data
labels_test = [mnist_test[i][1] for i in range(len(data_test))]
pics = []
for pic in data_test:
pic = to_pil_image(pic)
if img_size != 28:
pic = resize(pic, img_size) # Resize image if needed
pic = to_tensor(pic) # Tensor conversion normalizes in [0,1]
pics.append(pic)
data_test = torch.stack(pics)
return (data_train,labels_train), (data_test,labels_test)
def query(datafull, nb, bias=0, fav=0):
''' return -nb random samples of -datafull '''
data, labels = datafull
idxs = list(range(len(data)))
l = []
h, w = data[0][0].shape
d = torch.empty(nb, 1, h, w)
if bias == 0:
indexes = random.sample(idxs, nb) # drawing nb random indexes
else :
indexes = []
for i in range(nb):
idx = one_query(labels, idxs, bias, fav)
indexes.append(idx)
idxs.remove(idx) # to draw only once each index max
for k, i in enumerate(indexes): # filling our query
d[k] = data[i]
l.append(labels[i])
return d, l
def one_query(labels, idxs, redraws, fav):
''' labels : list of labels
idxs : list of available indexes
draws an index with a favorite label choice
fav : favorite label
redraws : max nb of random redraws while fav not found
'''
lab = -1
while lab != fav and redraws >= 0:
idx = idxs[random.randint(0, len(idxs)-1)]
lab = labels[idx]
redraws -= 1
return idx
def list_to_longtens(l):
''' change a list into torch.long tensor '''
tens = torch.empty(len(l), dtype=torch.long)
for i, lab in enumerate(l):
tens[i] = lab
return tens
def swap(l, n, m):
''' swap n and m values in l list '''
return [m if (v==n) else n if (v==m) else v for v in l]
def distribute_data_rd(datafull, distrib, fav_lab=(0,0),
dish=False, dish_lab=0, gpu=True):
'''draw random data on N nodes following distrib
data, labels : raw data and labels
distrib : int list, list of nb of data points for each node
    fav_lab : (preferred label, strength of preference (int))
dish : boolean, if nodes are dishonest
dish_lab : 0 to 4, labelisation method
returns : (list of batches of images, list of batches of labels)
'''
global FORCING1
global FORCING2
global FORCE
data, labels = datafull
N = len(distrib)
data_dist = [] # list of len N
labels_dist = [] # list of len N
fav, strength = fav_lab
for n, number in enumerate(distrib): #for each node
# if strength == 0: # if no preference
d, l = query(datafull, number, strength, fav)
# else:
# d, l = query(datafull, number, strength, fav)
if gpu:
data_dist.append(torch.FloatTensor(d).cuda())
else:
data_dist.append(torch.FloatTensor(d))
if dish: # if dishonest node
# labels modification
if dish_lab == 0: # random
tens = torch.randint(10, (number,), dtype=torch.long)
elif dish_lab == 1: # zeros
tens = torch.zeros(number, dtype=torch.long)
elif dish_lab == 2: # swap 1-7
l = swap(l, 1, 7)
tens = list_to_longtens(l)
elif dish_lab == 3: # swap 2 random (maybe same)
if FORCE: # to force same swap multiple times
if FORCING1 == -1:
FORCING1, FORCING2 = random.randint(0,9), random.randint(0,9)
l = swap(l, FORCING1, FORCING2)
else:
n, m = random.randint(0,9), random.randint(0,9)
l = swap(l, n, m)
tens = list_to_longtens(l)
elif dish_lab == 4: # label +1
tens = (list_to_longtens(l) + 1) % 10
else: # if honest node
tens = list_to_longtens(l) # needed for CrossEntropy later
if gpu:
tens = tens.cuda()
labels_dist.append(tens)
return data_dist, labels_dist
def zipping(dir_name):
'''zip a local folder to local directory'''
f = ZipFile(dir_name +'.zip', mode='w', compression=ZIP_DEFLATED)
for fil in os.listdir(dir_name):
if fil[0] != ".":
f.write(dir_name +'/' + fil)
f.close()
###Output
_____no_output_____
###Markdown
get data
###Code
# downloading data
if 'train' not in globals(): # to avoid loading data every time
train, test = load_mnist()
if torch.cuda.is_available():
test_gpu = torch.tensor(test[0]).cuda(), torch.tensor(test[1]).cuda()
###Output
_____no_output_____
###Markdown
MODEL
###Code
#model structure
def get_base_classifier(gpu=True):
''' returns linear baseline classifier '''
model = nn.Sequential(
nn.Flatten(),
nn.Linear(1024, 10),
)
if gpu:
return model.cuda()
return model
class classifier(nn.Module):
'''CNN Model'''
def __init__(self):
super(classifier, self).__init__()
# Convolution 1
self.cnn1 = nn.Conv2d(in_channels=1, out_channels=16,
kernel_size=3, stride=1, padding=0)
self.relu1 = nn.ReLU()
# Max pool 1
self.maxpool1 = nn.MaxPool2d(kernel_size=2)
# Convolution 2
self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32,
kernel_size=3, stride=1, padding=0)
self.relu2 = nn.ReLU()
# Max pool 2
self.maxpool2 = nn.MaxPool2d(kernel_size=2)
# Fully connected 1
self.fc1 = nn.Linear(32 * 6 * 6, 10)
def forward(self, x):
# Set 1
out = self.cnn1(x)
out = self.relu1(out)
out = self.maxpool1(out)
# Set 2
out = self.cnn2(out)
out = self.relu2(out)
out = self.maxpool2(out)
#Flatten
out = out.view(out.size(0), -1)
#Dense
out = self.fc1(out)
return out
def get_conv_classifier(gpu=True):
if gpu:
return classifier().cuda()
return classifier()
MODELS = {"base": get_base_classifier, "conv": get_conv_classifier}
###Output
_____no_output_____
###Markdown
TRAINING STRUCTURE Losses
###Code
#loss and scoring functions
def local_loss(model_loc, x, y):
''' classification loss '''
loss = nn.CrossEntropyLoss()
predicted = model_loc(x)
local = loss(predicted,y)
return local
def models_dist(model_loc, model_glob, pow=(1,1)):
''' l1 distance between global and local parameter
        will be multiplied by w_n
pow : (internal power, external power)
'''
q, p = pow
dist = sum(((theta - rho)**q).abs().sum() for theta, rho in
zip(model_loc.parameters(), model_glob.parameters()))**p
return dist
def model_norm(model_glob, pow=(2,1)):
''' l2 squared regularisation of global parameter
will be multiplied by w_0
pow : (internal power, external power)
'''
q, p = pow
norm = sum((param**q).abs().sum() for param in model_glob.parameters())**p
return norm
def round_loss(tens, dec=0):
'''from an input scalar tensor returns rounded integer'''
if type(tens)==int or type(tens)==float:
return round(tens, dec)
else:
return round(tens.item(), dec)
def tens_count(tens, val):
''' counts nb of -val in tensor -tens '''
return len(tens) - round_loss(torch.count_nonzero(tens-val))
def score(model, datafull):
''' returns accuracy provided models, images and GTs '''
out = model(datafull[0])
predictions = torch.max(out, 1)[1]
c=0
for a, b in zip(predictions, datafull[1]):
c += int(a==b)
return c/len(datafull[0])
###Output
_____no_output_____
###Markdown
Flower flower class
###Code
# nodes repartition
class Flower():
''' Training structure including local models and general one
Allowing to add and remove nodes at will
.pop
.add_nodes
.rem_nodes
.train
.display
.check
'''
def __init__(self, test, gpu=True, **kwargs):
''' opt : optimizer
test : test data couple (imgs,labels)
w0 : regularisation strength
'''
self.d_test = test
self.w0 = kwargs["w0"]
self.gpu = gpu
self.opt = kwargs["opt"]
self.lr_node = kwargs["lr_node"]
self.lr_gen = kwargs["lr_gen"]
self.gen_freq = kwargs["gen_freq"] # generalisation frequency (>=1)
self.get_classifier = MODELS[kwargs["NN"]]
self.general_model = self.get_classifier(gpu)
self.init_model = deepcopy(self.general_model)
self.last_grad = None
self.opt_gen = self.opt(self.general_model.parameters(), lr=self.lr_gen)
self.pow_gen = (1,1) # choice of norms for Licchavi loss
self.pow_reg = (2,1) # (internal power, external power)
self.data = []
self.labels = []
self.typ = []
self.models = []
self.weights = []
self.age = []
self.opt_nodes = []
self.nb_nodes = 0
self.dic = {"honest" : -1, "trolls" : 0, "zeros" : 1,
"one_evil" : 2, "strats" : 3, "jokers" : 4, "byzantine" : -1}
self.history = ([], [], [], [], [], [], [], [])
# self.h_legend = ("fit", "gen", "reg", "acc", "l2_dist", "l2_norm", "grad_sp", "grad_norm")
self.localtest = ([], []) # (which to pick for each node, list of (data,labels) pairs)
self.size = nb_params(self.general_model) / 10_000
# ------------ population methods --------------------
def set_localtest(self, datafull, size, nodes, fav_lab=(0,0), typ="honest"):
''' create a local data for some nodes
datafull : source data
size : size of test sample
fav_labs : (label, strength)
nodes : list of nodes which use this data
'''
id = self.dic[typ]
dish = (id != -1) # boolean for dishonesty
dt, lb = distribute_data_rd(datafull, [size], fav_lab,
dish, dish_lab=id, gpu=self.gpu)
dtloc = (dt[0], lb[0])
self.localtest[1].append(dtloc)
id = len(self.localtest[1]) - 1
for n in nodes:
self.localtest[0][n] = id
def add_nodes(self, datafull, pop, typ, fav_lab=(0,0), verb=1, **kwargs):
''' add nodes to the Flower
datafull : data to put on node (sampled from it)
pop : (nb of nodes, size of nodes)
typ : type of nodes (str keywords)
fav_lab : (favorite label, strength)
w : int, weight of new nodes
'''
w = kwargs["w"] # taking global variable if -w not provided
nb, size = pop
id = self.dic[typ]
dish = (id != -1) # boolean for dishonesty
dt, lb = distribute_data_rd(datafull, [size] * nb, fav_lab,
dish, dish_lab=id, gpu=self.gpu)
self.data += dt
self.labels += lb
self.typ += [typ] * nb
self.models += [self.get_classifier(self.gpu) for i in range(nb)]
self.weights += [w] * nb
self.age += [0] * nb
for i in range(nb):
self.localtest[0].append(-1)
self.nb_nodes += nb
self.opt_nodes += [self.opt(self.models[n].parameters(), lr=self.lr_node)
for n in range(self.nb_nodes - nb, self.nb_nodes)
]
if verb:
print("Added {} {} nodes of {} data points".format(nb, typ, size))
print("Total number of nodes : {}".format(self.nb_nodes))
def rem_nodes(self, first, last, verb=1):
''' remove nodes of indexes -first (included) to -last (excluded) '''
nb = last - first
if last > self.nb_nodes:
print("-last is out of range, remove canceled")
else:
del self.data[first : last]
del self.labels[first : last]
del self.typ[first : last]
del self.models[first : last]
del self.weights[first : last]
del self.age[first : last]
del self.opt_nodes[first : last]
del self.localtest[0][first : last]
self.nb_nodes -= nb
if verb: print("Removed {} nodes".format(nb))
def hm(self, ty):
''' count nb of nodes of this type '''
return self.typ.count(ty)
def pop(self):
        ''' return dictionary of population '''
c = {}
for ty in self.dic.keys():
c[ty] = self.hm(ty)
return c
# ------------- scoring methods -----------
def score_glob(self, datafull):
''' return accuracy provided images and GTs '''
return score(self.general_model, datafull)
def test_loc(self, node):
''' score of node on local test data '''
id_data = self.localtest[0][node]
if id_data == -1:
# print("No local test data")
return None
else:
nodetest = score(self.models[node], self.localtest[1][id_data])
return nodetest
def test_full(self, node):
''' score of node on global test data '''
return score(self.models[node], self.d_test)
def test_train(self, node):
''' score of node on its train data '''
return score(self.models[node], (self.data[node], self.labels[node]))
def display(self, node):
''' display accuracy for selected node
node = -1 for global model
'''
if node == -1: # global model
print("global model")
print("accuracy on test data :",
self.score_glob(self.d_test))
else: # we asked for a node
loc_train = self.test_train(node)
loc_test = self.test_loc(node)
full_test = self.test_full(node)
print("node number :", node, ", dataset size :",
len(self.labels[node]), ", type :", self.typ[node],
", age :", self.age[node])
print("accuracy on local train data :", loc_train)
print("accuracy on local test data :", loc_test)
print("accuracy on global test data :", full_test)
repart = {str(k) : tens_count(self.labels[node], k)
for k in range(10)}
print("labels repartition :", repart)
# ---------- methods for training ------------
def _set_lr(self):
'''set learning rates of optimizers according to Flower setting'''
for n in range(self.nb_nodes): # updating lr in optimizers
self.opt_nodes[n].param_groups[0]['lr'] = self.lr_node
self.opt_gen.param_groups[0]['lr'] = self.lr_gen
def _zero_opt(self):
'''reset gradients of all models'''
for n in range(self.nb_nodes):
self.opt_nodes[n].zero_grad()
self.opt_gen.zero_grad()
def _update_hist(self, epoch, test_freq, fit, gen, reg, verb=1):
''' update history '''
if epoch % test_freq == 0: # printing accuracy on test data
acc = self.score_glob(self.d_test)
if verb: print("TEST ACCURACY : ", acc)
for i in range(test_freq):
self.history[3].append(acc)
self.history[0].append(round_loss(fit))
self.history[1].append(round_loss(gen))
self.history[2].append(round_loss(reg))
dist = models_dist(self.init_model, self.general_model, pow=(2,0.5))
norm = model_norm(self.general_model, pow=(2,0.5))
self.history[4].append(round_loss(dist, 1))
self.history[5].append(round_loss(norm, 1))
grad_gen = extract_grad(self.general_model)
if epoch > 1: # no last model for first epoch
scal_grad = sp(self.last_grad, grad_gen)
self.history[6].append(scal_grad)
else:
self.history[6].append(0) # default value for first epoch
self.last_grad = deepcopy(extract_grad(self.general_model))
grad_norm = sp(grad_gen, grad_gen) # use sqrt ?
self.history[7].append(grad_norm)
def _old(self, years):
''' increment age (after training) '''
for i in range(self.nb_nodes):
self.age[i] += years
def _counters(self, c_gen, c_fit):
'''update internal training counters'''
fit_step = (c_fit >= c_gen)
if fit_step:
c_gen += self.gen_freq
else:
c_fit += 1
return fit_step, c_gen, c_fit
def _do_step(self, fit_step):
'''step for appropriate optimizer(s)'''
if fit_step: # updating local or global alternatively
for n in range(self.nb_nodes):
self.opt_nodes[n].step()
else:
self.opt_gen.step()
def _print_losses(self, tot, fit, gen, reg):
'''print losses'''
print("total loss : ", tot)
print("fitting : ", round_loss(fit),
', generalisation : ', round_loss(gen),
', regularisation : ', round_loss(reg))
# ==================== TRAINING ==================
def train(self, nb_epochs=None, test_freq=1, verb=1):
'''training loop'''
nb_epochs = EPOCHS if nb_epochs is None else nb_epochs
time_train = time()
self._set_lr()
# initialisation to avoid undefined variables at epoch 1
loss, fit_loss, gen_loss, reg_loss = 0, 0, 0, 0
c_fit, c_gen = 0, 0
fit_scale = 20 / self.nb_nodes
gen_scale = 1 / self.nb_nodes / self.size
reg_scale = self.w0 / self.size
reg_loss = reg_scale * model_norm(self.general_model, self.pow_reg)
# training loop
nb_steps = self.gen_freq + 1
for epoch in range(1, nb_epochs + 1):
if verb: print("\nepoch {}/{}".format(epoch, nb_epochs))
time_ep = time()
for step in range(1, nb_steps + 1):
fit_step, c_gen, c_fit = self._counters(c_gen, c_fit)
if verb >= 2:
txt = "(fit)" if fit_step else "(gen)"
print("step :", step, '/', nb_steps, txt)
self._zero_opt() # resetting gradients
#---------------- Licchavi loss -------------------------
# only first 2 terms of loss updated
if fit_step:
fit_loss, gen_loss, diff = 0, 0, 0
for n in range(self.nb_nodes): # for each node
if self.typ[n] == "byzantine":
fit = local_loss(self.models[n],
self.data[n], self.labels[n])
fit_loss -= fit
diff += 2 * fit # dirty trick CHANGE
else:
fit_loss += local_loss(self.models[n],
self.data[n], self.labels[n])
g = models_dist(self.models[n],
self.general_model, self.pow_gen)
gen_loss += self.weights[n] * g # generalisation term
fit_loss *= fit_scale
gen_loss *= gen_scale
loss = fit_loss + gen_loss
# only last 2 terms of loss updated
else:
gen_loss, reg_loss = 0, 0
for n in range(self.nb_nodes): # for each node
g = models_dist(self.models[n],
self.general_model, self.pow_gen)
gen_loss += self.weights[n] * g # generalisation term
reg_loss = model_norm(self.general_model, self.pow_reg)
gen_loss *= gen_scale
reg_loss *= reg_scale
loss = gen_loss + reg_loss
total_out = round_loss(fit_loss + diff
+ gen_loss + reg_loss)
if verb >= 2:
self._print_losses(total_out, fit_loss + diff,
gen_loss, reg_loss)
# Gradient descent
loss.backward()
self._do_step(fit_step)
if verb: print("epoch time :", round(time() - time_ep, 2))
self._update_hist(epoch, test_freq, fit_loss, gen_loss, reg_loss, verb)
self._old(1) # aging all nodes
# ----------------- end of training -------------------------------
for i in range(nb_epochs % test_freq): # to maintain same history length
self.history[3].append(acc)
print("training time :", round(time() - time_train, 2))
return self.history
# ------------ to check for problems --------------------------
def check(self):
        ''' perform some consistency tests on internal parameters '''
# population check
b1 = (self.nb_nodes == len(self.data) == len(self.labels)
== len(self.typ) == len(self.models) == len(self.opt_nodes)
== len(self.weights) == len(self.age) == len(self.localtest[0]))
# history check
b2 = True
for l in self.history:
b2 = b2 and (len(l) == len(self.history[0]) >= max(self.age))
# local test data check
b3 = (max(self.localtest[0]) + 1 <= len(self.localtest[1]) )
if (b1 and b2 and b3):
print("No Problem")
else:
print("OULALA non ça va pas là")
###Output
_____no_output_____
###Markdown
flower utility
###Code
def get_flower(gpu=True, **kwargs):
'''get a Flower using the appropriate test data (gpu or not)'''
if gpu:
return Flower(test_gpu, gpu=gpu, **kwargs)
else:
return Flower(test, gpu=gpu, **kwargs)
# def grad_sp(m1, m2):
# ''' scalar product of gradients of 2 models '''
# s = 0
# for p1, p2 in zip(m1.parameters(), m2.parameters()):
# s += (p1.grad * p2.grad).sum()
# return s
def extract_grad(model):
'''return list of gradients of a model'''
l_grad = [p.grad for p in model.parameters()]
return l_grad
def sp(l_grad1, l_grad2):
'''scalar product of 2 lists of gradients'''
s = 0
for g1, g2 in zip(l_grad1, l_grad2):
s += (g1 * g2).sum()
return round_loss(s, 4)
def nb_params(model):
'''return number of parameters of a model'''
return sum(p.numel() for p in model.parameters())
###Output
_____no_output_____
###Markdown
GETTING PLOTS Plotting utilities
###Code
def seedall(s):
'''seed all sources of randomness'''
reproducible = (s >= 0)
torch.manual_seed(s)
random.seed(s)
np.random.seed(s)
torch.backends.cudnn.deterministic = reproducible
torch.backends.cudnn.benchmark = not reproducible
print("\nSeeded all to", s)
def replace_dir(path):
''' create or replace directory '''
if os.path.exists(path):
shutil.rmtree(path)
os.makedirs(path)
def get_style():
'''give different line styles for plots'''
l = ["-","-.",":","--"]
for i in range(10000):
yield l[i % 4]
def get_color():
    '''give different line colors for plots'''
l = ["red","green","blue","grey"]
for i in range(10000):
yield l[i % 4]
STYLES = get_style() # generator for looping styles
COLORS = get_color()
def title_save(title=None, path=None, suff=".png"):
''' add title and save plot '''
if title is not None:
plt.title(title)
if path is not None:
plt.savefig(path + suff)
def legendize(y):
''' label axis of plt plot '''
plt.xlabel("Epochs")
plt.ylabel(y)
plt.legend()
def clean_dic(dic):
''' replace some values by more readable ones '''
if "opt" in dic.keys():
dic = deepcopy(dic)
op = dic["opt"]
dic["opt"] = "Adam" if op == optim.Adam else "SGD" if op == optim.SGD else None
return dic
def get_title(conf, ppl=4):
    ''' converts a dictionary into a string of appropriate shape
ppl : parameters per line
'''
title = ""
c = 0 # enumerate ?
for key, val in clean_dic(conf).items():
c += 1
title += "{}: {}".format(key,val)
title += " \n" if (c % ppl) == 0 else ', '
return title[:-2]
###Output
_____no_output_____
###Markdown
Plotting from history
###Code
# functions to display training history
def means_bounds(arr):
''' from array return 1 array of means,
1 of (mean - var), 1 of (mean + var)
'''
means = np.mean(arr, axis=0)
var = np.var(arr, axis = 0)
low, up = means - var, means + var
return means, low, up
# ----------- to display multiple accuracy curves on same plot -----------
def add_acc_var(arr, label):
''' from array add curve of accuracy '''
acc = arr[:,3,:]
means, low, up = means_bounds(acc)
epochs = range(1, len(means) + 1)
plt.plot(epochs, means, label=label, linestyle=next(STYLES))
plt.fill_between(epochs, up, low, alpha=0.4)
def plot_runs_acc(l_runs, title=None, path=None, **kwargs):
''' plot several acc_var on one graph '''
arr = np.asarray(l_runs)
l_param = get_possibilities(**kwargs) # for legend
for run, param in zip(arr, l_param): # adding one curve for each parameter combination (run)
add_acc_var(run, param)
plt.ylim([0,1])
plt.grid(True, which='major', linewidth=1, axis='y', alpha=1)
plt.minorticks_on()
plt.grid(True, which='minor', linewidth=0.8, axis='y', alpha=0.8)
legendize("Test Accuracy")
title_save(title, path, suff=".png")
plt.show()
# ------------- utility for what follows -------------------------
def plot_var(l_hist, l_idx):
    ''' add curves for the requested history indexes to the plot '''
arr_hist = np.asarray(l_hist)
epochs = range(1, arr_hist.shape[2] + 1)
for idx in l_idx:
vals = arr_hist[:,idx,:]
vals_m, vals_l, vals_u = means_bounds(vals)
style, color = next(STYLES), next(COLORS)
plt.plot(epochs, vals_m, label=METRICS[idx]["lab"], linestyle=style, color=color)
plt.fill_between(epochs, vals_u, vals_l, alpha=INTENS, color=color)
def plotfull_var(l_hist, l_idx, title=None, path=None, show=True):
    ''' plot the metrics asked in -l_idx and save if -path is provided '''
plot_var(l_hist, l_idx)
idx = l_idx[0]
legendize(METRICS[idx]["ord"])
title_save(title, path, suff=" {}.png".format(METRICS[idx]["f_name"]))
if show:
plt.show()
# ------- groups of metrics on a same plot -----------
def loss_var(l_hist, title=None, path=None):
    ''' plot losses with variance from a list of histories '''
plotfull_var(l_hist, [0,1,2], title, path)
def acc_var(l_hist, title=None, path=None):
    ''' plot accuracy with variance from a list of histories '''
plt.ylim([0,1])
plt.grid(True, which='major', linewidth=1, axis='y', alpha=1)
plt.minorticks_on()
plt.grid(True, which='minor', linewidth=0.8, axis='y', alpha=0.8)
plotfull_var(l_hist, [3], title, path)
def l2_var(l_hist, title=None, path=None):
    '''plot l2 norm of gen model from a list of histories'''
plotfull_var(l_hist, [4,5], title, path)
def gradsp_var(l_hist, title=None, path=None):
''' plot scalar product of gradients between 2 consecutive epochs
    from a list of histories
'''
plotfull_var(l_hist, [6,7], title, path)
# plotting all we have
def plot_metrics(l_hist, title=None, path=None):
    '''plot and save the different metrics from a list of histories'''
acc_var(l_hist, title, path)
loss_var(l_hist, title, path)
l2_var(l_hist, title, path)
gradsp_var(l_hist, title, path)
###Output
_____no_output_____
###Markdown
Running, plotting, saving utilities
###Code
def adapt(obj):
''' -obj is a parameter or an iterable over values of a parameter
    return generator of values of the parameter (even if only 1)
'''
if hasattr(obj, '__iter__') and type(obj) != str:
for v in obj:
yield v
else:
yield obj
def is_end(it, dist=0):
''' check if iterator is empty '''
it2 = deepcopy(it)
try:
for a in range(dist + 1):
a = next(it2)
return False
except StopIteration:
return True
def explore(dic):
    ''' dic is a dictionary of parameters (some may have multiple values)
    return a list of dictionaries of all possible combinations
'''
it = iter(dic)
_LIST = []
def _explo(it, dic, **kwargs): # **kwargs is the output
        '''build a dictionary with only one value for each param'''
if not is_end(it): # if iterator not empty
key = next(it)
for par in adapt(dic[key]):
_explo(deepcopy(it), dic, **kwargs, **{key: par})
else: # end of recursion
_LIST.append(kwargs)
_explo(it, dic)
return _LIST
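# Added illustration (hypothetical values): explore flattens a grid of options, e.g.
# explore({'lr_gen': [0.01, 0.02], 'nbn': 100})
# -> [{'lr_gen': 0.01, 'nbn': 100}, {'lr_gen': 0.02, 'nbn': 100}]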
def add_defaults(config):
''' add default values for non-specified parameters '''
fullconf = deepcopy(DEFAULTS)
for key, val in config.items():
fullconf[key] = val
return fullconf
def my_confs(**kwargs):
''' return all possible configurations '''
for config in explore(kwargs):
fullconf = add_defaults(config)
yield fullconf
# FUSE THE 2 FUNCTIONS ?
def get_possibilities(**kwargs):
''' identify variations of parameters '''
l_confs = explore(kwargs)
leg_keys = [] # parameters used for legend
for key, val in kwargs.items():
if len(list(adapt(val))) > 1: # if this param is not constant
leg_keys.append(key)
legends = []
for conf in l_confs:
leg = get_title({k:conf[k] for k in leg_keys})
legends.append(leg)
return legends
def get_constants(**kwargs):
''' identify constant parameters '''
l_confs = my_confs(**kwargs)
leg_keys = [] # parameters used for legend
for key, val in kwargs.items():
if len(list(adapt(val))) > 1: # if this param is not constant
leg_keys.append(key)
constants = []
for conf in l_confs:
cst = get_title({k:conf[k] for k in DEFAULTS.keys() if (k not in leg_keys)})
constants.append(cst)
    return constants # not clean: every entry of the list is identical
def legend_to_name(legend):
''' convert legend text format to filename format '''
name = legend.replace(': ','_') # deepcopy ?
name = name.replace('\n', ' ')
name = name.replace(',', '')
return name
###Output
_____no_output_____
###Markdown
core
###Code
def get_custom_flower(verb=1, gpu=True, **kwargs):
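    # added note: builds a Flower populated with honest and dishonest nodes according to kwargs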
nbn = kwargs["nbn"]
ppn = kwargs["nbd"] // nbn # points per node
nbdish = int(kwargs["fracdish"] * nbn)
nbh = nbn - nbdish
typ_dish = kwargs["typ_dish"]
heter = kwargs["heter"]
flow = get_flower(gpu=gpu, **kwargs)
if heter:
nbh_lab = nbh // 10 # for each label
nbdish_lab = nbdish // 10
nb_lab = nbh_lab + nbdish_lab
for lab in range(10): # for each label
flow.add_nodes(train, (nbh_lab, ppn), "honest", (lab, heter), verb=verb, **kwargs)
flow.add_nodes(train, (nbdish_lab, ppn), typ_dish, (lab, heter), verb=verb, **kwargs)
# if gpu:
# flow.set_localtest(test_gpu, 100, range(lab * nb_lab, (lab + 1) * nb_lab), (lab, heter))
# else:
# flow.set_localtest(test, 100, range(lab * nb_lab, (lab + 1) * nb_lab), (lab, heter))
else:
# print(kwargs)
flow.add_nodes(train, (nbh, ppn), "honest", verb=verb, **kwargs)
flow.add_nodes(train, (nbdish, ppn), typ_dish, verb=verb, **kwargs)
return flow
def run_whatever(config, path, verb=0, gpu=True):
    '''config is a dictionary with all parameters'''
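    # added note: trains one independent Flower per seed in SEEDS and collects the histories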
nb_epochs = config["nb_epochs"]
    l_hist = [] # list of histories
for s in SEEDS:
seedall(s)
flow = get_custom_flower(verb=verb, gpu=gpu, **config)
h = flow.train(nb_epochs, verb=verb)
l_hist.append(h)
title = get_title(config)
plot_metrics(l_hist, title, path)
return l_hist
def run_whatever_mult(name="name", verb=0, gpu=True, **kwargs):
''' User-friendly running-and-plotting-and-saving interface
    Each parameter of DEFAULTS can be passed as a single value,
    as an iterable of values, or omitted (the default is then used)
    All parameter combinations are computed in a grid fashion
name : used for folder name and filenames
verb : 0, 1 or 2, verbosity level
gpu : boolean
**kwargs : structure and training parameters,
see "defaults_help?" for full parameters list
    Return : all training histories
'''
    l_runs = [] # list of histories, one per parameter combination
replace_dir(name)
path = name + "/" + name + " "
l_legend = get_possibilities(**kwargs)
l_confs = my_confs(**kwargs)
for legend, config in zip(l_legend, l_confs): # iterating over all combinations
curr_path = path + legend_to_name(legend)
l_hist = run_whatever(config, curr_path, verb, gpu)
l_runs.append(l_hist)
title = get_constants(**kwargs)[0]
plot_runs_acc(l_runs, title, path, **kwargs)
zipping(name)
return l_runs
###Output
_____no_output_____
###Markdown
Some more
###Code
# functions to train and display history at the end
# - heterogeneity of data with different styles of notation depending on nodes -
# def get_flower_heter_strats(heter, verb=0, gpu=True):
# '''initialize and add nodes according to parameter'''
# global FORCING1
# global FORCING2
# global FORCE
# nbn = NBN
# flow = get_flower(gpu)
# ppn = 60_000 // nbn # points per node
# nb_lab = nbn // 10
# FORCE = True
# for lab in range(10):
# for n in range(nb_lab):
# FORCING1, FORCING2 = -1, -1
# flow.add_nodes(train, (1, ppn), "strats", (lab, heter), verb=verb)
# flow.set_localtest(test_gpu, 100, [lab * nb_lab + n], (lab, heter), typ="strats")
# FORCE = False
# return flow
# def run_heter_strats(heter, verb=0, gpu=True):
# ''' create a flower of honest nodes and trains it for 200 eps
# display graphs of loss and accuracy
# heter : heterogeneity of data
# '''
# flow = get_flower_heter_strats(heter, verb, gpu)
# flow.gen_freq = 1
# h = flow.train(epochs, verb=verb)
# flow.check()
# t1 = "heter : {}, nbn : {}, lrnode : {}, lrgen : {}, genfrq : {}"
# t2 = "\ntype : only strats"
# text = t1 + t2
# title = text.format(heter, flow.nb_nodes, flow.lr_node,
# flow.lr_gen, flow.gen_freq)
# plot_metrics([h], title, path)
# return flow
# def compare(flow_centr, flow_distr): # for run_heter
# ''' return average accuracy on local test sets
# for both centralized and distributed models
# '''
# central, gen, distr = 0, 0, 0
# N = flow_distr.nb_nodes
# for lab in range(10):
# sc = score(flow_centr.models[0], flow_distr.localtest[1][lab])
# central += sc
# for lab in range(10):
# sc = score(flow_distr.general_model, flow_distr.localtest[1][lab])
# gen += sc
# for n in range(N):
# sc = flow_distr.test_loc(n)
# distr += sc
# distr = distr / N
# central = central / 10
# gen = gen / 10
# return central, gen, distr
# def compare2(flow_centr, flow_distr): # for run_heter_strats
# ''' return average accuracy on local test sets
# for both centralized and distributed models
# '''
# central, gen, distr = 0, 0, 0
# N = flow_distr.nb_nodes
# for n in range(N):
# sc = score(flow_centr.models[0], flow_distr.localtest[1][n])
# central += sc
# for n in range(N):
# sc = score(flow_distr.general_model, flow_distr.localtest[1][n])
# gen += sc
# for n in range(N):
# sc = flow_distr.test_loc(n)
# distr += sc
# distr = distr / N
# central = central / N
# gen = gen / N
# return central, gen, distr
###Output
_____no_output_____
###Markdown
THAT'S WHERE YOU RUN STUFF Help
###Code
help(run_whatever_mult)
help(defaults_help)
###Output
Help on function defaults_help in module __main__:
defaults_help()
Structure of DEFAULTS dictionnary :
"w0": 0.2, # float >= 0, regularisation parameter
"w": 0.2, # float >= 0, harmonisation parameter
"lr_gen": 0.02, # float > 0, learning rate of global model
"lr_node": 0.02, # float > 0, learning rate of local models
"NN" : "base", # "base" or "conv", neural network architecture
"opt": optim.Adam, # any torch otpimizer
"gen_freq": 1, # int >= 1, number of global steps for
1 local step
"nbn": 1000, # int >= 1, number of nodes
"nbd": 60_000, # int >= 1, total data
- nbd/nbn must be in [1, 60_000]
"fracdish": 0, # float in [0,1]
"typ_dish": "zeros",# in ["honest", "zeros", "jokers", "one_evil",
"byzantine", "randoms", "trolls", "strats"]
"heter": 0, # int >= 0, heterogeneity of data repartition
"nb_epochs": 100, # int >= 1, number of training epochs
###Markdown
Run
###Code
SEEDS = [1,2,3,4,5]
historys = run_whatever_mult(nb_epochs=10, verb=0)
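# Added sketch (not executed here): any parameter listed by defaults_help can also be
# passed as an iterable to sweep a grid of configurations, e.g.
# historys = run_whatever_mult(name="fracdish_sweep", nb_epochs=10,
#                              fracdish=[0.0, 0.2], typ_dish="zeros", verb=0)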
###Output
_____no_output_____
###Markdown
MANUAL
###Code
seedall(51)
tulip = get_flower(**DEFAULTS)
tulip.add_nodes(train, (1, 60000), "honest")
tulip.check()
# tulip.lr_node = 0.2
# tulip.lr_gen = 0.05
# tulip.w0 = 0
h1 = tulip.train(2, verb=2)
tulip.check()
tulip.display(0)
plot_metrics([h1])
###Output
_____no_output_____
###Markdown
Visualizing distributions=========================
###Code
from __future__ import print_function, division
import numpy as np
import thinkstats2
import nsfg
import thinkplot
%matplotlib inline
###Output
_____no_output_____
###Markdown
Let's load up the NSFG pregnancy data.
###Code
preg = nsfg.ReadFemPreg()
preg.shape
###Output
_____no_output_____
###Markdown
And select the rows corresponding to live births.
###Code
live = preg[preg.outcome == 1]
live.shape
###Output
_____no_output_____
###Markdown
We can use `describe` to generate summary statistics.
###Code
live.prglngth.describe()
###Output
_____no_output_____
###Markdown
But there is no substitute for looking at the whole distribution, not just a summary.One way to represent a distribution is a Probability Mass Function (PMF).`thinkstats2` provides a class named `Pmf` that represents a PMF.A Pmf object contains a Python dictionary that maps from each possible value to its probability (that is, how often it appears in the dataset).`Items` returns a sorted list of values and their probabilities:
###Code
pmf = thinkstats2.Pmf(live.prglngth)
for val, prob in pmf.Items():
print(val, prob)
###Output
0 0.00010931351115
4 0.00010931351115
9 0.00010931351115
13 0.00010931351115
17 0.0002186270223
18 0.00010931351115
19 0.00010931351115
20 0.00010931351115
21 0.0002186270223
22 0.00076519457805
23 0.00010931351115
24 0.00142107564495
25 0.00032794053345
26 0.00382597289025
27 0.00032794053345
28 0.0034980323568
29 0.00229558373415
30 0.0150852645387
31 0.00295146480105
32 0.0125710537822
33 0.00535636204635
34 0.006558810669
35 0.0339965019676
36 0.0350896370791
37 0.0497376475732
38 0.066353301268
39 0.513008307827
40 0.121993878443
41 0.064167031045
42 0.0358548316572
43 0.0161783996502
44 0.0050284215129
45 0.0010931351115
46 0.00010931351115
47 0.00010931351115
48 0.00076519457805
50 0.0002186270223
###Markdown
There are some values here that are certainly errors, and some that are suspect. For now we'll take them at face value. There are several ways to visualize Pmfs.`thinkplot` provides functions to plot Pmfs and other types from `thinkstats2`.`thinkplot.Pmf` renders a Pmf as a histogram (bar chart).
###Code
thinkplot.PrePlot(1)
thinkplot.Hist(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)',
ylabel='PMF',
xlim=[0, 50],
legend=False)
###Output
_____no_output_____
###Markdown
`Pmf` renders the outline of the histogram.
###Code
thinkplot.PrePlot(1)
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)',
ylabel='PMF',
xlim=[0, 50])
###Output
_____no_output_____
###Markdown
`Pdf` tries to render the Pmf with a smooth curve.
###Code
thinkplot.PrePlot(1)
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)',
ylabel='PMF',
xlim=[0, 50])
###Output
_____no_output_____
###Markdown
I started with PMFs and histograms because they are familiar, but I think they are bad for exploration.For one thing, they don't hold up well when the number of values increases.
###Code
pmf_weight = thinkstats2.Pmf(live.totalwgt_lb)
thinkplot.PrePlot(1)
thinkplot.Hist(pmf_weight)
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PMF')
pmf_weight = thinkstats2.Pmf(live.totalwgt_lb)
thinkplot.PrePlot(1)
thinkplot.Pmf(pmf_weight)
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PMF')
pmf_weight = thinkstats2.Pmf(live.totalwgt_lb)
thinkplot.PrePlot(1)
thinkplot.Pdf(pmf_weight)
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PMF')
###Output
_____no_output_____
###Markdown
Sometimes you can make the visualization better by binning the data:
###Code
def bin_and_pmf(weights, num_bins):
bins = np.linspace(0, 15.5, num_bins)
indices = np.digitize(weights, bins)
values = bins[indices]
pmf_weight = thinkstats2.Pmf(values)
thinkplot.PrePlot(1)
thinkplot.Pdf(pmf_weight)
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PMF')
bin_and_pmf(live.totalwgt_lb.dropna(), 50)
###Output
_____no_output_____
###Markdown
Binning is simple enough, but it is still a nuisance.And it is fragile. If you have too many bins, the result is noisy. Too few, and you obliterate features that might be important.And if the bin boundaries don't align well with data boundaries, you can create artifacts.
###Code
bin_and_pmf(live.totalwgt_lb.dropna(), 51)
###Output
_____no_output_____
###Markdown
There must be a better way!Indeed there is. In my opinion, cumulative distribution functions (CDFs) are a better choice for data exploration.You don't have to bin the data or make any other transformation.`thinkstats2` provides a function that makes CDFs, and `thinkplot` provides a function for plotting them.
###Code
data = [1, 2, 2, 5]
pmf = thinkstats2.Pmf(data)
pmf
cdf = thinkstats2.Cdf(data)
cdf
thinkplot.PrePlot(1)
thinkplot.Cdf(cdf)
thinkplot.Config(ylabel='CDF',
xlim=[0.5, 5.5])
###Output
_____no_output_____
###Markdown
Let's see what that looks like for real data.
###Code
cdf_weight = thinkstats2.Cdf(live.totalwgt_lb)
thinkplot.PrePlot(1)
thinkplot.Cdf(cdf_weight)
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='CDF')
###Output
_____no_output_____
###Markdown
A CDF is a map from each value to its cumulative probability.You can use it to compute percentiles:
###Code
cdf_weight.Percentile(50)
###Output
_____no_output_____
###Markdown
Or if you are given a value, you can compute its percentile rank.
###Code
cdf_weight.PercentileRank(8.3)
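# Added check (assuming thinkstats2's Percentile and PercentileRank are inverses up to
# rounding to an observed value): the round trip should land near 8.3
cdf_weight.Percentile(cdf_weight.PercentileRank(8.3))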
###Output
_____no_output_____
###Markdown
Looking at the CDF, it is easy to see the range of values, the central tendency and spread, as well as the overall shape of the distribution.If there are particular values that appear often, they are visible as vertical lines. If there are ranges where no values appear, they are visible as horizontal lines.And one of the best things about CDFs is that you can plot several of them on the same axes for comparison. For example, let's see if first babies are lighter than others.
###Code
firsts = live[live.birthord == 1]
others = live[live.birthord != 1]
len(firsts), len(others)
cdf_firsts = thinkstats2.Cdf(firsts.totalwgt_lb, label='firsts')
cdf_others = thinkstats2.Cdf(others.totalwgt_lb, label='others')
thinkplot.PrePlot(2)
thinkplot.Cdfs([cdf_firsts, cdf_others])
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='CDF',
legend=True)
###Output
_____no_output_____
###Markdown
Plotting the two distributions on the same axes, we can see that the distribution for others is shifted to the right; that is, toward higher values. And we can see that the shift is close to the same over the whole distribution.Let's see how well we can make this comparison with PMFs:
###Code
pmf_firsts = thinkstats2.Pmf(firsts.totalwgt_lb, label='firsts')
pmf_others = thinkstats2.Pmf(others.totalwgt_lb, label='others')
thinkplot.PrePlot(2)
thinkplot.Pdfs([pmf_firsts, pmf_others])
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PMF')
###Output
_____no_output_____
###Markdown
With PMFs it is hard to compare distributions. And if you plot more than two PMFs on the same axes, it is likely to be a mess.Reading CDFs takes some getting used to, but it is worth it! For data exploration and visualization, CDFs are better than PMFs in almost every way.But if you really have to generate a PMF, a good option is to estimate a smoothed PDF using Kernel Density Estimation (KDE).
###Code
pdf_firsts = thinkstats2.EstimatedPdf(firsts.totalwgt_lb.dropna(), label='firsts')
pdf_others = thinkstats2.EstimatedPdf(others.totalwgt_lb.dropna(), label='others')
thinkplot.PrePlot(2)
thinkplot.Pdfs([pdf_firsts, pdf_others])
thinkplot.Config(xlabel='Birth weight (lbs)',
ylabel='PDF')
###Output
_____no_output_____
###Markdown
Load the pretrained model
###Code
ckpt = torch.load("./save/vgg7_quant/vgg7_quant_w4_a4_mode_mean_asymm_wd0.0_swipe_train/model_best.pth.tar")
state_dict = ckpt["state_dict"]
###Output
_____no_output_____
###Markdown
Get the weights of the last layer
###Code
weight = state_dict['features.17.weight']
print("Weight size = {}".format(list(weight.size())))
###Output
_____no_output_____
###Markdown
Low precision weight
###Code
from models import quant
# precision
nbit = 4
cellBit = 1
# quantize
weight_q, wscale = quant.stats_quant(weight, nbit=nbit, dequantize=False)
weight_q = weight_q.add(7)
print("Unique levels of the {}bit weight: \n{}".format(nbit, weight_q.unique().cpu().numpy()))
weight_b = quant.decimal2binary(weight_q, nbit, cellBit)
print("\nBinary weight size = {}".format(list(weight_b.size())))
def binary2dec(wbit, weight_b, cellBit):
weight_int = 0
cellRange = 2**cellBit
for k in range(wbit//cellBit):
remainder = weight_b[k]
scaler = cellRange**k
weight_int += scaler*remainder
return weight_int
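# Added sanity check (assumes quant.decimal2binary produces exactly the bit slices
# that binary2dec folds back together):
wq_rec = binary2dec(nbit, weight_b, cellBit)
print("Reconstruction matches quantized weights:", bool((wq_rec == weight_q).all()))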
###Output
_____no_output_____
###Markdown
Conductance
###Code
hrs, lrs = 1e-6, 1.66e-4
nonideal_unit = lrs - hrs
###Output
_____no_output_____
###Markdown
Scenario 0: Typical value only
###Code
wb = weight_b.clone()
w_ref = quant.bit2cond(wb, hrs, lrs)
w_ref_q = w_ref.div(nonideal_unit)
# ideally quantized
wq_ideal = binary2dec(nbit, weight_b, cellBit=cellBit)
# typical value
wq_typicall = binary2dec(nbit, w_ref_q, cellBit=cellBit)
###Output
_____no_output_____
###Markdown
Scenario 1: SWIPE for all the levels
###Code
swipe_ll = [-1]
w_swipe = quant.program_noise_cond(weight_q, weight_b, hrs, lrs, swipe_ll)
w_swipe = w_swipe.div(nonideal_unit)
# swipe
wq_swipe = binary2dec(nbit, w_swipe, cellBit=cellBit)
ql = wq_ideal.unique().cpu().numpy()
print(ql)
plt.figure(figsize=(10,6))
plt.scatter(ql, np.zeros(ql.shape), marker='s', s=100)
sns.distplot(wq_swipe.view(-1).cpu().numpy())
plt.xticks([ii for ii in range(15)])
plt.title("4-bit Weight Programmed with SWIPE scheme", fontsize=16, fontweight='bold')
plt.grid(True)
plt.savefig("./save/figs/swipe_all_4bit.png", bbox_inches = 'tight', pad_inches = 0.1)
###Output
_____no_output_____
###Markdown
Scenario 2: Non-SWIPE for level 7
###Code
swipe_ll = [7]
w_swipe = quant.program_noise_cond(weight_q, weight_b, hrs, lrs, swipe_ll)
w_swipe = w_swipe.div(nonideal_unit)
# swipe
wq_swipe = binary2dec(nbit, w_swipe, cellBit=cellBit)
plt.figure(figsize=(10,6))
plt.scatter(ql, np.zeros(ql.shape), marker='s', s=100)
sns.distplot(wq_swipe.view(-1).cpu().numpy())
plt.xticks([ii for ii in range(15)])
plt.title("4-bit Weight Programmed with SWIPE scheme except level 7", fontsize=16, fontweight='bold')
plt.grid(True)
plt.savefig("./save/figs/nonswipe7_4bit.png", bbox_inches = 'tight', pad_inches = 0.1)
###Output
_____no_output_____
###Markdown
Scenario 3: Non-SWIPE for level 7, 8, 9
###Code
swipe_ll = [7,8,9]
w_swipe = quant.program_noise_cond(weight_q, weight_b, hrs, lrs, swipe_ll)
w_swipe = w_swipe.div(nonideal_unit)
# swipe
wq_swipe = binary2dec(nbit, w_swipe, cellBit=cellBit)
plt.figure(figsize=(10,6))
plt.scatter(ql, np.zeros(ql.shape), marker='s', s=100)
sns.distplot(wq_swipe.view(-1).cpu().numpy())
plt.xticks([ii for ii in range(15)])
plt.title("4-bit Weight Programmed with SWIPE scheme except level 7 8 9", fontsize=16, fontweight='bold')
plt.grid(True)
plt.savefig("./save/figs/nonswipe789_4bit.png", bbox_inches = 'tight', pad_inches = 0.1)
###Output
_____no_output_____
###Markdown
Scenario 4: Non-SWIPE for level 6,7,8,9
###Code
swipe_ll = [6,7,8,9]
w_swipe = quant.program_noise_cond(weight_q, weight_b, hrs, lrs, swipe_ll)
w_swipe = w_swipe.div(nonideal_unit)
# swipe
wq_swipe = binary2dec(nbit, w_swipe, cellBit=cellBit)
plt.figure(figsize=(10,6))
plt.scatter(ql, np.zeros(ql.shape), marker='s', s=100)
sns.distplot(wq_swipe.view(-1).cpu().numpy())
plt.xticks([ii for ii in range(15)])
plt.title("4-bit Weight Programmed with SWIPE scheme except level 6 7 8 9", fontsize=16, fontweight='bold')
plt.grid(True)
plt.savefig("./save/figs/nonswipe6789_4bit.png", bbox_inches = 'tight', pad_inches = 0.1)
###Output
_____no_output_____
###Markdown
Scenario 5: Non-SWIPE for all levels
###Code
swipe_ll = [ii for ii in range(15)]
w_swipe = quant.program_noise_cond(weight_q, weight_b, hrs, lrs, swipe_ll)
w_swipe = w_swipe.div(nonideal_unit)
# swipe
wq_swipe = binary2dec(nbit, w_swipe, cellBit=cellBit)
plt.figure(figsize=(10,6))
plt.scatter(ql, np.zeros(ql.shape), marker='s', s=100)
sns.distplot(wq_swipe.view(-1).cpu().numpy())
plt.xticks([ii for ii in range(15)])
plt.title("4-bit Weight Programmed with Non-SWIPE scheme", fontsize=16, fontweight='bold')
plt.grid(True)
plt.savefig("./save/figs/nonswipe_4bit.png", bbox_inches = 'tight', pad_inches = 0.1)
###Output
_____no_output_____
###Markdown
Layer level statistics
###Code
total = weight_q.numel()
swipe = [3,7,8,9]
swipe_perc = 0
all_perc = 0
for ii in weight_q.unique():
n = weight_q[weight_q==ii].numel()
perc = n/total * 100
if ii in swipe:
swipe_perc += perc
print("Level: {}; Percentage: {:.3f}%".format(int(ii),perc))
all_perc += perc
print("{:.2f}% of weights are programmed with SWIPE; {:.2f}% of weights are programmed by Non-SWIPE scheme".format(swipe_perc, all_perc-swipe_perc))
###Output
_____no_output_____
###Markdown
Model level statistics
###Code
total_w = 0
level_element = np.zeros(15)
for k, v in state_dict.items():
if len(v.size()) == 4 and v.size(1) > 3:
wq, wscale = quant.stats_quant(v, nbit=nbit, dequantize=False)
wq = wq.add(7)
total_w += wq.numel()
layer_element = []
for ii in wq.unique():
n = wq[wq==ii].numel()
layer_element.append(n)
print(layer_element)
level_element += np.array(layer_element)
perc = level_element / total_w * 100
swipe_perc = 0
swipe = [6, 7,8,9]
for ii, p in enumerate(perc):
if ii in swipe:
swipe_perc += p
print("Percentage of {} = {:.2f}".format(swipe, swipe_perc))
###Output
_____no_output_____
###Markdown
Original Data Distribution
###Code
hcv.hist(bins=40, figsize=(25,25), layout=(10,3))
plt.savefig('hcv_distribution.png')
###Output
_____no_output_____
###Markdown
Synthetic Data 1 Distribution
###Code
syn_data1.hist(bins=40, figsize=(25,25), layout=(10,3))
plt.savefig('syn_data1_distribution.png')
###Output
_____no_output_____
###Markdown
Synthetic Data 2 Distribution
###Code
syn_data2.hist(bins=40, figsize=(25,25), layout=(10,3))
plt.savefig('syn_data2_distribution.png')
###Output
_____no_output_____
###Markdown
Original Discretized Data Distribution
###Code
disc_hcv.hist(bins=40, figsize=(25,25), layout=(10,3))
plt.savefig('disc_hcv_distribution.png')
###Output
_____no_output_____
###Markdown
Synthetic Data 3 Distribution
###Code
syn_data3.hist(bins=40, figsize=(25,25), layout=(10,3))
plt.savefig('syn_data3_distribution.png')
###Output
_____no_output_____
###Markdown
What is a distribution?An object-oriented exploration of one of the most useful concepts in statistics.Copyright 2016 Allen DowneyMIT License: http://opensource.org/licenses/MIT
###Code
from __future__ import print_function, division
%matplotlib inline
%precision 6
import matplotlib.pyplot as plt
import numpy as np
from numpy.fft import fft, ifft
from inspect import getsourcelines
def show_code(func):
lines, _ = getsourcelines(func)
for line in lines:
print(line, end='')
###Output
_____no_output_____
###Markdown
Playing dice with the universeOne of the recurring themes of my books is the use of object-oriented programming to explore mathematical ideas. Many mathematical entities are hard to define because they are so abstract. Representing them in Python puts the focus on what operations each entity supports -- that is, what the objects can *do* -- rather than on what they *are*.In this notebook, I explore the idea of a probability distribution, which is one of the most important ideas in statistics, but also one of the hardest to explain.To keep things concrete, I'll start with one of the usual examples: rolling dice. When you roll a standard six-sided die, there are six possible outcomes -- numbers 1 through 6 -- and all outcomes are equally likely.If you roll two dice and add up the total, there are 11 possible outcomes -- numbers 2 through 12 -- but they are not equally likely. The least likely outcomes, 2 and 12, only happen once in 36 tries; the most likely outcome happens 1 times in 6.And if you roll three dice and add them up, you get a different set of possible outcomes with a different set of probabilities.What I've just described are three random number generators, which are also called **random processes**. The output from a random process is a **random variable**, or more generally a set of random variables. And each random variable has **probability distribution**, which is the set of possible outcomes and the corresponding set of probabilities.There are many ways to represent a probability distribution. The most obvious is a **probability mass function**, or PMF, which is a function that maps from each possible outcome to its probability. And in Python, the most obvious way to represent a PMF is a dictionary that maps from outcomes to probabilities.Here's a definition for a class named `Pmf` that represents a PMF.
###Code
class Pmf:
def __init__(self, d=None):
"""Initializes the distribution.
d: map from values to probabilities
"""
self.d = {} if d is None else d
def items(self):
"""Returns a sequence of (value, prob) pairs."""
return self.d.items()
def __repr__(self):
"""Returns a string representation of the object."""
cls = self.__class__.__name__
return '%s(%s)' % (cls, repr(self.d))
def __getitem__(self, value):
"""Looks up the probability of a value."""
return self.d.get(value, 0)
def __setitem__(self, value, prob):
"""Sets the probability associated with a value."""
self.d[value] = prob
def __add__(self, other):
"""Computes the Pmf of the sum of values drawn from self and other.
other: another Pmf or a scalar
returns: new Pmf
"""
pmf = Pmf()
for v1, p1 in self.items():
for v2, p2 in other.items():
pmf[v1 + v2] += p1 * p2
return pmf
def total(self):
"""Returns the total of the probabilities."""
return sum(self.d.values())
def normalize(self):
"""Normalizes this PMF so the sum of all probs is 1.
Returns: the total probability before normalizing
"""
total = self.total()
for x in self.d:
self.d[x] /= total
return total
def mean(self):
"""Computes the mean of a PMF."""
return sum(p * x for x, p in self.items())
def var(self, mu=None):
"""Computes the variance of a PMF.
mu: the point around which the variance is computed;
if omitted, computes the mean
"""
if mu is None:
mu = self.mean()
return sum(p * (x - mu) ** 2 for x, p in self.items())
def expect(self, func):
"""Computes the expectation of a given function, E[f(x)]
func: function
"""
return sum(p * func(x) for x, p in self.items())
def display(self):
"""Displays the values and probabilities."""
for value, prob in self.items():
print(value, prob)
def plot_pmf(self, **options):
"""Plots the values and probabilities."""
xs, ps = zip(*sorted(self.items()))
plt.plot(xs, ps, **options)
###Output
_____no_output_____
###Markdown
Each `Pmf` contains a dictionary named `d` that maps values to their probabilities. To show how this class is used, I'll create a `Pmf` that represents a six-sided die:
###Code
d6 = Pmf()
for x in range(1, 7):
d6[x] = 1
d6.display()
###Output
1 1
2 1
3 1
4 1
5 1
6 1
###Markdown
Initially the "probabilities" are all 1, so the total probability in the `Pmf` is 6, which doesn't make a lot of sense. In a proper, meaningful, PMF, the probabilities add up to 1, which implies that one outcome, and only one outcome, will occur (for any given roll of the die).We can take this "unnormalized" distribution and make it a proper `Pmf` using the `normalize` method. Here's what the method looks like:
###Code
show_code(Pmf.normalize)
###Output
def normalize(self):
"""Normalizes this PMF so the sum of all probs is 1.
Args:
fraction: what the total should be after normalization
Returns: the total probability before normalizing
"""
total = self.total()
for x in self.d:
self.d[x] /= total
return total
###Markdown
`normalize` adds up the probabilities in the PMF and divides through by the total. The result is a `Pmf` with probabilities that add to 1.Here's how it's used:
###Code
d6.normalize()
d6.display()
###Output
1 0.16666666666666666
2 0.16666666666666666
3 0.16666666666666666
4 0.16666666666666666
5 0.16666666666666666
6 0.16666666666666666
###Markdown
The fundamental operation provided by a `Pmf` is a "lookup"; that is, we can look up an outcome and get the corresponding probability. `Pmf` provides `__getitem__`, so we can use bracket notation to look up an outcome:
###Code
d6[3]
###Output
_____no_output_____
###Markdown
And if you look up a value that's not in the `Pmf`, the probability is 0.
###Code
d6[7]
###Output
_____no_output_____
###Markdown
**Exercise:** Create a `Pmf` that represents a six-sided die that is red on two sides and blue on the other four.
###Code
# Solution
die = Pmf(dict(red=2, blue=4))
die.normalize()
die.display()
###Output
blue 0.6666666666666666
red 0.3333333333333333
###Markdown
Is that all there is?So is a `Pmf` a distribution? No. At least in this framework, a `Pmf` is one of several representations of a distribution. Other representations include the **cumulative distribution function**, or CDF, and the **characteristic function**.These representations are equivalent in the sense that they all contain the same information; if I give you any one of them, you can figure out the others (and we'll see how soon).So why would we want different representations of the same information? The fundamental reason is that there are many different operations we would like to perform with distributions; that is, questions we would like to answer. Some representations are better for some operations, but none of them is the best for all operations.So what are the questions we would like a distribution to answer? They include:* What is the probability of a given outcome?* What is the mean of the outcomes, taking into account their probabilities?* What is the variance, and other moments, of the outcome?* What is the probability that the outcome exceeds (or falls below) a threshold?* What is the median of the outcomes, that is, the 50th percentile?* What are the other percentiles?* How can we generate a random sample from this distribution, with the appropriate probabilities?* If we run two random processes and choose the maximum of the outcomes (or minimum), what is the distribution of the result?* If we run two random processes and add up the results, what is the distribution of the sum?Each of these questions corresponds to a method we would like a distribution to provide. But as I said, there is no one representation that answers all of them easily and efficiently. So let's look at the different representations and see what they can do.Getting back to the `Pmf`, we've already seen how to look up the probability of a given outcome. Here's the code:
###Code
show_code(Pmf.__getitem__)
###Output
def __getitem__(self, value):
"""Looks up the probability of a value."""
return self.d.get(value, 0)
###Markdown
Python dictionaries are implemented using hash tables, so we expect `__getitem__` to be fast. In terms of algorithmic complexity, it is constant time, or $O(1)$. Moments and expectationsThe `Pmf` representation is also good for computing mean, variance, and other moments. Here's the implementation of `Pmf.mean`:
###Code
show_code(Pmf.mean)
###Output
def mean(self):
"""Computes the mean of a PMF."""
return sum(p * x for x, p in self.items())
###Markdown
This implementation is efficient, in the sense that it is $O(n)$, and because it uses a comprehension to traverse the outcomes, the overhead is low. The implementation of `Pmf.var` is similar:
###Code
show_code(Pmf.var)
###Output
def var(self, mu=None):
"""Computes the variance of a PMF.
mu: the point around which the variance is computed;
if omitted, computes the mean
"""
if mu is None:
mu = self.mean()
return sum(p * (x - mu) ** 2 for x, p in self.items())
###Markdown
And here's how they are used:
###Code
d6.mean(), d6.var()
###Output
_____no_output_____
###Markdown
The structure of `mean` and `var` is the same: they traverse the outcomes and their probabilities, `x` and `p`, and add up the product of `p` and some function of `x`.We can generalize this structure to compute the **expectation** of any function of `x`, which is defined as$E[f] = \sum_x p(x) f(x)$`Pmf` provides `expect`, which takes a function object, `func`, and returns the expectation of `func`:
###Code
show_code(Pmf.expect)
###Output
def expect(self, func):
"""Computes the expectation of a given function, E[f(x)]
func: function
"""
return sum(p * func(x) for x, p in self.items())
###Markdown
As an example, we can use `expect` to compute the third central moment of the distribution:
###Code
mu = d6.mean()
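# Added check: expect() with a squared deviation reproduces var()
print(d6.expect(lambda x: (x-mu)**2), d6.var())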
d6.expect(lambda x: (x-mu)**3)
###Output
_____no_output_____
###Markdown
Because the distribution is symmetric, the third central moment is 0. AdditionThe next question we'll answer is the last one on the list: if we run two random processes and add up the results, what is the distribution of the sum? In other words, if the result of the first process is a random variable, $X$, and the result of the second is $Y$, what is the distribution of $X+Y$?The `Pmf` representation of the distribution can answer this question pretty well, but we'll see later that the characteristic function is even better.Here's the implementation:
###Code
show_code(Pmf.__add__)
###Output
def __add__(self, other):
"""Computes the Pmf of the sum of values drawn from self and other.
other: another Pmf or a scalar
returns: new Pmf
"""
pmf = Pmf()
for v1, p1 in self.items():
for v2, p2 in other.items():
pmf[v1 + v2] += p1 * p2
return pmf
###Markdown
The outer loop traverses the outcomes and probabilities of the first `Pmf`; the inner loop traverses the second `Pmf`. Each time through the loop, we compute the sum of the outcome pair, `v1` and `v2`, and the probability that the pair occurs.Note that this method implicitly assumes that the two processes are independent; that is, the outcome from one does not affect the other. That's why we can compute the probability of the pair by multiplying the probabilities of the outcomes.To demonstrate this method, we'll start with `d6` again. Here's what it looks like:
###Code
d6.plot_pmf()
###Output
_____no_output_____
###Markdown
When we use the `+` operator, Python invokes the `__add__` method, which returns a new `Pmf` object. Here's the `Pmf` that represents the sum of two dice:
###Code
twice = d6 + d6
twice.plot_pmf(color='green')
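# Added check: the most likely sum, 7, occurs with probability 6/36, i.e. once in 6 tries
print(twice[7])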
###Output
_____no_output_____
###Markdown
And here's the `Pmf` that represents the sum of three dice.
###Code
thrice = twice + d6
d6.plot_pmf()
twice.plot_pmf()
thrice.plot_pmf()
###Output
_____no_output_____
###Markdown
As we add up more dice, the result converges to the bell shape of the Gaussian distribution. **Exercise:** If you did the previous exercise, you have a `Pmf` that represents a die with red on 2 sides and blue on the other 4. Use the `+` operator to compute the outcomes of rolling two of these dice and the probabilities of the outcomes.Note: if you represent the outcomes as strings, the `__add__` method will concatenate them instead of adding, which actually works.
###Code
# Solution
dice = die + die
dice.display()
###Output
redred 0.1111111111111111
redblue 0.2222222222222222
blueblue 0.4444444444444444
bluered 0.2222222222222222
###Markdown
Cumulative probabilities The next few questions on the list are related to the median and other percentiles. They are harder to answer with the `Pmf` representation, but easier with a **cumulative distribution function** (CDF).A CDF is a map from an outcome, $x$, to its cumulative probability, which is the probability that the outcome is less than or equal to $x$. In math notation:$CDF(x) = Prob(X \le x)$where $X$ is the outcome of a random process, and $x$ is the threshold we are interested in. For example, if $CDF$ is the cumulative distribution for the sum of three dice, the probability of getting 5 or less is $CDF(5)$, and the probability of getting 6 or more is $1 - CDF(5)$.To represent a CDF in Python, I use a sorted list of outcomes and the corresponding list of cumulative probabilities.
###Code
class Cdf:
def __init__(self, xs, ps):
self.xs = xs
self.ps = ps
def __repr__(self):
return 'Cdf(%s, %s)' % (repr(self.xs), repr(self.ps))
def __getitem__(self, x):
return self.cumprobs([x])[0]
def cumprobs(self, values):
"""Gets probabilities for a sequence of values.
values: any sequence that can be converted to NumPy array
returns: NumPy array of cumulative probabilities
"""
values = np.asarray(values)
index = np.searchsorted(self.xs, values, side='right')
ps = self.ps[index-1]
ps[values < self.xs[0]] = 0.0
return ps
def values(self, ps):
"""Returns InverseCDF(p), the value that corresponds to probability p.
ps: sequence of numbers in the range [0, 1]
returns: NumPy array of values
"""
ps = np.asarray(ps)
if np.any(ps < 0) or np.any(ps > 1):
raise ValueError('Probability p must be in range [0, 1]')
index = np.searchsorted(self.ps, ps, side='left')
return self.xs[index]
def sample(self, shape):
"""Generates a random sample from the distribution.
shape: dimensions of the resulting NumPy array
"""
ps = np.random.random(shape)
return self.values(ps)
def maximum(self, k):
"""Computes the CDF of the maximum of k samples from the distribution."""
return Cdf(self.xs, self.ps**k)
def display(self):
"""Displays the values and cumulative probabilities."""
for x, p in zip(self.xs, self.ps):
print(x, p)
def plot_cdf(self, **options):
"""Plots the cumulative probabilities."""
plt.plot(self.xs, self.ps, **options)
###Output
_____no_output_____
###Markdown
`compute_cumprobs` takes a dictionary that maps outcomes to probabilities, sorts the outcomes in increasing order, then makes two NumPy arrays: `xs` is the sorted sequence of values; `ps` is the sequence of cumulative probabilities:
###Code
def compute_cumprobs(d):
"""Computes cumulative probabilities.
d: map from values to probabilities
"""
xs, freqs = zip(*sorted(d.items()))
xs = np.asarray(xs)
    ps = np.cumsum(freqs, dtype=float)
ps /= ps[-1]
return xs, ps
###Output
_____no_output_____
###Markdown
Here's how we use it to create a `Cdf` object for the sum of three dice:
###Code
xs, ps = compute_cumprobs(thrice.d)
cdf = Cdf(xs, ps)
cdf.display()
###Output
3 0.00462962962963
4 0.0185185185185
5 0.0462962962963
6 0.0925925925926
7 0.162037037037
8 0.259259259259
9 0.375
10 0.5
11 0.625
12 0.740740740741
13 0.837962962963
14 0.907407407407
15 0.953703703704
16 0.981481481481
17 0.99537037037
18 1.0
###Markdown
Because we have to sort the values, the time to compute a `Cdf` is $O(n \log n)$.Here's what the CDF looks like:
###Code
cdf.plot_cdf()
###Output
_____no_output_____
###Markdown
The range of the CDF is always from 0 to 1.Now we can compute $CDF(x)$ by searching the `xs` to find the right location, or index, and then looking up the corresponding probability. Because the `xs` are sorted, we can use bisection search, which is $O(\log n)$.`Cdf` provides `cumprobs`, which takes an array of values and returns the corresponding probabilities:
###Code
show_code(Cdf.cumprobs)
###Output
def cumprobs(self, values):
"""Gets probabilities for a sequence of values.
values: any sequence that can be converted to NumPy array
returns: NumPy array of cumulative probabilities
"""
values = np.asarray(values)
index = np.searchsorted(self.xs, values, side='right')
ps = self.ps[index-1]
ps[values < self.xs[0]] = 0.0
return ps
###Markdown
The details here are a little tricky because we have to deal with some "off by one" problems, and if any of the values are less than the smallest value in the `Cdf`, we have to handle that as a special case. But the basic idea is simple, and the implementation is efficient.Now we can look up probabilities for a sequence of values:
###Code
cdf.cumprobs((2, 10, 18))
###Output
_____no_output_____
###Markdown
`Cdf` also provides `__getitem__`, so we can use brackets to look up a single value:
###Code
cdf[5]
###Output
_____no_output_____
###Markdown
**Exercise:** If you roll three dice, what is the probability of getting 15 or more?
###Code
# Solution
1 - cdf[14]
###Output
_____no_output_____
###Markdown
Reverse lookupYou might wonder why I represent a `Cdf` with two lists rather than a dictionary. After all, a dictionary lookup is constant time and bisection search is logarithmic. The reason is that we often want to use a `Cdf` to do a reverse lookup; that is, given a probability, we would like to find the corresponding value. With two sorted lists, a reverse lookup has the same performance as a forward lookup, $O(\log n)$.Here's the implementation:
###Code
show_code(Cdf.values)
###Output
def values(self, ps):
"""Returns InverseCDF(p), the value that corresponds to probability p.
ps: sequence of numbers in the range [0, 1]
returns: NumPy array of values
"""
ps = np.asarray(ps)
if np.any(ps < 0) or np.any(ps > 1):
raise ValueError('Probability p must be in range [0, 1]')
index = np.searchsorted(self.ps, ps, side='left')
return self.xs[index]
###Markdown
And here's an example that finds the 10th, 50th, and 90th percentiles:
###Code
cdf.values((0.1, 0.5, 0.9))
###Output
_____no_output_____
###Markdown
The `Cdf` representation is also good at generating random samples, by choosing a probability uniformly from 0 to 1 and finding the corresponding value. Here's the method `Cdf` provides:
###Code
show_code(Cdf.sample)
###Output
def sample(self, shape):
"""Generates a random sample from the distribution.
shape: dimensions of the resulting NumPy array
"""
ps = np.random.random(shape)
return self.values(ps)
###Markdown
The result is a NumPy array with the given `shape`. The time to generate each random choice is $O(\log n)$.Here are some examples that use it.
###Code
cdf.sample(1)
cdf.sample(6)
cdf.sample((2, 2))
###Output
_____no_output_____
###Markdown
**Exercise:** Write a function that takes a `Cdf` object and returns the interquartile range (IQR), which is the difference between the 75th and 25th percentiles.
###Code
# Solution
def iqr(cdf):
values = cdf.values((0.25, 0.75))
return np.diff(values)[0]
iqr(cdf)
###Output
_____no_output_____
###Markdown
Max and minThe `Cdf` representation is particularly good for finding the distribution of a maximum. For example, in Dungeons and Dragons, players create characters with random properties like strength and intelligence. The properties are generated by rolling three dice and adding them, so the CDF for each property is the `Cdf` we used in this example. Each character has 6 properties, so we might wonder what the distribution is for the best of the six.Here's the method that computes it:
###Code
show_code(Cdf.maximum)
###Output
def maximum(self, k):
"""Computes the CDF of the maximum of k samples from the distribution."""
return Cdf(self.xs, self.ps**k)
###Markdown
To get the distribution of the maximum, we make a new `Cdf` with the same values as the original, and with the `ps` raised to the `k`th power. Simple, right?To see how it works, suppose you generate six properties and your best is only a 10. That's unlucky, but you might wonder how unlucky. So, what is the chance of rolling 3 dice six times, and never getting anything better than 10?Well, that means that all six values were 10 or less. The probability that each of them is 10 or less is $CDF(10)$, because that's what the CDF means. So the probability that all 6 are 10 or less is $CDF(10)^6$. Now we can generalize that by replacing $10$ with any value of $x$ and $6$ with any integer $k$. The result is $CDF(x)^k$, which is the probability that all $k$ rolls are $x$ or less, and that is the CDF of the maximum.Here's how we use `Cdf.maximum`:
###Code
best = cdf.maximum(6)
best.plot_cdf()
best[10]
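# Added check: by the derivation above, CDF_max(10) should equal CDF(10)**6
print(best[10], cdf[10]**6)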
###Output
_____no_output_____
###Markdown
So the chance of generating a character whose best property is 10 is less than 2%. **Exercise:** Write a function that takes a CDF and returns the CDF of the *minimum* of `k` values.Hint: If the minimum is greater than $x$, that means all `k` values must be greater than $x$.
###Code
# Solution
def minimum(cdf, k):
return Cdf(cdf.xs, 1 - (1-cdf.ps)**k)
worst = minimum(cdf, 6)
worst.plot_cdf()
###Output
_____no_output_____
###Markdown
Characteristic functionAt this point we've answered all the questions on the list, but I want to come back to addition, because the algorithm we used with the `Pmf` representation is not as efficient as it could be. It enumerates all pairs of outcomes, so if there are $n$ values in each `Pmf`, the run time is $O(n^2)$. We can do better.The key is the **characteristic function**, which is the Fourier transform (FT) of the PMF. If you are familiar with the Fourier transform and the Convolution Theorem, keep reading. Otherwise, skip the rest of this cell and get to the code, which is much simpler than the explanation. Details for people who know about convolutionIf you are familiar with the FT in the context of spectral analysis of signals, you might wonder why we would possibly want to compute the FT of a PMF. The reason is the Convolution Theorem.It turns out that the algorithm we used to "add" two `Pmf` objects is a form of convolution. To see how that works, suppose we are computing the distribution of $Z = X+Y$. To make things concrete, let's compute the probability that the sum, $Z$, is 5. To do that, we can enumerate all possible values of $X$ like this:$Prob(Z=5) = \sum_x Prob(X=x) \cdot Prob(Y=5-x)$Now we can write each of those probabilities in terms of the PMF of $X$, $Y$, and $Z$:$PMF_Z(5) = \sum_x PMF_X(x) \cdot PMF_Y(5-x)$And now we can generalize by replacing 5 with any value of $z$:$PMF_Z(z) = \sum_x PMF_X(x) \cdot PMF_Y(z-x)$You might recognize that computation as convolution, denoted with the operator $\ast$. $PMF_Z = PMF_X \ast PMF_Y$Now, according to the Convolution Theorem:$FT(PMF_X \ast PMF_Y) = FT(PMF_X) \cdot FT(PMF_Y)$Or, taking the inverse FT of both sides:$PMF_X \ast PMF_Y = IFT(FT(PMF_X) \cdot FT(PMF_Y))$In words, to compute the convolution of $PMF_X$ and $PMF_Y$, we can compute the FT of $PMF_X$ and $PMF_Y$ and multiply them together, then compute the inverse FT of the result.Let's see how that works. Here's a class that represents a characteristic function.
###Code
class CharFunc:
def __init__(self, hs):
"""Initializes the CF.
hs: NumPy array of complex
"""
self.hs = hs
def __mul__(self, other):
"""Computes the elementwise product of two CFs."""
return CharFunc(self.hs * other.hs)
def make_pmf(self, thresh=1e-11):
"""Converts a CF to a PMF.
Values with probabilities below `thresh` are dropped.
"""
ps = ifft(self.hs)
d = dict((i, p) for i, p in enumerate(ps.real) if p > thresh)
return Pmf(d)
def plot_cf(self, **options):
"""Plots the real and imaginary parts of the CF."""
n = len(self.hs)
xs = np.arange(-n//2, n//2)
hs = np.roll(self.hs, len(self.hs) // 2)
plt.plot(xs, hs.real, label='real', **options)
plt.plot(xs, hs.imag, label='imag', **options)
plt.legend()
###Output
_____no_output_____
###Markdown
The attribute, `hs`, is the Fourier transform of the `Pmf`, represented as a NumPy array of complex numbers.The following function takes a dictionary that maps from outcomes to their probabilities, and computes the FT of the PMF:
###Code
def compute_fft(d, n=256):
"""Computes the FFT of a PMF of integers.
Values must be integers less than `n`.
"""
xs, freqs = zip(*d.items())
    ps = np.zeros(n)
ps[xs,] = freqs
hs = fft(ps)
return hs
###Output
_____no_output_____
###Markdown
`fft` computes the Fast Fourier Transform (FFT), which is called "fast" because the run time is $O(n \log n)$.Here's what the characteristic function looks like for the sum of three dice (plotting the real and imaginary parts of `hs`):
###Code
hs = compute_fft(thrice)
cf = CharFunc(hs)
cf.plot_cf()
###Output
_____no_output_____
###Markdown
The characteristic function contains all of the information from the `Pmf`, but it is encoded in a form that is hard to interpret. However, if we are given a characteristic function, we can find the corresponding `Pmf`.`CharFunc` provides `make_pmf`, which uses the inverse FFT to get back to the `Pmf` representation. Here's the code:
###Code
show_code(CharFunc.make_pmf)
###Output
def make_pmf(self, thresh=1e-11):
"""Converts a CF to a PMF.
Values with probabilities below `thresh` are dropped.
"""
ps = ifft(self.hs)
d = dict((i, p) for i, p in enumerate(ps.real) if p > thresh)
return Pmf(d)
###Markdown
And here's an example:
###Code
cf.make_pmf().plot_pmf()
###Output
_____no_output_____
###Markdown
Now we can use the characteristic function to compute a convolution. `CharFunc` provides `__mul__`, which multiplies the `hs` elementwise and returns a new `CharFunc` object:
###Code
show_code(CharFunc.__mul__)
###Output
def __mul__(self, other):
"""Computes the elementwise product of two CFs."""
return CharFunc(self.hs * other.hs)
###Markdown
And here's how we can use it to compute the distribution of the sum of 6 dice.
###Code
sixth = (cf * cf).make_pmf()
sixth.plot_pmf()
###Output
_____no_output_____
###Markdown
Here are the probabilities, mean, and variance.
###Code
sixth.display()
sixth.mean(), sixth.var()
###Output
_____no_output_____
###Markdown
This might seem like a roundabout way to compute a convolution, but it is efficient. The time to compute the `CharFunc` objects is $O(n \log n)$. Multiplying them together is $O(n)$. And converting back to a `Pmf` is $O(n \log n)$.So the whole process is $O(n \log n)$, which is better than `Pmf.__add__`, which is $O(n^2)$. **Exercise:** Plot the magnitude of `cf.hs` using `np.abs`. What does that shape look like?Hint: it might be clearer if you use `np.roll` to put the peak of the CF in the middle.
###Code
#Solution
n = len(cf.hs)
mags = np.abs(cf.hs)
plt.plot(np.roll(mags, n//2))
None
# The result approximates a Gaussian curve because
# the PMF is approximately Gaussian and the FT of a
# Gaussian is also Gaussian
###Output
_____no_output_____
###Markdown
DistributionsFinally, let's get back to the question we started with: *what is a distribution?*I've said that `Pmf`, `Cdf`, and `CharFunc` are different ways to represent the same information. For the questions we want to answer, some representations are better than others. But how should we represent the distribution itself?One option is to treat each representation as a **mixin**; that is, a class that provides a set of capabilities. A distribution inherits all of the capabilities from all of the representations. Here's a class that shows what I mean:
###Code
class Dist(Pmf, Cdf, CharFunc):
def __init__(self, d):
"""Initializes the Dist.
Calls all three __init__ methods.
"""
Pmf.__init__(self, d)
Cdf.__init__(self, *compute_cumprobs(d))
CharFunc.__init__(self, compute_fft(d))
def __add__(self, other):
"""Computes the distribution of the sum using Pmf.__add__.
"""
pmf = Pmf.__add__(self, other)
return Dist(pmf.d)
def __mul__(self, other):
"""Computes the distribution of the sum using CharFunc.__mul__.
"""
pmf = CharFunc.__mul__(self, other).make_pmf()
return Dist(pmf.d)
###Output
_____no_output_____
###Markdown
When you create a `Dist`, you provide a dictionary of values and probabilities.`Dist.__init__` calls the other three `__init__` methods to create the `Pmf`, `Cdf`, and `CharFunc` representations. The result is an object that has all the attributes and methods of the three representations.As an example, I'll create a `Dist` that represents the sum of six dice:
###Code
dist = Dist(sixth.d)
dist.plot_pmf()
###Output
_____no_output_____
###Markdown
We inherit `__getitem__` from `Pmf`, so we can look up the probability of a value.
###Code
dist[21]
###Output
_____no_output_____
###Markdown
We also get mean and variance from `Pmf`:
###Code
dist.mean(), dist.var()
###Output
_____no_output_____
###Markdown
But we can also use methods from `Cdf`, like `values`:
###Code
dist.values((0.25, 0.5, 0.75))
###Output
_____no_output_____
###Markdown
And `cumprobs`
###Code
dist.cumprobs((18, 21, 24))
###Output
_____no_output_____
###Markdown
And `sample` and `plot_cdf`
###Code
dist.sample(10)
dist.maximum(6).plot_cdf()
###Output
_____no_output_____
###Markdown
`Dist.__add__` uses `Pmf.__add__`, which performs convolution the slow way:
###Code
twelfth = dist + dist
twelfth.plot_pmf()
twelfth.mean()
###Output
_____no_output_____
###Markdown
`Dist.__mul__` uses `CharFunc.__mul__`, which performs convolution the fast way.
###Code
twelfth_fft = dist * dist
twelfth_fft.plot_pmf()
twelfth_fft.mean()
###Output
_____no_output_____
###Markdown
$$\langle M\rangle=\frac{M}{N}$$$$\implies M=N\langle M\rangle$$$$\langle M\rangle=\int{M(a)P(a)\mathrm{d}a}$$$$\langle M\rangle=\frac{4}{3}\pi\rho\int{a^3P(a)\mathrm{d}a}$$
###Code
density = 2650 # kg/m**3
masses = 4/3*np.pi*density*radii**3
masses.mean()
4/3*np.pi*density*np.trapz(bin_centers**3*number_pdf, bin_centers)
4/3*np.pi*density*np.trapz(bin_centers**3*volume_pdf, bin_centers)
from said.sedimentsizedistribution import SedimentSizeDistribution
size_distribution = SedimentSizeDistribution(2*bin_edges, volume_cdf)
cdf_diameters, calc_number_cdf = size_distribution.get_number_cdf()
_ = plt.plot(number_cdf, calc_number_cdf)
plt.autoscale(tight=True)
_ = plt.xlabel('Number CDF')
_ = plt.ylabel('Number CDF calculated from volume CDF')
cdf_difference = calc_number_cdf - number_cdf
np.abs(cdf_difference).max()
pdf_diameters, calc_number_pdf = size_distribution.get_number_pdf()
calc_number_pdf = 2*calc_number_pdf
_ = plt.plot(number_pdf, calc_number_pdf)
plt.autoscale(tight=True)
_ = plt.xlabel('Number PDF')
_ = plt.ylabel('Number PDF calculated from volume CDF')
pdf_difference = calc_number_pdf - number_pdf
np.abs(pdf_difference).max()
###Output
_____no_output_____ |
notebooks/scientific_python/scipy_lesson.ipynb | ###Markdown
Fitting Data with SciPy Simple Least Squares FitFirst let's try a simple least squares example using noisy data
###Code
# Global imports and settings
# Matplotlib
%matplotlib inline
from matplotlib import pyplot as plt
# Print options
import numpy as np
from scipy import optimize
# Generate data points with noise
num_points = 150
Tx = np.linspace(5., 8., num_points)
tX = 11.86*np.cos(2*np.pi/0.81*Tx-1.32) + 0.64*Tx+4*((0.5-np.random.rand(num_points))*np.exp(2*np.random.rand(num_points)**2))
plt.plot(Tx,tX,"ro")
# Fit the first set
fitfunc = lambda p, x: p[0]*np.cos(2*np.pi/p[1]*x+p[2]) + p[3]*x # Target function
errfunc = lambda p, x, y: fitfunc(p, x) - y # Distance to the target function
p0 = [-15., 0.8, 0., -1.] # Initial guess for the parameters
p1, success = optimize.leastsq(errfunc, p0[:], args=(Tx, tX))
print(p1)
time = np.linspace(Tx.min(), Tx.max(), 100)
plt.plot(Tx, tX, "ro", time, fitfunc(p1, time), "r-") # Plot of the data and the fit
###Output
_____no_output_____
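The same model can also be fitted with `scipy.optimize.curve_fit`, which wraps `leastsq` behind a friendlier interface. This is just a sketch assuming the `np`, `optimize`, `Tx` and `tX` names defined above:
```python
# Equivalent fit using curve_fit instead of hand-written residuals.
def model(x, amp, period, phase, slope):
    return amp * np.cos(2 * np.pi / period * x + phase) + slope * x

popt, pcov = optimize.curve_fit(model, Tx, tX, p0=[-15., 0.8, 0., -1.])
print(popt)  # fitted amplitude, period, phase and slope
```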
###Markdown
Power Law Fit to error bars
###Code
# Define function for calculating a power law
powerlaw = lambda x, amp, index: amp * (x**index)
##########
# Generate data points with noise
##########
num_points = 20
# Note: all positive, non-zero data
xdata = np.linspace(1.1, 10.1, num_points)
ydata = powerlaw(xdata, 10.0, -2.0) # simulated perfect data
yerr = 0.2 * ydata # simulated errors (10%)
ydata += np.random.randn(num_points) * yerr # simulated noisy data
logx = np.log10(xdata)
logy = np.log10(ydata)
logyerr = yerr / ydata
plt.errorbar(logx, logy, yerr=logyerr, fmt='k.') # Data
# define our (line) fitting function
fitfunc = lambda p, x: p[0] + p[1] * x
errfunc = lambda p, x, y, err: (y - fitfunc(p, x)) / err  # weighted residuals; leastsq squares and sums these internally
pinit = [1.0, -1.0]
out = optimize.leastsq(errfunc, pinit,
args=(logx, logy, logyerr), full_output=1)
pfinal = out[0]
covar = out[1]
print (pfinal)
print (covar)
index = pfinal[1]
amp = 10.0**pfinal[0]
plt.plot(logx, fitfunc(pfinal, logx), color="red") # Fit
plt.errorbar(logx, logy, yerr=logyerr, fmt='k.') # Data
###Output
_____no_output_____
###Markdown
Interpolation
###Code
from scipy import interpolate
num_points = 30
Tx = np.linspace(5., 8., num_points)
tX = 11.86*np.cos(2*np.pi/0.81*Tx-1.32) + 0.64*Tx+4*((0.5))
plt.plot(Tx,tX,"ro")
# We can use these points as an interpolation grid
interp_grid_lin = interpolate.interp1d(Tx,tX, kind="linear")
interp_grid_cub = interpolate.interp1d(Tx,tX, kind="cubic")
#lets use this to draw the results
px = np.linspace(5., 8., 1000)
interp_points_lin = interp_grid_lin(px)
interp_points_cub = interp_grid_cub(px)
plt.plot(Tx,tX,"ro")
plt.plot(px,interp_points_lin,"r-")
plt.plot(px,interp_points_cub,"b-")
###Output
_____no_output_____
###Markdown
Interpolation in more dimensionsSo far so uninteresting, but we can interpolate in more dimensions
###Code
from scipy import stats, random
num_points=10
x = np.linspace(-1,1, num_points)
y = np.linspace(-1,1, num_points)
X,Y = np.meshgrid(x,y)
r = np.sqrt(X.ravel() * X.ravel() + Y.ravel() * Y.ravel())
weight = stats.norm.pdf(r)
weight = weight.reshape(num_points, num_points)
print(weight.shape)
plt.imshow(weight, interpolation="None")
# Lets try creating a grid interpolator
grid_interp = interpolate.RegularGridInterpolator((x,y), weight)
xi = np.linspace(-1,1, num_points*10)
yi = np.linspace(-1,1, num_points*10)
Xi, Yi = np.meshgrid(xi, yi)
interp_w = grid_interp((Xi.ravel(), Yi.ravel()))
interp_w = interp_w.reshape(num_points*10, num_points*10)
plt.imshow(interp_w, interpolation="None")
# Data need not be on a grid though
x = (random.rand(num_points*num_points) * 2) - 1
y = (random.rand(num_points*num_points) * 2) - 1
r = np.sqrt(x*x +y*y)
weight = stats.norm.pdf(r)
lin_ND_interp = interpolate.LinearNDInterpolator((x,y), weight)
interp_ND_w = lin_ND_interp((Xi.ravel(), Yi.ravel()))
interp_ND_w = interp_ND_w.reshape(num_points*10, num_points*10)
plt.imshow(interp_ND_w, interpolation="None")
###Output
_____no_output_____
###Markdown
Integration
###Code
from scipy import integrate
# Lets just try integrating a gaussian
gaus = lambda x: stats.norm.pdf(x)
integral = integrate.quad(gaus, -2, 2)
print(integral)
###Output
_____no_output_____ |
notebooks/MuzeoEgizioExp.ipynb | ###Markdown
Muzeo EgizioSearch using experimental tabulated phases in DatabaseExp Created using MuzeoEgizio Notebook. Imports
###Code
from XRDXRFutils import Phase,DatabaseXRD, DataXRD, SpectraXRD, GaussNewton, PhaseList, PhaseMap, PhaseSearch, PhaseMapSave
from XRDXRFutils import GammaMap,ChiMap
import os
import pickle
from joblib import Parallel, delayed
import h5py
from sklearn.linear_model import LinearRegression
from scipy.optimize import curve_fit, least_squares
from numpy import linspace,concatenate,sqrt,log,histogram,array
from matplotlib.pyplot import sca,vlines,show,fill_between,sca,legend,imshow,subplots,plot,xlim,ylim,xlabel,ylabel,cm,title,scatter,colorbar,figure,vlines
from sklearn.cluster import KMeans,MiniBatchKMeans
from multiprocessing import Pool
from PIL import Image
def f_linear(x, a, b):
    # straight line y = a*x + b, used as the model for curve_fit below
    return a*x + b

def f_loss(x, t, y):
    # residuals of the same line for least_squares; x holds the parameters [a, b]
    return (x[0]*t + x[1]) - y
###Output
_____no_output_____
###Markdown
Define Paths and Spectra Parameters
###Code
path_xrd = '/home/shared/dataXRDXRF/MuseoEgizio2022/VoltoGeroglifici/'
path_database = '/home/shared/DatabaseXRD'
path_data = 'data/' # data of intermediate results, for fast loading
path_figures = 'figures/' # figures generated by the script
path_results = 'results/' # results generated by the script: raw data, tif maps
min_theta = 17
max_theta = 43
min_intensity = 0.1 # among the tabulated peaks, selects only the ones above this threshold of intensity (scale between 0 and 1)
first_n_peaks = None # selects the first n most intense peaks (if None, leaves all the peaks)\
sigma = 0.15
###Output
_____no_output_____
###Markdown
Read XRD Datafrom xrd.h5
###Code
try:
data = DataXRD().load_h5(path_xrd + 'xrd.h5')
except:
print('Reading from raw data.')
data = DataXRD().read_params(path_xrd + 'Scanning_Parameters.txt').read(path_xrd).calibrate_from_file(path_xrd + 'calibration.ini').remove_background(std = 5).save_h5(path_xrd + 'xrd.h5')
print("a: %.1f s: %.1f beta: %.3f"%(data.opt[0],data.opt[1],data.opt[2]))
figure(figsize=(6,4))
im = imshow(data.data.sum(axis=2))
show()
###Output
Loading: /home/shared/dataXRDXRF/MuseoEgizio2022/VoltoGeroglifici/xrd.h5
a: -1327.1 s: 2729.8 beta: 43.202
###Markdown
Read database Define PhasesThis is for simplification. Phases can be selected iteratively from database using 'Tab'
###Code
database = DatabaseXRD().read_cifs(path_database)
databaseExp = DatabaseXRD().read_cifs('DatabaseExp/')
print('Phases in database:',len(database))
print('Phases in databaseEXP:',len(databaseExp))
lazurite = database['Lazurite'][0]
hydrocerussite = database['Hydrocerussite'][0]
cinnabar = database['Cinnabar'][1]
barite = database['Barite'][0]
spinel = database['Spinel'][0]
calcite = database['Calcite'][0]
hematite = database['Hematite'][4]
huntite = database['Huntite'][0]
as4 = database['As4 O6'][0]
orpiment = database['Orpiment'][0]
cuprorivaite = database['Cuprorivaite'][0]
hematite = databaseExp['Hematite'][0]
orpiment = databaseExp['Orpiment'][0]
cuprorivaite = databaseExp['Cuprorivaite'][0]
huntite = databaseExp['Huntite'][0]
as4 = databaseExp['As4 O6'][0]
phases_a_s = PhaseList([hematite,orpiment,cuprorivaite,huntite,as4])
phases_a_s.get_theta(min_intensity=min_intensity,
min_theta = min_theta,
max_theta = max_theta,
first_n_peaks = first_n_peaks)
if 'pmax_a' in locals():
data.opt[0] = pmax_a
data.opt[1] = pmax_s
pme = ChiMap().from_data(data,phases_a_s,sigma = sigma)
%%time
pme = pme.search()
L1loss, MSEloss, overlap_area = pme.metrics()
chi = pme.chi()
fig,ax = subplots(len(pme.phases),1,figsize=(12,10))
for i,phase in enumerate(pme.phases):
ax[i].set_title(phase.label)
p = ax[i].imshow(chi[...,i],vmin=0,vmax=1.1)
colorbar(p,ax = ax[i])
show()
fig,ax = subplots(len(pme.phases),1,figsize=(12,10))
rescaling_chi = pme.chi() * data.rescaling**0.5
for i,phase in enumerate(pme.phases):
ax[i].set_title(phase.label)
p = ax[i].imshow(rescaling_chi[...,i],vmin=0,vmax=20)
colorbar(p,ax = ax[i])
show()
###Output
_____no_output_____
###Markdown
Histogram of $a$If $a$ is spread over too large an area, it might be that the phases are not right or a phase is missing
###Code
%%time
opt = pme.opt()
a = opt[...,0]
s = opt[...,1]
vmin = -1345
vmax = -1300
h,b = histogram(a,bins=512)
figure(figsize=(12,4))
plot(b[:-1],h)
xlim(b[0],b[-1])
ylim(0,h.max())
vlines(vmin,0,h.max(),'k',ls='--',lw=1)
vlines(vmax,0,h.max(),'k',ls='--',lw=1)
xlabel('$a$')
ylabel(r'count($a$)')
title(r'Histogram of $a$')
figure(figsize=(16,8))
title('Distribution map of $a$')
im = imshow(a,cmap='Spectral',vmin=vmin,vmax=vmax)
colorbar(im,fraction=0.011)
###Output
_____no_output_____
###Markdown
Plotting the $a,s$ dependenceThere is a slight hint of a second $a,s$ dependence, but it is weak.
###Code
%matplotlib inline
opt,var = curve_fit(f_linear,a.flatten(),s.flatten())
res = least_squares(f_loss,x0=opt,args=(a.flatten(),s.flatten()),loss='cauchy')
linear_y = f_linear(a.flatten(),*opt)
cauchy_y = f_linear(a.flatten(),*res['x'])
print('Linear:',opt)
print('Cauchy:',res['x'])
plot(a.flatten(),s.flatten(),'.',alpha=0.01)
x = linspace(a.min(),a.max(),10)
plot(x,f_linear(x,*opt),'-.',lw=2,label='fit linear')
plot(x,f_linear(x,*res['x']),'--',lw=2,label='fit cauchy')
plot(data.opt[0],data.opt[1],'k+',ms=12,label='inital fit')
print(a.mean(),s.mean())
legend(frameon=False)
xlabel(r'$a$')
ylabel(r'$s$')
pmax_a = b[h.argmax()]
pmax_s = f_linear(pmax_a, *res['x'])
print(pmax_a,pmax_s)
plot(pmax_a,pmax_s,'r+',ms=12,label='most likely')
show()
###Output
Linear: [ -3.46585263 -1856.32890111]
Cauchy: [ -1.42271378 803.14234292]
-1306.9784527154347 2673.4658116906644
-1298.4408903761782 2650.4520929896166
|
assignments/assignment_3/assignment_3_Jiaman_Wu.ipynb | ###Markdown
REMEMBER: FIRST CREATE A COPY OF THIS FILE WITH A UNIQUE NAME AND DO YOUR WORK THERE. AND MAKE SURE YOU COMMIT YOUR CHANGES TO THE `hw3_submissions` BRANCH. Assignment 3 | Cleaning and Exploring Data with Pandas In this assignment, you will investigate restaurant food safety scores for restaurants in San Francisco. Above is a sample score card for a restaurant. The scores and violation information have been made available by the San Francisco Department of Public Health. Loading Food Safety DataThere are 2 files in the data directory:1. business.csv containing food establishments in San Francisco1. inspections.csv containing retaurant inspections recordsLet's start by loading them into Pandas dataframes. One of the files, business.csv, has encoding (ISO-8859-1), so you will need to account for that when reading it. Question 1 Question 1aRead the two files noted above into two pandas dataframes named `bus` and `ins`, respectively. Print the first 5 rows of each to inspect them.
###Code
import pandas as pd
bus = pd.read_csv('data/businesses.csv', encoding='ISO-8859-1')
ins = pd.read_csv('data/inspections.csv')
bus.head()
ins.head()
###Output
_____no_output_____
###Markdown
Examining the Business dataFrom its name alone, we expect the `businesses.csv` file to contain information about the restaurants. Let's investigate this dataset. Question 2 Question 2a: How many records are there?
###Code
len(bus)
###Output
_____no_output_____
###Markdown
Question 2b: How many unique business IDs are there?
###Code
len(bus['business_id'].unique())
###Output
_____no_output_____
###Markdown
Question 2c: What are the 5 most common businesses by name, and how many are there in San Francisco?
###Code
bus[bus['city']=='San Francisco']['name'].value_counts().head(5)
###Output
_____no_output_____
###Markdown
Zip codeNext, let's explore some of the variables in the business table. We begin by examining the postal code. Question 3 Question 3aHow are the zip code values stored in python (i.e. data type)?To answer this you might want to examine a particular entry.
###Code
bus['postal_code'].values
type(bus['postal_code'].values[0])
###Output
_____no_output_____
###Markdown
Question 3bWhat are the unique values of postal_code?
###Code
bus['postal_code'].unique()
###Output
_____no_output_____
###Markdown
Question 3cLet's say we decide to exclude the businesses that have no zipcode for our analysis (which might include food trucks for example). Use the list of valid 5-digit zip codes below to create a new dataframe called bus_valid, with only businesses whose postal_codes show up in this list of valid zipcodes. How many businesses are there in this new dataframe?
###Code
validZip = ["94102", "94103", "94104", "94105", "94107", "94108",
"94109", "94110", "94111", "94112", "94114", "94115",
"94116", "94117", "94118", "94121", "94122", "94123",
"94124", "94127", "94131", "94132", "94133", "94134"]
vb = bus[bus['postal_code'].isin(validZip)]
###Output
_____no_output_____
###Markdown
Latitude and LongitudeAnother aspect of the data we want to consider is the prevalence of missing values. If many records have missing values then we might be concerned about whether the nonmissing values are representative of the population. Question 4 Consider the longitude and latitude in the business DataFrame. Question 4aHow many businesses are missing longitude values, working with only the businesses that are in the list of valid zipcodes?
###Code
vb[pd.isnull(vb['longitude'])]
###Output
_____no_output_____
###Markdown
Question 4bCreate a new dataframe with one row for each valid zipcode. The dataframe should include the following three columns:1. `postal_code`: Contains the zip codes in the `validZip` variable above.2. `null_lon`: The number of businesses in that zipcode with missing `longitude` values.3. `not_null_lon`: The number of businesses without missing `longitude` values.
###Code
null_lon = vb[pd.isnull(vb['longitude'])]['postal_code'].value_counts()
not_null_lon = vb[pd.notnull(vb['longitude'])]['postal_code'].value_counts()
df = pd.merge(null_lon,not_null_lon,left_index=True, right_index=True).reset_index()
df.columns = ['postal_code','null_lon','not_null_lon']
df
###Output
_____no_output_____
###Markdown
4c. Do any zip codes appear to have more than their 'fair share' of missing longitude? To answer this, you will want to compute the proportion of missing longitude values for each zip code, and print the proportion missing longitude, and print the top five zipcodes in descending order of proportion missing postal_code.
###Code
df['null_ratio'] = df['null_lon']/(df['null_lon']+df['not_null_lon'])
df.sort_values(['null_ratio'],ascending=False).head(5)
###Output
_____no_output_____
###Markdown
Investigate the inspection dataLet's now turn to the inspection DataFrame. Earlier, we found that `ins` has 4 columns, these are named `business_id`, `score`, `date` and `type`. In this section, we determine the granularity of `ins` and investigate the kinds of information provided for the inspections. Question 5 Question 5aAs with the business data, assess whether there is one inspection record for each business, by counting how many rows are in the data and how many unique businesses there are in the data. If they are exactly the same number, it means there is only one inspection per business, clearly.
###Code
print(len(ins['business_id'].unique()))
print(len(bus['business_id'].unique()))
print(len(ins))
print(len(bus))
###Output
5730
6315
15430
6315
###Markdown
Question 5bWhat values does `type` take on? How many occurrences of each value are in the DataFrame? Create a new dataframe named `ins2` by copying `ins` and keeping only records with values of `type` that occur more than 10 times in the original table. In other words, eliminate records that have values of `type` that occur rarely (< 10 times). Check the result to make sure rare types are eliminated.
###Code
type_list = ins['type'].value_counts()[ins['type'].value_counts()>=10].index
ins2 = ins[ins['type'].isin(type_list)]
ins2
ins2['type'].value_counts()
###Output
_____no_output_____
###Markdown
Question 5cSince the data was stored in a .csv file, the dates are formatted as strings such as `20160503`. Once we read in the data, we would like to have dates in an appropriate format for analysis. Add a new column called `year` by capturing the first four characters of the date column. Hint: we have seen multiple ways of doing this in class, including `str` operations, `lambda` functions, `datetime` operations, and others. Choose the method that works best for you :)
###Code
ins['year'] = pd.to_datetime(ins.date,format='%Y%m%d').dt.year
ins
###Output
_____no_output_____
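An equivalent approach from the hint, using string slicing instead of `datetime` operations (shown here only as a sketch, not as part of the submitted answer):
```python
# Take the first four characters of the date (e.g. '20160503' -> 2016).
ins['year'] = ins['date'].astype(str).str[:4].astype(int)
```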
###Markdown
Question 5dWhat range of years is covered in this data set? Are there roughly same number of inspections each year? Try dropping records for any years with less than 50 inspections and store the result in a new dataframe named `ins3`.
###Code
ins['year'].unique()
ins.groupby(['year'])['business_id'].count()
year_list = ins.groupby(['year'])['business_id'].count()[ins.groupby(['year'])['business_id'].count()>50].index
ins3 = ins[ins['year'].isin(year_list)]
ins3
###Output
_____no_output_____
###Markdown
Let's examine only the inspections for one year: 2016. This puts businesses on a more equal footing because [inspection guidelines](https://www.sfdph.org/dph/eh/Food/Inspections.asp) generally refer to how many inspections should occur in a given year.
###Code
ins3[ins3['year']==2016].groupby(['business_id']).count()['score']
ins2016=ins3[ins3['year']==2016]
ins2016
###Output
_____no_output_____
###Markdown
Question 6 Question 6aMerge the business and 2016 inspections data, keeping all businesses regardless of whether they show up in the inspections file. Show the first several rows of the resulting dataframe.
###Code
bus_ins = pd.merge(bus,ins2016,on='business_id',how='left')
bus_ins.head()
###Output
_____no_output_____
###Markdown
Question 6bPrint the 20 lowest rated businesses names, their addresses, and their ratings.
###Code
bus_ins.sort_values(['score'],ascending=True).head(20)
###Output
_____no_output_____ |
notebooks/1.04-sfb-explore-line-lengths-enwiki-sql-dumps.ipynb | ###Markdown
Explore text of enwiki SQL dump .sql files(uncollapse for detailed code) setup libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
from file_read_backwards import FileReadBackwards
###Output
_____no_output_____
###Markdown
paths
###Code
dir_path = '../data/raw/enwiki/'
category_path = '../data/raw/enwiki/enwiki-latest-category.sql'
page_path = '../data/raw/enwiki/enwiki-latest-page.sql'
categorylinks_path = '../data/raw/enwiki/enwiki-latest-categorylinks.sql'
paths = [category_path, page_path, categorylinks_path]
filenames = [path.split(sep='/')[-1] for path in paths]
###Output
_____no_output_____
###Markdown
functions function to open a dumpfile
###Code
def open_dump(url):
"""
Open a mysql dumpfile for reading its text.
"""
return open(url, mode='r', encoding='UTF_8', errors='backslashreplace')
###Output
_____no_output_____
###Markdown
function to get lengths of each line in the sql files
###Code
def get_line_lengths(path:str, encoding:str='UTF_8', errors='backslashreplace', mode='r') -> pd.Series:
"""
Get line lengths from a potentially-large text file.
Input: filepath of text file
Output: pd.Series of integer line-lengths,
index is line-number
"""
d = {}
i = 0
with open(
path,
mode=mode,
encoding=encoding,
errors=errors
) as f:
while (x := f.readline()):
d[i] = len(x)
i += 1
return pd.Series(d, name='line_lengths')
###Output
_____no_output_____
###Markdown
function read_footer
###Code
def read_footer(path):
"""
Leverage the module: 'file_read_backwards' to read the last lines of
a dumpfile, up until the lines are longer than 200 chars.
"""
lines = []
with FileReadBackwards(path, encoding='utf-8') as frb:
while len(l := frb.readline()) < 200 :
if not l:
break
lines.append(l)
return list(reversed(lines))
###Output
_____no_output_____
###Markdown
check line lengths store line-length series for each file in a dict
###Code
dict_of_length_series = {}
for filename, path in zip(filenames, paths):
dict_of_length_series[filename] = get_line_lengths(path)
###Output
_____no_output_____
###Markdown
generate tables line counts
###Code
line_counts = pd.concat(
(pd.Series({i: len(dict_of_length_series[i])}) for i in dict_of_length_series),
axis = 0
).rename('line counts').to_frame()
line_counts
###Output
_____no_output_____
###Markdown
first few line lengths
###Code
head = pd.concat(
(dict_of_length_series[i].head(45).rename(i).to_frame() for i in dict_of_length_series),
axis = 1
).T.rename_axis('line-lengths')
head
###Output
_____no_output_____
###Markdown
last few line lengths
###Code
tail = pd.concat(
(dict_of_length_series[i].tail(45).reset_index(drop=True).rename(i).to_frame() for i in dict_of_length_series),
axis=1
)
tail.index = range(-45,0)
tail = tail.T.rename_axis('line-lengths')
tail
###Output
_____no_output_____
###Markdown
generate plots
###Code
figs = {}; axes = {}
for i in filenames:
figs[i] = plt.figure()
axes[i] = figs[i].gca()
dict_of_length_series[i].plot(ax=axes[i])
axes[i].set_xticks([])
axes[i].set_xlabel(f'rows of file: 1 to {len(dict_of_length_series[i])}')
axes[i].set_ylabel('line length')
axes[i].set_title(i)
plt.show()
###Output
_____no_output_____
###Markdown
peek at text of dumpfiles get ```start_insert_nums``` and ```start_footer_nums``` from line lengths
###Code
def get_first(condition, ser: pd.Series):
    """
    Returns the index of the first element of a pd.Series that satisfies a boolean condition.
    Inputs:
        condition (callable): function mapping a single value to True/False
        ser (pd.Series): series to find the first matching element of
    Example: get_first(lambda v: v > 200, dict_of_length_series[filenames[0]])
    """
    for i in ser.index:
        try:
            if condition(ser[i]):
                return i
        except Exception:
            continue
start_insert_nums = {}
start_footer_nums = {}
for i in filenames:
insert_indices = dict_of_length_series[i].to_frame().query('line_lengths > 200').index
start_insert_nums[i] = insert_indices[0]
start_footer_nums[i] = insert_indices[-1] + 1
dict_header_rows, dict_footer_rows = {}, {}
for name in filenames:
with open_dump(dir_path + name) as f:
header_rows, footer_rows = [], []
ct = 0
while ct < start_insert_nums[name]:
header_rows.append(f.readline())
ct += 1
while ct < start_footer_nums[name]:
f.readline()
ct += 1
while (line := f.readline()):
footer_rows.append(line)  # append the line just read in the while condition (the original skipped every other line)
dict_header_rows[name] = header_rows
dict_footer_rows[name] = footer_rows
print('\n'.join(dict_header_rows[filenames[0]]))
print('\n'.join(footer_rows))
###Output
_____no_output_____
###Markdown
peek at headers peek at footers peek at an interior row get first 240 characters of line 60 in each dumpfile
###Code
line_num = 60
dict_line60 = {}
for i in filenames:
with open_dump(dir_path + i) as f:
for j in range(line_num):
f.readline()
dict_line60[i] = [f.readline()[:80], f.readline()[80:160], f.readline()[160:240]]
###Output
_____no_output_____
###Markdown
print the first 240 characters of line 60 of each dumpfile
###Code
for name in filenames:
print(name)
print('\n\t'.join(dict_line60[name]))
print()
###Output
enwiki-latest-category.sql
INSERT INTO `category` VALUES (773529,'People_from_Fort_Worth',0,0,0),(773531,'C
1,0),(824007,'Pequot_War',27,0,0),(824009,'Cities_and_towns_in_Fatehgarh_Sahib_d
lms',0,0,0),(6585460,'B-Class_Saint_Lucia_articles',3,0,0),(6585574,'Subgroup_se
enwiki-latest-page.sql
INSERT INTO `page` VALUES (101540,1,'Susquehanna_County,_Pennsylvania','',0,0,0.
,'20211029153637','20211029183623',973939179,12667,'wikitext',NULL),(110842,0,'N
ikitext',NULL),(119449,0,'Lammers_Township,_Beltrami_County,_Minnesota','',0,0,0
enwiki-latest-categorylinks.sql
INSERT INTO `categorylinks` VALUES (9701,'Wikipedia_Version_1.0_articles','2LFN:
03:01:42',' ','uca-default-u-kn','page'),(10191,'Edo_period','20F\xdc','201
3,'CS1_German-language_sources_(de)','8:NPFLZF44:@B\xdc','2021-04-20 13:57
###Markdown
TL;DR Summary of file structure - **Headers and schema:** - from beginning of file to approx 41st row- **Data:** - formatted as sql INSERT commands - each INSERT row has ~10^6 characters - until the footers- **Footers:** - some footers display tables and plots display tables of line-counts, line-lengths
###Code
display(line_counts, head, tail)
###Output
_____no_output_____
###Markdown
display plots of the line lengths
###Code
for i in figs: display(figs[i])
###Output
_____no_output_____ |
rzt.ai.notebook-11.ipynb | ###Markdown
To make the libraries you have uploaded in the Library Manager available in this Notebook, run the command below to get started```run -i platform-libs/initialize.py```
###Code
run -i platform-libs/initialize.py
###Output
_____no_output_____ |
ucb/lora-analysis.ipynb | ###Markdown
LoRa Data Analysis - Upper Confidence Bound We first declare fixed parameters.Those parameters are not changed during the experiments.Fixed communication parameters are listed below:- Code Rate: 4/5- Frequency: 866.1 MHz- Bandwidth: 125 kHzEnd nodes:- were sending different types of uplink messages.- were sending a single message every 2 minutes.- used an upper confidence bound algorithm (UCB) for communication parameter selection.Access points:- only a single access point was used- capture effect was also considered Initial declaration
###Code
%matplotlib inline
import pandas as pd # import pandas
import numpy as np # import numpy
import matplotlib as mpl # import matplotlib
import matplotlib.pyplot as plt # import plotting module
import statistics
import math
import base64
from IPython.display import set_matplotlib_formats # module for svg export
output_format = 'svg'
set_matplotlib_formats(output_format) # set export to svg file
cut_ratio = 0.05 # Values below 5% of mean value are simply cut from charts to make it more readable
uplink_message_file = './18/uplink_messages.csv'
algorithm = 'ucb'
###Output
_____no_output_____
###Markdown
Analysis of Uplink Messages We read a csv file with uplink messages
###Code
uplink_data = pd.read_csv(uplink_message_file, delimiter=',')
###Output
_____no_output_____
###Markdown
Let us have a look at various columns that are present and can be evaluated.
###Code
uplink_data.head()
###Output
_____no_output_____
###Markdown
Remove all columns that have fixed values or there is no point in their analysis.
###Code
try:
del uplink_data['id']
del uplink_data['msg_group_number']
del uplink_data['is_primary']
del uplink_data['coderate']
del uplink_data['bandwidth']
del uplink_data['receive_time']
except KeyError:
print('Columns have already been removed')
###Output
_____no_output_____
###Markdown
Payload Length
###Code
uplink_data['payload_len'] = uplink_data.app_data.apply(len)
uplink_data.payload_len.describe()
payload_len = round(statistics.mean(uplink_data.payload_len), 2)
print(f'Mean value of payload length is {payload_len}.')
###Output
Mean value of payload length is 43.08.
###Markdown
Communication parameters selection Let us create a new column 'arm'. This column represents a combination of SF and TP and is referred to in multi-armed bandit terminology as an arm. A short sketch of how a UCB policy picks among these arms follows the utilization plot below.
###Code
uplink_data['arm'] = 'S' + uplink_data.spf.astype(str) + 'P' + uplink_data.power.astype(str)
arms = uplink_data.arm.value_counts()
threshold = round(statistics.mean(uplink_data.arm.value_counts()) * cut_ratio, 2)
print(f'Values below {threshold} will be cut from a plot')
arms = arms[arms > threshold]
hist = arms.plot(kind='bar',rot=0, color='b',figsize=(10,4))
hist.set_xlabel('Bandit Arm',fontsize=12)
hist.set_ylabel('Number of Messages',fontsize=12)
plt.title('Utilization of Bandit Arms')
plt.savefig(f'{algorithm}-bandit-arms.{output_format}', dpi=300)
plt.show()
###Output
_____no_output_____
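The arm counts above are produced by end nodes that choose an SF/TP combination with an upper confidence bound policy. As a rough illustration of how such a choice is typically made, here is a minimal UCB1 sketch; the arm names and reward numbers below are hypothetical, and the real firmware bookkeeping may differ:
```python
import math

def select_arm_ucb1(counts, rewards, total_plays):
    """Pick the arm maximising empirical mean reward plus an exploration bonus (UCB1)."""
    best_arm, best_score = None, -math.inf
    for arm in counts:
        if counts[arm] == 0:
            return arm  # play every arm at least once
        mean = rewards[arm] / counts[arm]  # e.g. fraction of acknowledged uplinks
        bonus = math.sqrt(2 * math.log(total_plays) / counts[arm])
        if mean + bonus > best_score:
            best_arm, best_score = arm, mean + bonus
    return best_arm

# Hypothetical statistics for three of the SF/TP arms seen above.
counts = {'S7P14': 120, 'S9P14': 40, 'S12P10': 15}
rewards = {'S7P14': 95.0, 'S9P14': 36.0, 'S12P10': 14.0}
print(select_arm_ucb1(counts, rewards, sum(counts.values())))
```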
###Markdown
Spreading Factor
###Code
hist = uplink_data.spf.value_counts().plot(kind='bar',rot=0,color='b',figsize=(6,3))
hist.set_xlabel('Spreading Factor',fontsize=12)
hist.set_ylabel('Number of Messages',fontsize=12)
plt.title('Utilization of Spreading Factor')
plt.show()
###Output
_____no_output_____
###Markdown
All nodes used the same frequency to increase the probability of collisions. We have only a single Access Point. Duration of Data Transmission
###Code
airtime = uplink_data.airtime.value_counts()
threshold = 100
airtime = airtime.loc[lambda x : x > threshold]
print(f'Values with low then {threshold} occurences will be cut from a plot')
hist = airtime.plot(kind='bar',rot=0,color='b')
hist.set_xlabel('Time over Air [ms]',fontsize=12)
hist.set_ylabel('Number of Messages',fontsize=12)
plt.title('Message Airtime')
plt.savefig(f'{algorithm}-airtime.{output_format}', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Transmission Power Only two transmission power values were possible in this scenario.To increase TP a value of 14 was used, to decrease TP a value of 10 was used.
###Code
hist = uplink_data.power.value_counts().plot(kind='bar',rot=0,color='b',figsize=(6,3))
hist.set_xlabel('Transmission Power [dBm]', fontsize=12)
hist.set_ylabel('Number of Messages', fontsize=12)
plt.title('Transmission Power Distribution')
plt.show()
###Output
_____no_output_____
###Markdown
Different Types of Messages Let us analyze the ratio of message types.
###Code
message_types = uplink_data.message_type_id.value_counts()
plt.pie(message_types, autopct='%1.1f%%', labels=['Emergency', 'Data'], colors=['b', 'orange'])
plt.title('Ratio of Various Message Types')
# Output is automatically exported
plt.savefig(f'{algorithm}-message-types.{output_format}', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Analysis of End Nodes Analysis of certain aspects (active time, sleep time and collisions) of end devices.
###Code
print(f'Number of end nodes participating in the experiment is {uplink_data.node_id.nunique()}.')
uplink_data.node_id.describe()
unique_ens = len(uplink_data.node_id.unique())
unique_aps = len(uplink_data.ap_id.unique())
print(f'Total number of connected end devices: {unique_ens}')
print(f'Total number of connected access points: {unique_aps}')
end_nodes = pd.read_csv(f'./18/{algorithm} - experiment - 18 - 1.csv', delimiter=',')
end_nodes.head()
###Output
_____no_output_____
###Markdown
Collision Histogram Cutting values is disabled.
###Code
no_collisions = end_nodes.collisions.value_counts()
threshold = statistics.mean(end_nodes.collisions.value_counts())
print(f'Values below {threshold} will be cut in a plot')
collisions = end_nodes.collisions[end_nodes.collisions > threshold]
collisions.describe()
max_collisions = max(collisions)
min_collisions = min(collisions)
range_collisions = max_collisions - min_collisions
increment = math.ceil(range_collisions / 4)
# out = pd.cut(collisions, bins=[min_collisions, min_collisions + increment, min_collisions + 2 * increment, min_collisions + 3 * increment, max(collisions)], include_lowest=True)
hist = collisions.value_counts().plot.bar(rot=0,color='b')
hist.set_xlabel('Number of Collisions',fontsize=12)
hist.set_ylabel('Number of Devices',fontsize=12)
plt.title('Collision Rate')
plt.savefig(f'{algorithm}-collisions.{output_format}', dpi=300)
plt.show()
mean_collisions = round(statistics.mean(collisions))
print(f'Mean collision number for each node was {mean_collisions}.')
###Output
Mean collision number for each node was 10.
###Markdown
Ratio between active time and total node uptime
###Code
energy = (end_nodes.active_time / end_nodes.uptime)
energy.describe()
active_time = round(statistics.mean(energy) * 100, 2)
print(f'The nodes spent {active_time}% of their uptime in active mode.')
###Output
The nodes spent 2.5% of their uptime in active mode.
###Markdown
Packet Delivery Ratio Evaluation of packet delivery ratio for end nodes. Add message count from uplink data and collisions.
###Code
data = uplink_data.node_id.value_counts()
nodes = pd.DataFrame({}, columns = ['dev_id', 'collisions', 'messages'])
collisions = []
messages = []
dev_id = []
for index,value in data.items():
dev_id.append(index)
collision_count = end_nodes.loc[end_nodes.dev_id == index].collisions.values[0]
collisions.append(collision_count)
messages.append(value + collision_count)
nodes['dev_id'] = dev_id
nodes['collisions'] = collisions
nodes['messages'] = messages
nodes['pdr'] = round((1 - (nodes.collisions / nodes.messages))*100, 2)
mean_pdr = round(statistics.mean(nodes.pdr), 2)
print(f'Mean value of Packet Delivery Ratio is {mean_pdr}%.')
max_pdr = max(nodes.pdr)
min_pdr = min(nodes.pdr)
range_pdr = max_pdr - min_pdr
increment = math.ceil(range_pdr / 4)
out = pd.cut(nodes.pdr, bins=[min_pdr, min_pdr + increment, min_pdr + 2 * increment, min_pdr + 3 * increment, max_pdr], include_lowest=True)
hist = out.value_counts().plot.bar(rot=0,color='b',figsize=(6,3))
hist.set_xlabel('Packet Delivery Ratio [%]',fontsize=12)
hist.set_ylabel('Number of Devices',fontsize=12)
plt.title('Packet Delivery Ratio')
plt.savefig(f'{algorithm}-pdr.{output_format}', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Path of Each End Node Data about position is encoded as base64. Decode the base64 payload, extract the position, and save the results to the original data frame.
###Code
try:
app_data = uplink_data.app_data.apply(base64.b64decode)
app_data = app_data.astype(str)
app_data = app_data.str.split(',')
df = pd.DataFrame({}, columns = ['node_id', 'x', 'y'])
x = []
y = []
for row in app_data:
x.append(round(float(row[1].split('\'')[0]), 2))
y.append(round(float(row[0].split('\'')[1]), 2))
uplink_data['x'] = x
uplink_data['y'] = y
del uplink_data['app_data']
except KeyError:
print('Column has already been removed')
uplink_data.head()
###Output
_____no_output_____
###Markdown
Now, we draw a path for each end node based on the received coordinates.
###Code
unique_ens = len(uplink_data.node_id.unique())
cmap = mpl.cm.get_cmap('PuBu')
xlim = 10000
ylim = 10000
for i in range(0, unique_ens):
data = uplink_data[uplink_data.node_id == uplink_data.node_id[i]]
plt.plot(data.x, data.y, color=cmap(i / unique_ens))
# Add Access Point
plt.plot(xlim / 2, ylim / 2, '+', mew=10, ms=2, color='black')
plt.title('Path of Each End Node')
plt.ylabel('y [m]')
plt.xlabel('x [m]')
plt.xlim([0,xlim])
plt.ylim([0,ylim])
# Figure is automatically saved
plt.savefig(f'{algorithm}-path.{output_format}', dpi=300)
plt.show()
###Output
_____no_output_____ |
docs/load-normalizer.ipynb | ###Markdown
Normalizer This tutorial is available as an IPython notebook at [Malaya/example/normalizer](https://github.com/huseinzol05/Malaya/tree/master/example/normalizer).
###Code
%%time
import malaya
string1 = 'xjdi ke, y u xsuke makan HUSEIN kt situ tmpt, i hate it. pelikle, pada'
string2 = 'i mmg2 xske mknn HUSEIN kampng tmpat, i love them. pelikle saye'
string3 = 'perdana menteri ke11 sgt suka makn ayam, harganya cuma rm15.50'
string4 = 'pada 10/4, kementerian mengumumkan, 1/100'
string5 = 'Husein Zolkepli dapat tempat ke-12 lumba lari hari ni'
string6 = 'Husein Zolkepli (2011 - 2019) adalah ketua kampng di kedah sekolah King Edward ke-IV'
string7 = '2jam 30 minit aku tunggu kau, 60.1 kg kau ni, suhu harini 31.2c, aku dahaga minum 600ml'
###Output
_____no_output_____
###Markdown
Load normalizerThis normalizer can load any spelling correction model, eg, `malaya.spell.probability`, or `malaya.spell.transformer`.```pythondef normalizer(speller = None, **kwargs): """ Load a Normalizer using any spelling correction model. Parameters ---------- speller : spelling correction object, optional (default = None) Returns ------- result: malaya.normalize.Normalizer class """```
###Code
corrector = malaya.spell.probability()
normalizer = malaya.normalize.normalizer(corrector)
###Output
_____no_output_____
###Markdown
normalize```pythondef normalize( self, string: str, normalize_text: bool = True, normalize_entity: bool = True, normalize_url: bool = False, normalize_email: bool = False, normalize_year: bool = True, normalize_telephone: bool = True, normalize_date: bool = True, normalize_time: bool = True, check_english_func: Callable = is_english, check_malay_func: Callable = is_malay,): """ Normalize a string. Parameters ---------- string : str normalize_text: bool, (default=True) if True, will try to replace shortforms with internal corpus. normalize_entity: bool, (default=True) normalize entities, only effect `date`, `datetime`, `time` and `money` patterns string only. normalize_url: bool, (default=False) if True, replace `://` with empty and `.` with `dot`. `https://huseinhouse.com` -> `https huseinhouse dot com`. normalize_email: bool, (default=False) if True, replace `@` with `di`, `.` with `dot`. `[email protected]` -> `husein dot zol kosong lima di gmail dot com`. normalize_year: bool, (default=True) if True, `tahun 1987` -> `tahun sembilan belas lapan puluh tujuh`. if True, `1970-an` -> `sembilan belas tujuh puluh an`. if False, `tahun 1987` -> `tahun seribu sembilan ratus lapan puluh tujuh`. normalize_telephone: bool, (default=True) if True, `no 012-1234567` -> `no kosong satu dua, satu dua tiga empat lima enam tujuh` normalize_date: bool, (default=True) if True, `01/12/2001` -> `satu disember dua ribu satu`. normalize_time: bool, (default=True) if True, `pukul 2.30` -> `pukul dua tiga puluh minit`. check_english_func: Callable, (default=malaya.text.is_english) function to check a word in english dictionary, default is malaya.text.is_english. check_malay_func: Callable, (default=malaya.text.is_malay) function to check a word in malay dictionary, default is malaya.text.is_malay. Returns ------- string: {'normalize', 'date', 'money'} """```
###Code
string = 'boleh dtg 8pagi esok tak atau minggu depan? 2 oktober 2019 2pm, tlong bayar rm 3.2k sekali tau'
normalizer.normalize(string)
normalizer.normalize(string, normalize_entity = False)
###Output
_____no_output_____
###Markdown
Here you can see that the Malaya normalizer will normalize `minggu depan` to a datetime object, and `3.2k ringgit` to `RM3200`
###Code
print(normalizer.normalize(string1))
print(normalizer.normalize(string2))
print(normalizer.normalize(string3))
print(normalizer.normalize(string4))
print(normalizer.normalize(string5))
print(normalizer.normalize(string6))
print(normalizer.normalize(string7))
###Output
{'normalize': 'tak jadi ke , kenapa awak tak suka makan HUSEIN kat situ tempat , saya hate it . pelik lah , pada', 'date': {}, 'money': {}}
{'normalize': 'saya memang-memang tak suka makan HUSEIN kampung tempat , saya love them . pelik lah saya', 'date': {}, 'money': {}}
{'normalize': 'perdana menteri kesebelas sangat suka makan ayam , harganya cuma lima belas ringgit lima puluh sen', 'date': {}, 'money': {'rm15.50': 'RM15.50'}}
{'normalize': 'pada sepuluh hari bulan empat , kementerian mengumumkan , satu per seratus', 'date': {}, 'money': {}}
{'normalize': 'Husein Zolkepli dapat tempat kedua belas lumba lari hari ini', 'date': {}, 'money': {}}
{'normalize': 'Husein Zolkepli ( dua ribu sebelas hingga dua ribu sembilan belas ) adalah ketua kampung di kedah sekolah King Edward keempat', 'date': {}, 'money': {}}
{'normalize': 'dua jam tiga puluh minit aku tunggu kamu , enam puluh perpuluhan satu kilogram kamu ini , suhu hari ini tiga puluh satu perpuluhan dua celsius , aku dahaga minum enam ratus milliliter', 'date': {}, 'money': {}}
###Markdown
Skip spelling correctionSimply pass `None` as `speller` to `malaya.normalize.normalizer`. By default it is `None`.
###Code
normalizer = malaya.normalize.normalizer(corrector)
without_corrector_normalizer = malaya.normalize.normalizer(None)
normalizer.normalize(string2)
without_corrector_normalizer.normalize(string2)
###Output
_____no_output_____
###Markdown
Pass kwargs preprocessingLet's say you want to skip normalizing the date pattern; you can pass kwargs to the normalizer, check the original tokenizer implementation at https://github.com/huseinzol05/Malaya/blob/master/malaya/preprocessing.py#L103
###Code
normalizer = malaya.normalize.normalizer(corrector)
skip_date_normalizer = malaya.normalize.normalizer(corrector, date = False)
normalizer.normalize('tarikh program tersebut 14 mei')
skip_date_normalizer.normalize('tarikh program tersebut 14 mei')
###Output
_____no_output_____
###Markdown
Normalize urlLet's say you have a `url` word, for example `https://huseinhouse.com`; this parameter is going to:1. replace `://` with empty string.2. replace `.` with ` dot `.3. replace digits with string representation.4. Capitalize `https`, `http`, and `www`.Simply `normalizer.normalize(string, normalize_url = True)`, default is `False`.
###Code
normalizer = malaya.normalize.normalizer()
normalizer.normalize('web saya ialah https://huseinhouse.com')
normalizer.normalize('web saya ialah https://huseinhouse.com', normalize_url = True)
normalizer.normalize('web saya ialah https://huseinhouse02934.com', normalize_url = True)
###Output
_____no_output_____
###Markdown
Normalize emailLet say you have an `email` word, example, `[email protected]`, this parameter going to,1. replace `://` with empty string.2. replace `.` with ` dot `.3. replace `@` with ` di `.4. replace digits with string representation.Simply `normalizer.normalize(string, normalize_email = True)`, default is `False`.
###Code
normalizer = malaya.normalize.normalizer()
normalizer.normalize('email saya ialah [email protected]')
normalizer = malaya.normalize.normalizer()
normalizer.normalize('email saya ialah [email protected]', normalize_email = True)
###Output
_____no_output_____
###Markdown
Normalize year1. if True, `tahun 1987` -> `tahun sembilan belas lapan puluh tujuh`.2. if True, `1970-an` -> `sembilan belas tujuh puluh an`.3. if False, `tahun 1987` -> `tahun seribu sembilan ratus lapan puluh tujuh`.Simply `normalizer.normalize(string, normalize_year = True)`, default is `True`.
###Code
normalizer = malaya.normalize.normalizer()
normalizer.normalize('$400 pada tahun 1998 berbanding lebih $1000')
normalizer.normalize('$400 pada 1970-an berbanding lebih $1000')
normalizer.normalize('$400 pada tahun 1970-an berbanding lebih $1000')
normalizer.normalize('$400 pada tahun 1998 berbanding lebih $1000', normalize_year = False)
###Output
_____no_output_____
###Markdown
Normalize telephone1. if True, `no 012-1234567` -> `no kosong satu dua, satu dua tiga empat lima enam tujuh`.Simply `normalizer.normalize(string, normalize_telephone = True)`, default is `True`.
###Code
normalizer = malaya.normalize.normalizer()
normalizer.normalize('no saya 012-1234567')
normalizer.normalize('no saya 012-1234567', normalize_telephone = False)
###Output
_____no_output_____
###Markdown
Normalize date1. if True, `01/12/2001` -> `satu disember dua ribu satu`.Simply `normalizer.normalize(string, normalize_date = True)`, default is `True`.
###Code
normalizer = malaya.normalize.normalizer()
normalizer.normalize('saya akan gerak pada 1/11/2021')
normalizer = malaya.normalize.normalizer()
normalizer.normalize('saya akan gerak pada 1/11/2021', normalize_date = False)
###Output
_____no_output_____
###Markdown
Normalize time1. if True, `pukul 2.30` -> `pukul dua tiga puluh minit`.Simply `normalizer.normalize(string, normalize_time = True)`, default is `True`.
###Code
s = 'Operasi tamat sepenuhnya pada pukul 1.30 tengah hari'
normalizer = malaya.normalize.normalizer()
normalizer.normalize(s, normalize_time = True)
s = 'Operasi tamat sepenuhnya pada pukul 1:30:50 tengah hari'
normalizer = malaya.normalize.normalizer()
normalizer.normalize(s, normalize_time = True)
s = 'Operasi tamat sepenuhnya pada pukul 1.30 tengah hari'
normalizer = malaya.normalize.normalizer()
normalizer.normalize(s, normalize_time = False)
###Output
_____no_output_____
###Markdown
Ignore normalize money Let's say I have a text that contains `RM 77 juta` and I wish to maintain it like that.
###Code
text = 'Suatu ketika rakyat Malaysia dikejutkan dengan kontrak pelantikan sebanyak hampir RM 77 juta setahun yang hanya terdedah apabila diasak oleh Datuk Seri Anwar Ibrahim.'
normalizer = malaya.normalize.normalizer()
normalizer.normalize(text)
normalizer = malaya.normalize.normalizer(money = False)
normalizer.normalize(text, normalize_text = False, check_english_func = None)
normalizer.normalize(text, normalize_text = False, check_english_func = None)
###Output
_____no_output_____
###Markdown
Normalizing rules**All these rules will be ignored if the first letter is capitalized, except for normalizing titles.** 1. Normalize title,```python{ 'dr': 'Doktor', 'yb': 'Yang Berhormat', 'hj': 'Haji', 'ybm': 'Yang Berhormat Mulia', 'tyt': 'Tuan Yang Terutama', 'yab': 'Yang Berhormat', 'ybm': 'Yang Berhormat Mulia', 'yabhg': 'Yang Amat Berbahagia', 'ybhg': 'Yang Berbahagia', 'miss': 'Cik',}```
###Code
normalizer = malaya.normalize.normalizer()
normalizer.normalize('Dr yahaya')
###Output
_____no_output_____
###Markdown
2. expand `x`
###Code
normalizer.normalize('xtahu')
###Output
_____no_output_____
###Markdown
3. normalize `ke -`
###Code
normalizer.normalize('ke-12')
normalizer.normalize('ke - 12')
###Output
_____no_output_____
###Markdown
4. normalize `ke - roman`
###Code
normalizer.normalize('ke-XXI')
normalizer.normalize('ke - XXI')
###Output
_____no_output_____
###Markdown
5. normalize `NUM - NUM`
###Code
normalizer.normalize('2011 - 2019')
normalizer.normalize('2011.01-2019')
###Output
_____no_output_____
###Markdown
6. normalize `pada NUM (/ | -) NUM`
###Code
normalizer.normalize('pada 10/4')
normalizer.normalize('PADA 10 -4')
###Output
_____no_output_____
###Markdown
7. normalize `NUM / NUM`
###Code
normalizer.normalize('10 /4')
###Output
_____no_output_____
###Markdown
8. normalize `rm NUM`
###Code
normalizer.normalize('RM10.5')
###Output
_____no_output_____
###Markdown
9. normalize `rm NUM sen`
###Code
normalizer.normalize('rm 10.5 sen')
###Output
_____no_output_____
###Markdown
10. normalize `NUM sen`
###Code
normalizer.normalize('1015 sen')
###Output
_____no_output_____
###Markdown
11. normalize money
###Code
normalizer.normalize('rm10.4m')
normalizer.normalize('$10.4K')
###Output
_____no_output_____
###Markdown
12. normalize cardinal
###Code
normalizer.normalize('123')
###Output
_____no_output_____
###Markdown
13. normalize ordinal
###Code
normalizer.normalize('ke123')
###Output
_____no_output_____
###Markdown
14. normalize date / time / datetime string to datetime.datetime
###Code
normalizer.normalize('2 hari lepas')
normalizer.normalize('esok')
normalizer.normalize('okt 2019')
normalizer.normalize('2pgi')
normalizer.normalize('pukul 8 malam')
normalizer.normalize('jan 2 2019 12:01pm')
normalizer.normalize('2 ptg jan 2 2019')
###Output
_____no_output_____
###Markdown
15. normalize money string to string number representation
###Code
normalizer.normalize('50 sen')
normalizer.normalize('20.5 ringgit')
normalizer.normalize('20m ringgit')
normalizer.normalize('22.5123334k ringgit')
###Output
_____no_output_____
###Markdown
16. normalize date string to %d/%m/%y
###Code
normalizer.normalize('1 nov 2019')
normalizer.normalize('januari 1 1996')
normalizer.normalize('januari 2019')
###Output
_____no_output_____
###Markdown
17. normalize time string to %H:%M:%S
###Code
normalizer.normalize('2pm')
normalizer.normalize('2:01pm')
normalizer.normalize('2AM')
###Output
_____no_output_____
###Markdown
18. expand repetition shortform
###Code
normalizer.normalize('skit2')
normalizer.normalize('xskit2')
normalizer.normalize('xjdi2')
normalizer.normalize('xjdi4')
normalizer.normalize('xjdi0')
normalizer.normalize('xjdi')
###Output
_____no_output_____
###Markdown
19. normalize `NUM SI-UNIT`
###Code
normalizer.normalize('61.2 kg')
normalizer.normalize('61.2kg')
normalizer.normalize('61kg')
normalizer.normalize('61ml')
normalizer.normalize('61m')
normalizer.normalize('61.3434km')
normalizer.normalize('61.3434c')
normalizer.normalize('61.3434 c')
###Output
_____no_output_____
###Markdown
20. normalize `laughing` pattern
###Code
normalizer.normalize('dia sakai wkwkwkawkw')
normalizer.normalize('dia sakai hhihihu')
###Output
_____no_output_____
###Markdown
21. normalize `mengeluh` pattern
###Code
normalizer.normalize('Haih apa lah si yusuff ni . Mama cari rupanya celah ni')
normalizer.normalize('hais sorrylah syazzz')
###Output
_____no_output_____
###Markdown
22. normalize `percent` pattern
###Code
normalizer.normalize('0.8%')
normalizer.normalize('1213.1012312%')
###Output
_____no_output_____
###Markdown
Normalizer This tutorial is available as an IPython notebook at [Malaya/example/normalizer](https://github.com/huseinzol05/Malaya/tree/master/example/normalizer).
###Code
%%time
import malaya
string1 = 'xjdi ke, y u xsuke makan HUSEIN kt situ tmpt, i hate it. pelikle, pada'
string2 = 'i mmg2 xske mknn HUSEIN kampng tmpat, i love them. pelikle saye'
string3 = 'perdana menteri ke11 sgt suka makn ayam, harganya cuma rm15.50'
string4 = 'pada 10/4, kementerian mengumumkan, 1/100'
string5 = 'Husein Zolkepli dapat tempat ke-12 lumba lari hari ni'
string6 = 'Husein Zolkepli (2011 - 2019) adalah ketua kampng di kedah sekolah King Edward ke-IV'
string7 = '2jam 30 minit aku tunggu kau, 60.1 kg kau ni, suhu harini 31.2c, aku dahaga minum 600ml'
###Output
_____no_output_____
###Markdown
Load normalizerThis normalizer can load any spelling correction model, eg, `malaya.spell.probability`, or `malaya.spell.transformer`.
###Code
corrector = malaya.spell.probability()
normalizer = malaya.normalize.normalizer(corrector)
###Output
_____no_output_____
###Markdown
normalize```pythondef normalize( self, string: str, check_english: bool = True, normalize_text: bool = True, normalize_entity: bool = True, normalize_url: bool = False, normalize_email: bool = False, normalize_year: bool = True, normalize_telephone: bool = True, logging: bool = False,): """ Normalize a string. Parameters ---------- string : str check_english: bool, (default=True) check a word in english dictionary. normalize_text: bool, (default=True) if True, will try to replace shortforms with internal corpus. normalize_entity: bool, (default=True) normalize entities, only effect `date`, `datetime`, `time` and `money` patterns string only. normalize_url: bool, (default=False) if True, replace `://` with empty and `.` with `dot`. `https://huseinhouse.com` -> `https huseinhouse dot com`. normalize_email: bool, (default=False) if True, replace `@` with `di`, `.` with `dot`. `[email protected]` -> `husein dot zol kosong lima di gmail dot com`. normalize_year: bool, (default=True) if True, `tahun 1987` -> `tahun sembilan belas lapan puluh tujuh`. if True, `1970-an` -> `sembilan belas tujuh puluh an`. if False, `tahun 1987` -> `tahun seribu sembilan ratus lapan puluh tujuh`. normalize_telephone: bool, (default=True) if True, `no 012-1234567` -> `no kosong satu dua, satu dua tiga empat lima enam tujuh` logging: bool, (default=False) if True, will log index and token queue using `logging.warn`. Returns ------- string: normalized string """```
###Code
string = 'boleh dtg 8pagi esok tak atau minggu depan? 2 oktober 2019 2pm, tlong bayar rm 3.2k sekali tau'
normalizer.normalize(string)
normalizer.normalize(string, normalize_entity = False)
###Output
_____no_output_____
###Markdown
Here you can see that the Malaya normalizer will normalize `minggu depan` to a datetime object, and `3.2k ringgit` to `RM3200`
###Code
print(normalizer.normalize(string1))
print(normalizer.normalize(string2))
print(normalizer.normalize(string3))
print(normalizer.normalize(string4))
print(normalizer.normalize(string5))
print(normalizer.normalize(string6))
print(normalizer.normalize(string7))
###Output
{'normalize': 'tak jadi ke , kenapa awak tak suka makan HUSEIN kat situ tempat , saya hate it . pelik lah , pada', 'date': {}, 'money': {}}
{'normalize': 'saya memang-memang tak suka makan HUSEIN kampung tempat , saya love them . pelik lah saya', 'date': {}, 'money': {}}
{'normalize': 'perdana menteri kesebelas sangat suka makan ayam , harganya cuma lima belas ringgit lima puluh sen', 'date': {}, 'money': {'rm15.50': 'RM15.50'}}
{'normalize': 'pada sepuluh hari bulan empat , kementerian mengumumkan , satu per seratus', 'date': {}, 'money': {}}
{'normalize': 'Husein Zolkepli dapat tempat kedua belas lumba lari hari ini', 'date': {}, 'money': {}}
{'normalize': 'Husein Zolkepli ( dua ribu sebelas hingga dua ribu sembilan belas ) adalah ketua kampung di kedah sekolah King Edward keempat', 'date': {}, 'money': {}}
{'normalize': 'dua jam tiga puluh minit aku tunggu kamu , enam puluh perpuluhan satu kilogram kamu ini , suhu hari ini tiga puluh satu perpuluhan dua celsius , aku dahaga minum enam ratus milliliter', 'date': {}, 'money': {}}
###Markdown
Skip spelling correctionSimply pass `None` as the `speller` argument to `malaya.normalize.normalizer`. By default it is `None`.
###Code
normalizer = malaya.normalize.normalizer(corrector)
without_corrector_normalizer = malaya.normalize.normalizer(None)
normalizer.normalize(string2)
without_corrector_normalizer.normalize(string2)
###Output
_____no_output_____
###Markdown
Pass kwargs preprocessingLet's say you want to skip normalizing the date pattern; you can pass kwargs to the normalizer. Check the original tokenizer implementation at https://github.com/huseinzol05/Malaya/blob/master/malaya/preprocessing.py#L103
###Code
normalizer = malaya.normalize.normalizer(corrector)
skip_date_normalizer = malaya.normalize.normalizer(corrector, date = False)
normalizer.normalize('tarikh program tersebut 14 mei')
skip_date_normalizer.normalize('tarikh program tersebut 14 mei')
###Output
_____no_output_____
###Markdown
Normalize urlLet's say you have a `url` word, for example `https://huseinhouse.com`; this parameter is going to,1. replace `://` with an empty string.2. replace `.` with ` dot `.3. replace digits with their string representation.Simply `normalizer.normalize(string, normalize_url = True)`, default is `False`.
###Code
normalizer = malaya.normalize.normalizer()
normalizer.normalize('web saya ialah https://huseinhouse.com')
normalizer.normalize('web saya ialah https://huseinhouse.com', normalize_url = True)
normalizer.normalize('web saya ialah https://huseinhouse02934.com', normalize_url = True)
###Output
_____no_output_____
###Markdown
Normalize emailLet's say you have an `email` word, for example `[email protected]`; this parameter is going to,1. replace `://` with an empty string.2. replace `.` with ` dot `.3. replace `@` with ` di `.4. replace digits with their string representation.Simply `normalizer.normalize(string, normalize_email = True)`, default is `False`.
###Code
normalizer = malaya.normalize.normalizer()
normalizer.normalize('email saya ialah [email protected]')
normalizer = malaya.normalize.normalizer()
normalizer.normalize('email saya ialah [email protected]', normalize_email = True)
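# Both flags can be combined in a single call (sketch): normalize an email and a url in the same string.
normalizer.normalize('email saya ialah [email protected] dan web saya https://huseinhouse.com', normalize_email = True, normalize_url = True)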
###Output
_____no_output_____
###Markdown
Normalize year1. if True, `tahun 1987` -> `tahun sembilan belas lapan puluh tujuh`.2. if True, `1970-an` -> `sembilan belas tujuh puluh an`.3. if False, `tahun 1987` -> `tahun seribu sembilan ratus lapan puluh tujuh`.Simply `normalizer.normalize(string, normalize_year = True)`, default is `True`.
###Code
normalizer = malaya.normalize.normalizer()
normalizer.normalize('$400 pada tahun 1998 berbanding lebih $1000')
normalizer.normalize('$400 pada 1970-an berbanding lebih $1000')
normalizer.normalize('$400 pada tahun 1970-an berbanding lebih $1000')
normalizer.normalize('$400 pada tahun 1998 berbanding lebih $1000', normalize_year = False)
###Output
_____no_output_____
###Markdown
Normalize telephone1. if True, `no 012-1234567` -> `no kosong satu dua, satu dua tiga empat lima enam tujuh`.Simply `normalizer.normalize(string, normalize_telephone = True)`, default is `True`.
###Code
normalizer = malaya.normalize.normalizer()
normalizer.normalize('no saya 012-1234567')
normalizer.normalize('no saya 012-1234567', normalize_telephone = False)
###Output
_____no_output_____
###Markdown
Ignore normalize money Let's say I have a text that contains `RM 77 juta` and I wish to keep it like that.
###Code
text = 'Suatu ketika rakyat Malaysia dikejutkan dengan kontrak pelantikan sebanyak hampir RM 77 juta setahun yang hanya terdedah apabila diasak oleh Datuk Seri Anwar Ibrahim.'
normalizer = malaya.normalize.normalizer()
normalizer.normalize(text)
normalizer = malaya.normalize.normalizer(money = False)
normalizer.normalize(text, normalize_text = False, check_english = False)
normalizer.normalize(text, normalize_text = False, check_english = False)
###Output
_____no_output_____
###Markdown
Normalizing rules**All these rules will be ignored if the first letter is capitalized, except for normalizing titles.** 1. Normalize title,```python{ 'dr': 'Doktor', 'yb': 'Yang Berhormat', 'hj': 'Haji', 'ybm': 'Yang Berhormat Mulia', 'tyt': 'Tuan Yang Terutama', 'yab': 'Yang Berhormat', 'ybm': 'Yang Berhormat Mulia', 'yabhg': 'Yang Amat Berbahagia', 'ybhg': 'Yang Berbahagia', 'miss': 'Cik',}```
###Code
normalizer = malaya.normalize.normalizer()
normalizer.normalize('Dr yahaya')
###Output
_____no_output_____
###Markdown
2. expand `x`
###Code
normalizer.normalize('xtahu')
###Output
_____no_output_____
###Markdown
3. normalize `ke -`
###Code
normalizer.normalize('ke-12')
normalizer.normalize('ke - 12')
###Output
_____no_output_____
###Markdown
4. normalize `ke - roman`
###Code
normalizer.normalize('ke-XXI')
normalizer.normalize('ke - XXI')
###Output
_____no_output_____
###Markdown
5. normalize `NUM - NUM`
###Code
normalizer.normalize('2011 - 2019')
normalizer.normalize('2011.01-2019')
###Output
_____no_output_____
###Markdown
6. normalize `pada NUM (/ | -) NUM`
###Code
normalizer.normalize('pada 10/4')
normalizer.normalize('PADA 10 -4')
###Output
_____no_output_____
###Markdown
7. normalize `NUM / NUM`
###Code
normalizer.normalize('10 /4')
###Output
_____no_output_____
###Markdown
8. normalize `rm NUM`
###Code
normalizer.normalize('RM10.5')
###Output
_____no_output_____
###Markdown
9. normalize `rm NUM sen`
###Code
normalizer.normalize('rm 10.5 sen')
###Output
_____no_output_____
###Markdown
10. normalize `NUM sen`
###Code
normalizer.normalize('1015 sen')
###Output
_____no_output_____
###Markdown
11. normalize money
###Code
normalizer.normalize('rm10.4m')
normalizer.normalize('$10.4K')
###Output
_____no_output_____
###Markdown
12. normalize cardinal
###Code
normalizer.normalize('123')
###Output
_____no_output_____
###Markdown
13. normalize ordinal
###Code
normalizer.normalize('ke123')
###Output
_____no_output_____
###Markdown
14. normalize date / time / datetime string to datetime.datetime
###Code
normalizer.normalize('2 hari lepas')
normalizer.normalize('esok')
normalizer.normalize('okt 2019')
normalizer.normalize('2pgi')
normalizer.normalize('pukul 8 malam')
normalizer.normalize('jan 2 2019 12:01pm')
normalizer.normalize('2 ptg jan 2 2019')
###Output
_____no_output_____
###Markdown
15. normalize money string to string number representation
###Code
normalizer.normalize('50 sen')
normalizer.normalize('20.5 ringgit')
normalizer.normalize('20m ringgit')
normalizer.normalize('22.5123334k ringgit')
###Output
_____no_output_____
###Markdown
16. normalize date string to %d/%m/%y
###Code
normalizer.normalize('1 nov 2019')
normalizer.normalize('januari 1 1996')
normalizer.normalize('januari 2019')
###Output
_____no_output_____
###Markdown
17. normalize time string to %H:%M:%S
###Code
normalizer.normalize('2pm')
normalizer.normalize('2:01pm')
normalizer.normalize('2AM')
###Output
_____no_output_____
###Markdown
18. expand repetition shortform
###Code
normalizer.normalize('skit2')
normalizer.normalize('xskit2')
normalizer.normalize('xjdi2')
normalizer.normalize('xjdi4')
normalizer.normalize('xjdi0')
normalizer.normalize('xjdi')
###Output
_____no_output_____
###Markdown
19. normalize `NUM SI-UNIT`
###Code
normalizer.normalize('61.2 kg')
normalizer.normalize('61.2kg')
normalizer.normalize('61kg')
normalizer.normalize('61ml')
normalizer.normalize('61m')
normalizer.normalize('61.3434km')
normalizer.normalize('61.3434c')
normalizer.normalize('61.3434 c')
###Output
_____no_output_____
###Markdown
20. normalize `laughing` pattern
###Code
normalizer.normalize('dia sakai wkwkwkawkw')
normalizer.normalize('dia sakai hhihihu')
###Output
_____no_output_____
###Markdown
21. normalize `mengeluh` pattern
###Code
normalizer.normalize('Haih apa lah si yusuff ni . Mama cari rupanya celah ni')
normalizer.normalize('hais sorrylah syazzz')
###Output
_____no_output_____ |
architectures/Python-Keras-RealTimeServing/{{cookiecutter.project_name}}/aks/07_RealTimeScoring.ipynb | ###Markdown
Test deployed web application This notebook pulls some images and tests them against the deployed web application on AKS.
###Code
import matplotlib.pyplot as plt
import numpy as np
import requests
from testing_utilities import to_img, plot_predictions, get_auth, read_image_from
from azureml.core.workspace import Workspace
from azureml.core.webservice import AksWebservice
from dotenv import set_key, get_key, find_dotenv
env_path = find_dotenv(raise_error_if_not_found=True)
###Output
_____no_output_____
###Markdown
Get the external URL for the web application running on the AKS cluster.
###Code
ws = Workspace.from_config(auth=get_auth())
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep="\n")
###Output
Found the config file in: /home/mat/repos/AMLAKSDeploy/Keras_Tensorflow/aml_config/config.json
workspace
deploykerasrg
eastus
edf507a2-6235-46c5-b560-fd463ba2e771
###Markdown
Let's retrieve the web service.
###Code
aks_service_name = get_key(env_path, 'aks_service_name')
aks_service = AksWebservice(ws, name=aks_service_name)
aks_service.state
scoring_url = aks_service.scoring_uri
api_key = aks_service.get_keys()[0]
###Output
_____no_output_____
###Markdown
Pull an image of a Lynx to test it with.
###Code
IMAGEURL = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Lynx_lynx_poing.jpg/220px-Lynx_lynx_poing.jpg"
plt.imshow(to_img(IMAGEURL))
headers = {'Authorization':('Bearer '+ api_key)}
img_data = read_image_from(IMAGEURL).read()
r = requests.post(scoring_url, files={'image':img_data}, headers=headers) # Run the request twice since the first time takes a
# little longer due to the loading of the model
%time r = requests.post(scoring_url, files={'image':img_data}, headers=headers)
r.json()
###Output
CPU times: user 2.01 ms, sys: 235 µs, total: 2.24 ms
Wall time: 188 ms
###Markdown
From the results above we can see that the model correctly classifies this as a Lynx. Let's try a few more images.
###Code
images = ('https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Lynx_lynx_poing.jpg/220px-Lynx_lynx_poing.jpg',
'https://upload.wikimedia.org/wikipedia/commons/3/3a/Roadster_2.5_windmills_trimmed.jpg',
'http://www.worldshipsociety.org/wp-content/themes/construct/lib/scripts/timthumb/thumb.php?src=http://www.worldshipsociety.org/wp-content/uploads/2013/04/stock-photo-5495905-cruise-ship.jpg&w=570&h=370&zc=1&q=100',
'http://yourshot.nationalgeographic.com/u/ss/fQYSUbVfts-T7pS2VP2wnKyN8wxywmXtY0-FwsgxpiZv_E9ZfPsNV5B0ER8-bOdruvNfMD5EbP4SznWz4PYn/',
'https://cdn.arstechnica.net/wp-content/uploads/2012/04/bohol_tarsier_wiki-4f88309-intro.jpg',
'http://i.telegraph.co.uk/multimedia/archive/03233/BIRDS-ROBIN_3233998b.jpg')
results = [requests.post(scoring_url, files={'image': read_image_from(img).read()}, headers=headers) for img in images]
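# Sanity check (sketch): confirm every request above returned HTTP 200 before plotting.
assert all(r.status_code == 200 for r in results), [r.status_code for r in results]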
plot_predictions(images, results)
###Output
_____no_output_____
###Markdown
The labels predicted by our model seem to be consistent with the images supplied. Next let's quickly check what the request-response performance is for the deployed model on the AKS cluster.
###Code
image_data = list(map(lambda img: read_image_from(img).read(), images)) # Retrieve the images and data
timer_results = list()
for img in image_data:
res=%timeit -r 1 -o -q requests.post(scoring_url, files={'image': img}, headers=headers)
timer_results.append(res.best)
timer_results
print('Average time taken: {0:4.2f} ms'.format(10**3 * np.mean(timer_results)))
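# Besides the mean, a rough look at tail latency over the same runs (sketch using numpy's percentile).
print('P95 time taken: {0:4.2f} ms'.format(10**3 * np.percentile(timer_results, 95)))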
###Output
Average time taken: 318.05 ms
|
notebooks/ColorYourCharts.ipynb | ###Markdown
alternative Reds palette, desaturated by 0.5, 40 colors - Beware that palettes are case sensitive
###Code
%kql --palette -pn "Reds" -pd 0.5 -pc 40
###Output
_____no_output_____
###Markdown
Builtin palettes popup all builtin palettes- push the button to open a popup window with the palettes
###Code
%kql --palettes -popup_window
###Output
_____no_output_____
###Markdown
show all builtin palettes desaturated- set option -palette_desaturation or -pd with a value between 0 to 1
###Code
%kql --palettes -palette_desaturation 0.5
###Output
_____no_output_____
###Markdown
popup all builtin n_colors palettes desaturated- set option -palette_colors or -pc with a value greater than 0- set option -palette_desaturation or -pd with a value between 0 and 1- push the button to open a popup window with the palettes
###Code
%kql --palettes -pd 0.5 -pc 20 -pw
###Output
_____no_output_____
###Markdown
Configure default palette properties- palette_name- palette_desaturation- palette_colors show palette name of the default palette
###Code
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
modify the default palette
###Code
%config Kqlmagic.palette_name = 'Greens'
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
show palette desaturation default
###Code
%config Kqlmagic.palette_desaturation
###Output
_____no_output_____
###Markdown
modify palette desaturation
###Code
%config Kqlmagic.palette_desaturation = 0.95
%config Kqlmagic.palette_desaturation
###Output
_____no_output_____
###Markdown
show number of colors in default palette
###Code
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
modify number of colors in default palette
###Code
%config Kqlmagic.palette_colors = 6
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
show the default palette based on the new defaults
###Code
%kql --palette
%%kql -palette_desaturation 0.5
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
- *Note - The palette size extends automatically, if required by the chart (in the above chart it was extended from a 6-color palette to a 10-color palette)* Reverse color order of palette- Set option -palette_reverse / -pr
###Code
%%kql -palette_reverse
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
%config Kqlmagic.palette_colors = 20
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
Palette derivative names- basic names- subset names- reversed names- custom basic pastel palette
###Code
%kql --palette -pn "pastel"
###Output
_____no_output_____
###Markdown
- *Note - When not specified, palette_name, palette_colors and palette_desaturation default properties are used* subset derived from a basic palette
###Code
%kql --palette -pn "pastel[4:11]"
###Output
_____no_output_____
###Markdown
reversed from a basic palette
###Code
%kql --palette -pn "pastel_r"
###Output
_____no_output_____
###Markdown
reversed subset of a basic palette
###Code
%kql --palette -pn "pastel[4:11]_r"
###Output
_____no_output_____
###Markdown
piechart rendered using a derived palette
###Code
%%kql -pn "pastel[4:11]_r"
StormEvents
| summarize count() by State
| sort by count_
| limit 7
| render piechart
###Output
_____no_output_____
###Markdown
config default palette with a derived palette
###Code
%config Kqlmagic.palette_name = "pastel[4:11]_r"
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
barchart rendered using the new default palette
###Code
%%kql
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render barchart
###Output
_____no_output_____
###Markdown
Custom palette- define an alternative basic palette- have same characteristics as basic palette, a subset can be derived and/or reversed show custom palette- make sure that there are no white spaces in the string that defines the custom palette
###Code
%kql --palette -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)']"
###Output
_____no_output_____
###Markdown
show derived subset palette from the custom palette
###Code
%kql --palette -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]"
###Output
_____no_output_____
###Markdown
show derived reversed subset palette from custom palette
###Code
%kql --palette -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r"
###Output
_____no_output_____
###Markdown
columnchart rendered using a custom palette
###Code
%%kql -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r"
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render columnchart
###Output
_____no_output_____
###Markdown
set custom palette as default palette
###Code
%config Kqlmagic.palette_name = "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r"
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
barchart rendered using default palette (set to the custom chart)
###Code
%%kql
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render barchart
###Output
_____no_output_____
###Markdown
Python integration
###Code
%%kql -pn "Spectral" -pd 0.95 -pc 10
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
get last kql result palette
###Code
_kql_raw_result_.palette
###Output
_____no_output_____
###Markdown
get a slice from palette- behave as list of colors
###Code
_kql_raw_result_.palette[3:]
###Output
_____no_output_____
###Markdown
get one color from palette
###Code
_kql_raw_result_.palette[7]
###Output
_____no_output_____
###Markdown
get the palette raw rgb data
###Code
list(_kql_raw_result_.palette)
###Output
_____no_output_____
###Markdown
get palette_name (name of the palette)
###Code
_kql_raw_result_.options['palette_name']
###Output
_____no_output_____
###Markdown
get palette_colors value (number of colors in palette)
###Code
_kql_raw_result_.options['palette_colors']
###Output
_____no_output_____
###Markdown
get palette_desaturation value
###Code
_kql_raw_result_.options['palette_desaturation']
###Output
_____no_output_____
###Markdown
get palette_reverse state
###Code
_kql_raw_result_.options['palette_reverse']
###Output
_____no_output_____
###Markdown
get builtin palettes
###Code
_kql_raw_result_.palettes
###Output
_____no_output_____
###Markdown
get builtin palette by name
###Code
_kql_raw_result_.palettes['Oranges']
###Output
_____no_output_____
###Markdown
get builtin palette by index
###Code
_kql_raw_result_.palettes[5]
###Output
_____no_output_____
###Markdown
get slice of an indexed builtin palette
###Code
_kql_raw_result_.palettes[5][:6]
###Output
_____no_output_____
###Markdown
get a color of a slice from an indexed builtin palette
###Code
_kql_raw_result_.palettes[5][:6][3]
###Output
_____no_output_____
###Markdown
get slice of a named builtin palette
###Code
_kql_raw_result_.palettes['terrain'][:6]
###Output
_____no_output_____
###Markdown
get all the names of the builtin palettes
###Code
list(_kql_raw_result_.palettes)
###Output
_____no_output_____
###Markdown
get the raw rgb data of a slice from an indexed builtin palette
###Code
list(_kql_raw_result_.palettes[5][:6])
%kql --help
###Output
_____no_output_____
###Markdown
Kqlmagic - __palette__ features***Explains how to customize and use Kqlmagic **palette** features****** Make sure that you have the latest version of KqlmagicDownload Kqlmagic from github and install/update (if the latest version is already installed you can skip this step)
###Code
#!pip install Kqlmagic --upgrade
###Output
_____no_output_____
###Markdown
Add Kqlmagic to notebook magics
###Code
#%pushd C:\My Projects\jupyter-Kqlmagic-microsoft\azure
%reload_ext Kqlmagic
#%popd
###Output
_____no_output_____
###Markdown
Authenticate to get access to data
###Code
%kql kusto://code().cluster('help').database('Samples')
###Output
_____no_output_____
###Markdown
Query and render to piechart
###Code
%%kql
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Use palette Reds for the piechart- set option -palette_name or -pn to Reds
###Code
%%kql -palette_name Reds
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
- *Note - to set a different palette, use option -palette_name* Use desaturated Reds for the piechart- set option -palette_desaturation or -pd with a value between 0 to 1
###Code
%%kql -palette_name Reds -palette_desaturation 0.5
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Show current default palette
###Code
%kql -palette
###Output
_____no_output_____
###Markdown
palette desaturated by half (0.5)- -pd is an abbreviation of -palette_desaturation- desaturation value must be between 0 and 1
###Code
%kql -palette -pd 0.5
###Output
_____no_output_____
###Markdown
- *Note - desaturation value is explicitly specified only if it is less than 1* palette extended to 40 colors- -pc is an abbreviation of palette_colors
###Code
%kql -palette -pd 0.5 -pc 40
###Output
_____no_output_____
###Markdown
alternative Reds palette, desaturated by 0.5, 40 colors
###Code
%kql -palette -pn Reds -pd 0.5 -pc 40
###Output
_____no_output_____
###Markdown
Builtin palettes popup all builtin palettes- push the button to open a popup window with the palettes
###Code
%kql -popup_palettes
###Output
_____no_output_____
###Markdown
popup all builtin palettes desaturated- set option -palette_desaturation or -pd with a value between 0 to 1- push the button to open a popup window with the palettes
###Code
%kql -popup_palettes -palette_desaturation 0.5
###Output
_____no_output_____
###Markdown
popup all builtin n_colors palettes desaturated- set option -palette_colors or -pc with a value greater than 0- set option -palette_desaturation or -pd with a value between 0 and 1- push the button to open a popup window with the palettes
###Code
%kql -popup_palettes -pd 0.5 -pc 20
###Output
_____no_output_____
###Markdown
Configure default palette properties- palette_name- palette_desaturation- palette_colors show palette name of the default palette
###Code
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
modify the default palette
###Code
%config Kqlmagic.palette_name = 'Greens'
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
show palette desaturation default
###Code
%config Kqlmagic.palette_desaturation
###Output
_____no_output_____
###Markdown
modify palette desaturation
###Code
%config Kqlmagic.palette_desaturation = 0.95
%config Kqlmagic.palette_desaturation
###Output
_____no_output_____
###Markdown
show number of colors in default palette
###Code
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
modify number of colors in default palette
###Code
%config Kqlmagic.palette_colors = 6
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
show the default palette based on the new defaults
###Code
%kql -palette
%%kql -palette_desaturation 0.5
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
- *Note - The palette size extends automatically, if required by the chart (in the above chart it was extended from a 6-color palette to a 10-color palette)* Reverse color order of palette- Set option -palette_reverse / -pr
###Code
%%kql -palette_reverse
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
%config Kqlmagic.palette_colors = 20
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
Palette derivative names- basic names- subset names- reversed names- custom basic pastel palette
###Code
%kql -palette -pn pastel
###Output
_____no_output_____
###Markdown
- *Note - When not specified, palette_name, palette_colors and palette_desaturation default properties are used* subset derived from a basic palette
###Code
%kql -palette -pn pastel[4:11]
###Output
_____no_output_____
###Markdown
reversed from a basic palette
###Code
%kql -palette -pn pastel_r
###Output
_____no_output_____
###Markdown
reversed subset of a basic palette
###Code
%kql -palette -pn pastel[4:11]_r
###Output
_____no_output_____
###Markdown
piechart rendered using a derived palette
###Code
%%kql -pn pastel[4:11]_r
StormEvents
| summarize count() by State
| sort by count_
| limit 7
| render piechart
###Output
_____no_output_____
###Markdown
config default palette with a derived palette
###Code
%config Kqlmagic.palette_name = 'pastel[4:11]_r'
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
barchart rendered using the new default palette
###Code
%%kql
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render barchart
###Output
_____no_output_____
###Markdown
Custom palette- define an alternative basic palette- have same characteristics as basic palette, a subset can be derived and/or reversed show custom palette- make sure that there are no white spaces in the string that defines the custom palette
###Code
%kql -palette -pn ['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)']
###Output
_____no_output_____
###Markdown
show derived subset palette from the custom palette
###Code
%kql -palette -pn ['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]
###Output
_____no_output_____
###Markdown
show derived reversed subset palette from custom palette
###Code
%kql -palette -pn ['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r
###Output
_____no_output_____
###Markdown
columnchart rendered using a custom palette
###Code
%%kql -pn ['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render columnchart
###Output
_____no_output_____
###Markdown
set custom palette as default palette
###Code
%config Kqlmagic.palette_name = "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r"
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
barchart rendered using default palette (set to the custom chart)
###Code
%%kql
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render barchart
###Output
_____no_output_____
###Markdown
Python integration
###Code
%%kql -pn Spectral -pd 0.95 -pc 10
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
get last kql result palette
###Code
_kql_raw_result_.palette
###Output
_____no_output_____
###Markdown
get a slice from palette- behave as list of colors
###Code
_kql_raw_result_.palette[3:]
###Output
_____no_output_____
###Markdown
get one color from palette
###Code
_kql_raw_result_.palette[7]
###Output
_____no_output_____
###Markdown
get the palette raw rgb data
###Code
list(_kql_raw_result_.palette)
###Output
_____no_output_____
###Markdown
get palette_name (name of the palette)
###Code
_kql_raw_result_.options['palette_name']
###Output
_____no_output_____
###Markdown
get palette_colors value (number of colors in palette)
###Code
_kql_raw_result_.options['palette_colors']
###Output
_____no_output_____
###Markdown
get palette_desaturation value
###Code
_kql_raw_result_.options['palette_desaturation']
###Output
_____no_output_____
###Markdown
get palette_reverse state
###Code
_kql_raw_result_.options['palette_reverse']
###Output
_____no_output_____
###Markdown
get builtin palettes
###Code
_kql_raw_result_.palettes
###Output
_____no_output_____
###Markdown
get builtin palette by name
###Code
_kql_raw_result_.palettes['Oranges']
###Output
_____no_output_____
###Markdown
get builtin palette by index
###Code
_kql_raw_result_.palettes[5]
###Output
_____no_output_____
###Markdown
get slice of an indexed builtin palette
###Code
_kql_raw_result_.palettes[5][:6]
###Output
_____no_output_____
###Markdown
get a color of a slice from an indexed builtin palette
###Code
_kql_raw_result_.palettes[5][:6][3]
###Output
_____no_output_____
###Markdown
get slice of a named builtin palette
###Code
_kql_raw_result_.palettes['terrain'][:6]
###Output
_____no_output_____
###Markdown
get all the names of the builtin palettes
###Code
list(_kql_raw_result_.palettes)
###Output
_____no_output_____
###Markdown
get the raw rgb data of a slice from an indexed builtin palette
###Code
list(_kql_raw_result_.palettes[5][:6])
###Output
_____no_output_____
###Markdown
Kqlmagic - __palette__ features***Explains how to customize and use Kqlmagic **palette** features****** Make sure that you have the latest version of KqlmagicDownload Kqlmagic from github and install/update (if the latest version is already installed you can skip this step)
###Code
#!pip install git+git://github.com/Microsoft/jupyter-Kqlmagic.git
###Output
_____no_output_____
###Markdown
Add Kqlmagic to notebook magics
###Code
#%pushd C:\My Projects\jupyter-Kqlmagic-microsoft\src
%reload_ext kql
#%popd
###Output
_____no_output_____
###Markdown
Authenticate to get access to data
###Code
%kql kusto://code().cluster('help').database('Samples')
###Output
_____no_output_____
###Markdown
Query and render to piechart
###Code
%%kql
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Use palette Reds for the piechart- set option -palette_name or -pn to Reds
###Code
%%kql -palette_name Reds
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
- *Note - to set a different palette, use option -palette_name* Use desaturated Reds for the piechart- set option -palette_desaturation or -pd with a value between 0 to 1
###Code
%%kql -palette_name Reds -palette_desaturation 0.5
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Show current default palette
###Code
%kql -palette
###Output
_____no_output_____
###Markdown
palette desaturated by half (0.5)- -pd is an abbreviation of -palette_desaturation- desaturation value must be between 0 and 1
###Code
%kql -palette -pd 0.5
###Output
_____no_output_____
###Markdown
- *Note - desaturation value is explicitly specified only if it is less than 1* palette extended to 40 colors- -pc is an abbreviation of palette_colors
###Code
%kql -palette -pd 0.5 -pc 40
###Output
_____no_output_____
###Markdown
alternative Reds palette, desaturated by 0.5, 40 colors
###Code
%kql -palette -pn Reds -pd 0.5 -pc 40
###Output
_____no_output_____
###Markdown
Builtin palettes popup all builtin palettes- push the button to open a popup window with the palettes
###Code
%kql -popup_palettes
###Output
_____no_output_____
###Markdown
popup all builtin palettes desaturated- set option -palette_desaturation or -pd with a value between 0 to 1- push the button to open a popup window with the palettes
###Code
%kql -popup_palettes -palette_desaturation 0.5
###Output
_____no_output_____
###Markdown
popup all builtin n_colors palettes desaturated- set option -palette_colors or -pc with a value greater than 0- set option -palette_desaturation or -pd with a value between 0 and 1- push the button to open a popup window with the palettes
###Code
%kql -popup_palettes -pd 0.5 -pc 20
###Output
_____no_output_____
###Markdown
Configure default palette properties- palette_name- palette_desaturation- palette_colors show palette name of the default palette
###Code
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
modify the default palette
###Code
%config Kqlmagic.palette_name = 'Greens'
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
show palette desaturation default
###Code
%config Kqlmagic.palette_desaturation
###Output
_____no_output_____
###Markdown
modify palette desaturation
###Code
%config Kqlmagic.palette_desaturation = 0.95
%config Kqlmagic.palette_desaturation
###Output
_____no_output_____
###Markdown
show number of colors in default palette
###Code
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
modify number of colors in default palette
###Code
%config Kqlmagic.palette_colors = 6
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
show the default palette based on the new defaults
###Code
%kql -palette
%%kql -palette_desaturation 0.5
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
- *Note - The palette size extends automatically, if required by the chart (in the above chart it was extended from a 6-color palette to a 10-color palette)* Reverse color order of palette- Set option -palette_reverse / -pr
###Code
%%kql -palette_reverse
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
%config Kqlmagic.palette_colors = 20
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
Palette derivative names- basic names- subset names- reversed names- custom basic pastel palette
###Code
%kql -palette -pn pastel
###Output
_____no_output_____
###Markdown
- *Note - When not specified, palette_name, palette_colors and palette_desaturation default properties are used* subset derived from a basic palette
###Code
%kql -palette -pn pastel[4:11]
###Output
_____no_output_____
###Markdown
reversed from a basic palette
###Code
%kql -palette -pn pastel_r
###Output
_____no_output_____
###Markdown
reversed subset of a basic palette
###Code
%kql -palette -pn pastel[4:11]_r
###Output
_____no_output_____
###Markdown
piechart rendered using a derived palette
###Code
%%kql -pn pastel[4:11]_r
StormEvents
| summarize count() by State
| sort by count_
| limit 7
| render piechart
###Output
_____no_output_____
###Markdown
config default palette with a derived palette
###Code
%config Kqlmagic.palette_name = 'pastel[4:11]_r'
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
barchart rendered using the new default palette
###Code
%%kql
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render barchart
###Output
_____no_output_____
###Markdown
Custom palette- define an alternative basic palette- have same characteristics as basic palette, a subset can be derived and/or reversed show custom palette- make sure that there are no white spaces in the string that defines the custom palette
###Code
%kql -palette -pn ['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)']
###Output
_____no_output_____
###Markdown
show derived subset palette from the custom palette
###Code
%kql -palette -pn ['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]
###Output
_____no_output_____
###Markdown
show derived reversed subset palette from custom palette
###Code
%kql -palette -pn ['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r
###Output
_____no_output_____
###Markdown
columnchart rendered using a custom palette
###Code
%%kql -pn ['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render columnchart
###Output
_____no_output_____
###Markdown
set custom palette as default palette
###Code
%config Kqlmagic.palette_name = "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r"
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
barchart rendered using default palette (set to the custom chart)
###Code
%%kql
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render barchart
###Output
_____no_output_____
###Markdown
Python integration
###Code
%%kql -pn Spectral -pd 0.95 -pc 10
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
get last kql result palette
###Code
_kql_raw_result_.palette
###Output
_____no_output_____
###Markdown
get a slice from palette- behave as list of colors
###Code
_kql_raw_result_.palette[3:]
###Output
_____no_output_____
###Markdown
get one color from palette
###Code
_kql_raw_result_.palette[7]
###Output
_____no_output_____
###Markdown
get the palette raw rgb data
###Code
list(_kql_raw_result_.palette)
###Output
_____no_output_____
###Markdown
get palette_name (name of the palette)
###Code
_kql_raw_result_.options['palette_name']
###Output
_____no_output_____
###Markdown
get palette_colors value (number of colors in palette)
###Code
_kql_raw_result_.options['palette_colors']
###Output
_____no_output_____
###Markdown
get palette_desaturation value
###Code
_kql_raw_result_.options['palette_desaturation']
###Output
_____no_output_____
###Markdown
get palette_reverse state
###Code
_kql_raw_result_.options['palette_reverse']
###Output
_____no_output_____
###Markdown
get builtin palettes
###Code
_kql_raw_result_.palettes
###Output
_____no_output_____
###Markdown
get builtin palette by name
###Code
_kql_raw_result_.palettes['Oranges']
###Output
_____no_output_____
###Markdown
get builtin palette by index
###Code
_kql_raw_result_.palettes[5]
###Output
_____no_output_____
###Markdown
get slice of an indexed builtin palette
###Code
_kql_raw_result_.palettes[5][:6]
###Output
_____no_output_____
###Markdown
get a color of a slice from an indexed builtin palette
###Code
_kql_raw_result_.palettes[5][:6][3]
###Output
_____no_output_____
###Markdown
get slice of a named builtin palette
###Code
_kql_raw_result_.palettes['terrain'][:6]
###Output
_____no_output_____
###Markdown
get all the names of the builtin palettes
###Code
list(_kql_raw_result_.palettes)
###Output
_____no_output_____
###Markdown
get the raw rgb data of a slice from an indexed builtin palette
###Code
list(_kql_raw_result_.palettes[5][:6])
###Output
_____no_output_____
###Markdown
Kqlmagic - __palette__ features***Explains how to customize and use Kqlmagic **palette** features****** Make sure that you have the latest version of KqlmagicDownload Kqlmagic from PyPI and install/update (if the latest version is already installed you can skip this step)
###Code
#!pip install Kqlmagic --no-cache-dir --upgrade
###Output
_____no_output_____
###Markdown
Add Kqlmagic to notebook magics
###Code
%reload_ext Kqlmagic
###Output
_____no_output_____
###Markdown
Authenticate to get access to data
###Code
%kql AzureDataExplorer://code;cluster='help';database='Samples'
###Output
_____no_output_____
###Markdown
Query and render to piechart
###Code
%%kql
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Use palette Reds for the piechart- set option -palette_name or -pn to Reds
###Code
%%kql -palette_name "Reds"
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
- *Note - to set a different palette, use option -palette_name* Use desaturated Reds for the piechart- set option -palette_desaturation or -pd with a value between 0 to 1
###Code
%%kql -palette_name "Reds" -palette_desaturation 0.5
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Parametrized palette and desaturation- set python variables to the palette and desaturation value, and refer to them from the magic
###Code
my_palettes = ["Reds", "Greens", "Blues"]
my_saturation = 0.7
current_palette = 2
%%kql -palette_name my_palettes[current_palette] -palette_desaturation my_saturation
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
current_palette = 0
%%kql -palette_name my_palettes[current_palette] -palette_desaturation my_saturation
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Show current default palette
###Code
%kql --palette
###Output
_____no_output_____
###Markdown
palette desaturated by half (0.5)- -pd is an abbreviation of -palette_desaturation- desaturation value must be between 0 and 1
###Code
%kql --palette -pd 0.5
###Output
_____no_output_____
###Markdown
- *Note - desaturation value is explicitly specified only if it is less than 1* palette extended to 40 colors- -pc is an abbreviation of palette_colors
###Code
%kql --palette -pd 0.5 -pc 40
###Output
_____no_output_____
###Markdown
alternative Reds palette, desaturated by 0.5, 40 colors - Beware that palettes are case sensitive
###Code
%kql --palette -pn "Reds" -pd 0.5 -pc 40
###Output
_____no_output_____
###Markdown
Builtin palettes popup all builtin palettes- push the button to open a popup window with the palettes
###Code
%kql --palettes -popup_window
###Output
_____no_output_____
###Markdown
show all builtin palettes desaturated- set option -palette_desaturation or -pd with a value between 0 to 1
###Code
%kql --palettes -palette_desaturation 0.5
###Output
_____no_output_____
###Markdown
popup all builtin n_colors palettes desaturated- set option -palette_colors or -pc with a value greater than 0- set option -palette_desaturation or -pd with a value between 0 and 1- push the button to open a popup window with the palettes
###Code
%kql --palettes -pd 0.5 -pc 20 -pw
###Output
_____no_output_____
###Markdown
Configure default palette properties- palette_name- palette_desaturation- palette_colors show palette name of the default palette
###Code
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
modify the default palette
###Code
%config Kqlmagic.palette_name = 'Greens'
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
show palette desaturation default
###Code
%config Kqlmagic.palette_desaturation
###Output
_____no_output_____
###Markdown
modify palette desaturation
###Code
%config Kqlmagic.palette_desaturation = 0.95
%config Kqlmagic.palette_desaturation
###Output
_____no_output_____
###Markdown
show number of colors in default palette
###Code
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
modify number of colors in default palette
###Code
%config Kqlmagic.palette_colors = 6
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
show the default palette based on the new defaults
###Code
%kql --palette
%%kql -palette_desaturation 0.5
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
- *Note - The palette size extends automatically, if required by the chart (in the above chart it was extended from a 6-color palette to a 10-color palette)* Reverse color order of palette- Set option -palette_reverse / -pr
###Code
%%kql -palette_reverse
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
%config Kqlmagic.palette_colors = 20
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
Palette derivative names- basic names- subset names- reversed names- custom basic pastel palette
###Code
%kql --palette -pn "pastel"
###Output
_____no_output_____
###Markdown
- *Note - When not specified, palette_name, palette_colors and palette_desaturation default properties are used* subset derived from a basic palette
###Code
%kql --palette -pn "pastel[4:11]"
###Output
_____no_output_____
###Markdown
reversed from a basic palette
###Code
%kql --palette -pn "pastel_r"
###Output
_____no_output_____
###Markdown
reversed subset of a basic palette
###Code
%kql --palette -pn "pastel[4:11]_r"
###Output
_____no_output_____
###Markdown
piechart rendered using a derived palette
###Code
%%kql -pn "pastel[4:11]_r"
StormEvents
| summarize count() by State
| sort by count_
| limit 7
| render piechart
###Output
_____no_output_____
###Markdown
config default palette with a derived palette
###Code
%config Kqlmagic.palette_name = "pastel[4:11]_r"
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
barchart rendered using the new default palette
###Code
%%kql
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render barchart
###Output
_____no_output_____
###Markdown
Custom palette- define an alternative basic palette- have same characteristics as basic palette, a subset can be derived and/or reversed show custom palette- make sure that there are no white spaces in the string that defines the custom palette
###Code
%kql --palette -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)']"
###Output
_____no_output_____
###Markdown
show derived subset palette from the custom palette
###Code
%kql --palette -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]"
###Output
_____no_output_____
###Markdown
show derived reversed subset palette from custom palette
###Code
%kql --palette -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r"
###Output
_____no_output_____
###Markdown
columnchart rendered using a custom palette
###Code
%%kql -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r"
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render columnchart
###Output
_____no_output_____
###Markdown
set custom palette as default palette
###Code
%config Kqlmagic.palette_name = "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r"
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
barchart rendered using default palette (set to the custom chart)
###Code
%%kql
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render barchart
###Output
_____no_output_____
###Markdown
Python integration
###Code
%%kql -pn "Spectral" -pd 0.95 -pc 10
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
get last kql result palette
###Code
_kql_raw_result_.palette
###Output
_____no_output_____
###Markdown
get a slice from palette- behave as list of colors
###Code
_kql_raw_result_.palette[3:]
###Output
_____no_output_____
###Markdown
get one color from palette
###Code
_kql_raw_result_.palette[7]
###Output
_____no_output_____
###Markdown
get the palette raw rgb data
###Code
list(_kql_raw_result_.palette)
###Output
_____no_output_____
###Markdown
get palette_name (name of the palette)
###Code
_kql_raw_result_.options['palette_name']
###Output
_____no_output_____
###Markdown
get palette_colors value (number of colors in palette)
###Code
_kql_raw_result_.options['palette_colors']
###Output
_____no_output_____
###Markdown
get palette_desaturation value
###Code
_kql_raw_result_.options['palette_desaturation']
###Output
_____no_output_____
###Markdown
get palette_reverse state
###Code
_kql_raw_result_.options['palette_reverse']
###Output
_____no_output_____
###Markdown
get builtin palettes
###Code
_kql_raw_result_.palettes
###Output
_____no_output_____
###Markdown
get builtin palette by name
###Code
_kql_raw_result_.palettes['Oranges']
###Output
_____no_output_____
###Markdown
get builtin palette by index
###Code
_kql_raw_result_.palettes[5]
###Output
_____no_output_____
###Markdown
get slice of an indexed builtin palette
###Code
_kql_raw_result_.palettes[5][:6]
###Output
_____no_output_____
###Markdown
get a color of a slice from an indexed builtin palette
###Code
_kql_raw_result_.palettes[5][:6][3]
###Output
_____no_output_____
###Markdown
get slice of a named builtin palette
###Code
_kql_raw_result_.palettes['terrain'][:6]
###Output
_____no_output_____
###Markdown
get all the names of the builtin palettes
###Code
list(_kql_raw_result_.palettes)
###Output
_____no_output_____
###Markdown
get the raw rgb data of a slice from an indexed builtin palette
###Code
list(_kql_raw_result_.palettes[5][:6])
%kql --help
###Output
_____no_output_____
###Markdown
Kqlmagic - __palette__ features***Explains how to customize and use Kqlmagic **palette** features****** Make sure that you have the latest version of KqlmagicDownload Kqlmagic from PyPI and install/update (if the latest version is already installed you can skip this step)
###Code
#!pip install Kqlmagic --no-cache-dir --upgrade
###Output
_____no_output_____
###Markdown
Add Kqlmagic to notebook magics
###Code
#%env KQLMAGIC_LOG_LEVEL=DEBUG
#%env KQLMAGIC_LOG_FILE_MODE=Append
#%env KQLMAGIC_LOG_FILE=michael.log
#%env KQLMAGIC_LOG_FILE_PREFIX=myLog
%reload_ext Kqlmagic
###Output
_____no_output_____
###Markdown
Authenticate to get access to data
###Code
%kql AzureDataExplorer://code;cluster='help';database='Samples'
###Output
_____no_output_____
###Markdown
Query and render to piechart
###Code
%%kql
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Use palette Reds for the piechart- set option -palette_name or -pn to Reds
###Code
%%kql -palette_name "Reds"
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
- *Note - to set a different palette, use option -palette_name* Use desaturated Reds for the piechart- set option -palette_desaturation or -pd with a value between 0 to 1
###Code
%%kql -palette_name "Reds" -palette_desaturation 0.5
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Parametrized palette and desaturation- set python variables to the palette and desaturation value, and refer to them from the magic
###Code
my_palettes = ["Reds", "Greens", "Blues"]
my_saturation = 0.7
current_palette = 2
%%kql -palette_name my_palettes[current_palette] -palette_desaturation my_saturation
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
current_palette = 0
%%kql -palette_name my_palettes[current_palette] -palette_desaturation my_saturation
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Show current default palette
###Code
%kql --palette
###Output
_____no_output_____
###Markdown
palette desaturated by half (0.5)- -pd is an abbreviation of -palette_desaturation- desaturation value must be between 0 and 1
###Code
%kql --palette -pd 0.5
###Output
_____no_output_____
###Markdown
- *Note - desaturation value is explicitly specified only if it is less than 1* palette extended to 40 colors- -pc is an abbreviation of palette_colors
###Code
%kql --palette -pd 0.5 -pc 40
###Output
_____no_output_____
###Markdown
Kqlmagic - __palette__ features***Explains how to customize and use Kqlmagic **palette** features****** Make sure that you have the latest version of KqlmagicDownload Kqlmagic from PyPI and install/update (if the latest version is already installed you can skip this step)
###Code
#!pip install Kqlmagic --no-cache-dir --upgrade
###Output
_____no_output_____
###Markdown
Add Kqlmagic to notebook magics
###Code
#%pushd C:\My Projects\jupyter-Kqlmagic-microsoft\azure
%reload_ext Kqlmagic
#%popd
###Output
_____no_output_____
###Markdown
Authenticate to get access to data
###Code
%kql AzureDataExplorer://code;cluster='help';database='Samples'
###Output
_____no_output_____
###Markdown
Query and render to piechart
###Code
%%kql
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Use palette Reds for the piechart- set option -palette_name or -pn to Reds
###Code
%%kql -palette_name "Reds"
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
- *Note - to set a different palette, use option -palette_name* Use desaturated Reds for the piechart- set option -palette_desaturation or -pd with a value between 0 to 1
###Code
%%kql -palette_name "Reds" -palette_desaturation 0.5
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Parametrized palette and desaturation- set python variables to the palette and desaturation value, and refer to them from the magic
###Code
my_palettes = ["Reds", "Greens", "Blues"]
my_saturation = 0.7
current_palette = 2
%%kql -palette_name my_palettes[current_palette] -palette_desaturation my_saturation
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
current_palette = 0
%%kql -palette_name my_palettes[current_palette] -palette_desaturation my_saturation
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
Show current default palette
###Code
%kql --palette
###Output
_____no_output_____
###Markdown
palette desaturated by half (0.5)- -pd is an abbreviation of -palette_desaturation- desaturation value must be between 0 and 1
###Code
%kql --palette -pd 0.5
###Output
_____no_output_____
###Markdown
- *Note - desaturation value is explicitly specified only if it is less than 1* palette extended to 40 colors- -pc is an abbreviation of palette_colors
###Code
%kql --palette -pd 0.5 -pc 40
###Output
_____no_output_____
###Markdown
alternative Reds palette, desaturated by 0.5, 40 colors - Beware that palettes are case sensitive
###Code
%kql --palette -pn "Reds" -pd 0.5 -pc 40
###Output
_____no_output_____
###Markdown
Builtin palettes popup all builtin palettes- push the button to open a popup window with the palettes
###Code
%kql --palettes -popup_window
###Output
_____no_output_____
###Markdown
show all builtin palettes desaturated- set option -palette_desaturation or -pd with a value between 0 to 1
###Code
%kql --palettes -palette_desaturation 0.5
###Output
_____no_output_____
###Markdown
popup all builtin n_colors palettes desaturated- set option -palette_colors or -pc with a value greater than 0- set option -palette_desaturation or -pd with a value between 0 and 1- push the button to open a popup window with the palettes
###Code
%kql --palettes -pd 0.5 -pc 20 -pw
###Output
_____no_output_____
###Markdown
Configure default palette properties- palette_name- palette_desaturation- palette_colors show palette name of the default palette
###Code
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
modify the default palette
###Code
%config Kqlmagic.palette_name = 'Greens'
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
show palette desaturation default
###Code
%config Kqlmagic.palette_desaturation
###Output
_____no_output_____
###Markdown
modify palette desaturation
###Code
%config Kqlmagic.palette_desaturation = 0.95
%config Kqlmagic.palette_desaturation
###Output
_____no_output_____
###Markdown
show number of colors in default palette
###Code
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
modify number of colors in default palette
###Code
%config Kqlmagic.palette_colors = 6
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
show the default palette based on the new defaults
###Code
%kql --palette
%%kql -palette_desaturation 0.5
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
- *Note - The palette size extends automatically, if required by the chart (in the above chart it was extended from a 6-color palette to a 10-color palette)* Reverse color order of palette- Set option -palette_reverse / -pr
###Code
%%kql -palette_reverse
StormEvents
| summarize count() by State
| sort by count_
| limit 10
| render piechart
%config Kqlmagic.palette_colors = 20
%config Kqlmagic.palette_colors
###Output
_____no_output_____
###Markdown
Palette derivative names- basic names- subset names- reversed names- custom basic pastel palette
###Code
%kql --palette -pn "pastel"
###Output
_____no_output_____
###Markdown
- *Note - When not specified, palette_name, palette_colors and palette_desaturation default properties are used* subset derived from a basic palette
###Code
%kql --palette -pn "pastel[4:11]"
###Output
_____no_output_____
###Markdown
reversed from a basic palette
###Code
%kql --palette -pn "pastel_r"
###Output
_____no_output_____
###Markdown
reversed subset of a basic palette
###Code
%kql --palette -pn "pastel[4:11]_r"
###Output
_____no_output_____
###Markdown
piechart rendered using a derived palette
###Code
%%kql -pn "pastel[4:11]_r"
StormEvents
| summarize count() by State
| sort by count_
| limit 7
| render piechart
###Output
_____no_output_____
###Markdown
config default palette with a derived palette
###Code
%config Kqlmagic.palette_name = "pastel[4:11]_r"
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
barchart rendered using the new default palette
###Code
%%kql
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render barchart
###Output
_____no_output_____
###Markdown
Custom palette- define an alternative basic palette- has the same characteristics as a basic palette: a subset can be derived and/or it can be reversed show custom palette- make sure that there are no white spaces in the string that defines the custom palette
###Code
%kql --palette -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)']"
###Output
_____no_output_____
###Markdown
show derived subset palette from the custom palette
###Code
%kql --palette -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]"
###Output
_____no_output_____
###Markdown
show derived reversed subset palette from custom palette
###Code
%kql --palette -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r"
###Output
_____no_output_____
###Markdown
columnchart rendered using a custom palette
###Code
%%kql -pn "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r"
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render columnchart
###Output
_____no_output_____
###Markdown
set custom palette as default palette
###Code
%config Kqlmagic.palette_name = "['rgb(127,155,173)','rgb(103,135,156)','rgb(82,114,140)','rgb(63,93,122)','rgb(48,73,102)'][1:4]_r"
%config Kqlmagic.palette_name
###Output
_____no_output_____
###Markdown
barchart rendered using the default palette (set to the custom palette)
###Code
%%kql
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 7
| render barchart
###Output
_____no_output_____
###Markdown
Python integration
###Code
%%kql -pn "Spectral" -pd 0.95 -pc 10
StormEvents
| summarize count() by State
| extend count2 = count_*1.4
| extend count3 = count_*2.4
| sort by count_
| limit 10
| render piechart
###Output
_____no_output_____
###Markdown
get last kql result palette
###Code
_kql_raw_result_.palette
###Output
_____no_output_____
###Markdown
get a slice from palette- behave as list of colors
###Code
_kql_raw_result_.palette[3:]
###Output
_____no_output_____
###Markdown
get one color from palette
###Code
_kql_raw_result_.palette[7]
###Output
_____no_output_____
###Markdown
get the palette raw rgb data
###Code
list(_kql_raw_result_.palette)
###Output
_____no_output_____
###Markdown
get palette_name (name of the palette)
###Code
_kql_raw_result_.options['palette_name']
###Output
_____no_output_____
###Markdown
get palette_colors value (number of colors in palette)
###Code
_kql_raw_result_.options['palette_colors']
###Output
_____no_output_____
###Markdown
get palette_desaturation value
###Code
_kql_raw_result_.options['palette_desaturation']
###Output
_____no_output_____
###Markdown
get palette_reverse state
###Code
_kql_raw_result_.options['palette_reverse']
###Output
_____no_output_____
###Markdown
get builtin palettes
###Code
_kql_raw_result_.palettes
###Output
_____no_output_____
###Markdown
get builtin palette by name
###Code
_kql_raw_result_.palettes['Oranges']
###Output
_____no_output_____
###Markdown
get builtin palette by index
###Code
_kql_raw_result_.palettes[5]
###Output
_____no_output_____
###Markdown
get slice of an indexed builtin palette
###Code
_kql_raw_result_.palettes[5][:6]
###Output
_____no_output_____
###Markdown
get a color of a slice from an indexed builtin palette
###Code
_kql_raw_result_.palettes[5][:6][3]
###Output
_____no_output_____
###Markdown
get slice of a named builtin palette
###Code
_kql_raw_result_.palettes['terrain'][:6]
###Output
_____no_output_____
###Markdown
get all the names of the builtin palettes
###Code
list(_kql_raw_result_.palettes)
###Output
_____no_output_____
###Markdown
get the raw rgb data of a slice from an indexed builtin palette
###Code
list(_kql_raw_result_.palettes[5][:6])
###Output
_____no_output_____ |
BayesianAB.ipynb | ###Markdown
- [github](https://github.com/o93/bayesian-ab)- [colab](https://colab.research.google.com/drive/1xkQ-KFfcyZXdXboOCALkDH_XCwQrsB25) How to use This is a simple Bayesian A/B-test evaluation tool.1. From "File" at the top left, choose "Save a copy in Drive" so you can edit the notebook1. Enter the current results of your A/B test - the number of accesses and the number of conversions (CV) must each be at least 1, otherwise no decision can be made1. Enter the decision threshold (optional) - for example, to decide which of A and B is better with 90% probability, enter `0.9`1. Enter the sample size (optional) - a larger sample takes longer to process but reduces the variability of the decision1. Press the run button at the top left
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
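# betaf: Beta(alpha, beta) density evaluated on the grid `rates`, normalized to sum to 1
# (a discrete grid approximation of the distribution)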
def betaf(alpha, beta, rates):
numerator = rates ** (alpha - 1) * (1 - rates) ** (beta - 1)
return numerator / numerator.sum()
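# posterior: Beta posterior over the conversion rate after observing a conversions out of N trials
# (uniform prior; the `prior` argument is kept in the signature but currently unused)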
def posterior(a, N, prior, rates):
return betaf(a + 1, N - a + 1, rates)
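# hmv: highest-density region, i.e. the rates (sorted ascending) whose total probability mass is at most `alpha`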
def hmv(xs, ps, alpha):
xps = sorted(zip(xs, ps), key=lambda xp: xp[1], reverse=True)
xps = np.array(xps)
xs = xps[:, 0]
ps = xps[:, 1]
return np.sort(xs[np.cumsum(ps) <= alpha])
def plot_p(name, rates, v, hm, c):
plt.plot(rates, v, label=name, color=c)
region = (hm.min() < rates) & (rates < hm.max())
plt.fill_between(rates[region], v[region], 0, alpha=0.3, color=c)
# Run the evaluation
def execute_ab(a, b, threshold, sample_size):
# Build the results DataFrame
f = pd.DataFrame([a, b], index=['テストA', 'テストB'], columns=['アクセス数','CV数'])
f['CV率'] = f['CV数'] / f['アクセス数']
rates = np.linspace(0, 1, sample_size + 1, dtype=np.float128)
ra = 1 / len(rates)
rb = 1 / len(rates)
# Probability distributions
pa = posterior(a[1], a[0], ra, rates)
pb = posterior(b[1], b[0], rb, rates)
# Error bounds (credible interval)
ha = hmv(rates, pa, alpha=threshold)
hb = hmv(rates, pb, alpha=threshold)
f['誤差MIN'] = (np.min(ha), np.min(hb))
f['誤差MAX'] = (np.max(ha), np.max(hb))
# Limit the plotted range
plot_index = np.where((pa > 0.000001) | (pb > 0.000001))
p_rates = rates[plot_index]
p_pa = pa[plot_index]
p_pb = pb[plot_index]
# Plot the probability distributions
plt.figure(figsize=(12, 4))
plot_p('A', p_rates, p_pa, ha, '#FF4444')
plot_p('B', p_rates, p_pb, hb, '#4444FF')
plt.title('Distributions')
plt.legend()
plt.grid(True)
plt.show()
# Sampling with equal sizes for A and B
sa = np.random.beta(a[1], a[0], size=sample_size // 2)
sb = np.random.beta(b[1], b[0], size=sample_size // 2)
print(' ')
# Probability of superiority
delta_a = sa - sb
delta_b = sb - sa
f.loc[f.index == 'テストA', '確率'] = (delta_a > 0).mean()
f.loc[f.index == 'テストB', '確率'] = (delta_b > 0).mean()
f['コメント'] = f.index + 'が優位な確率は' + (f['確率'] * 100).round(1).astype(str) + '%'
# Display the DataFrame
display(f)
print(' ')
win = f[f['確率'] > threshold]
if win.shape[0] == 0:
print('しきい値{:.1%}での判定: 優位差無し'.format(threshold))
else:
print('しきい値{:.1%}での判定: {}の勝利!'.format(threshold, win.index.values[0]))
#@markdown Enter (accesses, conversions) for each of A and B
a = (86231, 335) #@param {type:"raw"}, {type:"raw"}
b = (87098, 395) #@param {type:"raw"}
#@markdown Enter the decision threshold
threshold = 0.9 #@param {type:"number", min:0.0, max:1.0}
#@markdown Enter the sample size
sample_size = 1000000 #@param {type:"integer", min:0, max:1000000}
if a[0] == 0 or a[1] == 0 or b[0] == 0 or b[1] == 0:
print('a, bには0より大きい値を入力してください!')
else:
execute_ab(a, b, threshold, sample_size)
###Output
_____no_output_____ |
notebooks/features/other/DeepLearning - Flower Image Classification.ipynb | ###Markdown
Deep Learning - Flower Image Classification
###Code
from pyspark.ml import Transformer, Estimator, Pipeline
from pyspark.ml.classification import LogisticRegression
from synapse.ml.downloader import ModelDownloader
import os, sys, time
import os
if os.environ.get("AZURE_SERVICE", None) == "Microsoft.ProjectArcadia":
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
from notebookutils.visualization import display
if os.environ.get("AZURE_SERVICE", None) == "Microsoft.ProjectArcadia":
modelDir = "abfss://[email protected]/models/"
else:
modelDir = "dbfs:/models/"
model = ModelDownloader(spark, modelDir).downloadByName("ResNet50")
# Load the images
# use flowers_and_labels.parquet on larger cluster in order to get better results
imagesWithLabels = (
spark.read.parquet(
"wasbs://[email protected]/flowers_and_labels2.parquet"
)
.withColumnRenamed("bytes", "image")
.sample(0.1)
)
imagesWithLabels.printSchema()
###Output
_____no_output_____
###Markdown

###Code
from synapse.ml.opencv import ImageTransformer
from synapse.ml.image import UnrollImage
from synapse.ml.cntk import ImageFeaturizer
from synapse.ml.stages import *
# Make some featurizers
it = ImageTransformer().setOutputCol("scaled").resize(size=(60, 60))
ur = UnrollImage().setInputCol("scaled").setOutputCol("features")
dc1 = DropColumns().setCols(["scaled", "image"])
lr1 = (
LogisticRegression().setMaxIter(8).setFeaturesCol("features").setLabelCol("labels")
)
dc2 = DropColumns().setCols(["features"])
basicModel = Pipeline(stages=[it, ur, dc1, lr1, dc2])
resnet = (
ImageFeaturizer()
.setInputCol("image")
.setOutputCol("features")
.setModelLocation(model.uri)
.setLayerNames(model.layerNames)
.setCutOutputLayers(1)
)
dc3 = DropColumns().setCols(["image"])
lr2 = (
LogisticRegression().setMaxIter(8).setFeaturesCol("features").setLabelCol("labels")
)
dc4 = DropColumns().setCols(["features"])
deepModel = Pipeline(stages=[resnet, dc3, lr2, dc4])
###Output
_____no_output_____
###Markdown
 How does it work? Run the experiment
###Code
def timedExperiment(model, train, test):
start = time.time()
result = model.fit(train).transform(test).toPandas()
print("Experiment took {}s".format(time.time() - start))
return result
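# Hold out 20% of the images for evaluation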
train, test = imagesWithLabels.randomSplit([0.8, 0.2])
train.count(), test.count()
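# Train and evaluate both pipelines; on the larger flowers dataset the ResNet-featurized model is far more accurate (see the note below the confusion matrix plot)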
basicResults = timedExperiment(basicModel, train, test)
deepResults = timedExperiment(deepModel, train, test)
###Output
_____no_output_____
###Markdown
Plot confusion matrix.
###Code
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
import numpy as np
def evaluate(results, name):
y, y_hat = results["labels"], results["prediction"]
y = [int(l) for l in y]
accuracy = np.mean([1.0 if pred == true else 0.0 for (pred, true) in zip(y_hat, y)])
cm = confusion_matrix(y, y_hat)
cm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
plt.text(
40, 10, "$Accuracy$ $=$ ${}\%$".format(round(accuracy * 100, 1)), fontsize=14
)
plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues)
plt.colorbar()
plt.xlabel("$Predicted$ $label$", fontsize=18)
plt.ylabel("$True$ $Label$", fontsize=18)
plt.title("$Normalized$ $CM$ $for$ ${}$".format(name))
plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
evaluate(deepResults, "CNTKModel + LR")
plt.subplot(1, 2, 2)
evaluate(basicResults, "LR")
# Note that on the larger dataset the accuracy will bump up from 44% to >90%
display(plt.show())
###Output
_____no_output_____
###Markdown
Deep Learning - Flower Image Classification
###Code
from pyspark.ml import Transformer, Estimator, Pipeline
from pyspark.ml.classification import LogisticRegression
from synapse.ml.downloader import ModelDownloader
import os, sys, time
model = ModelDownloader(spark, "dbfs:/models/").downloadByName("ResNet50")
# Load the images
# use flowers_and_labels.parquet on larger cluster in order to get better results
imagesWithLabels = spark.read.parquet("wasbs://[email protected]/flowers_and_labels2.parquet") \
.withColumnRenamed("bytes","image").sample(.1)
imagesWithLabels.printSchema()
###Output
_____no_output_____
###Markdown

###Code
from synapse.ml.opencv import ImageTransformer
from synapse.ml.image import UnrollImage
from synapse.ml.cntk import ImageFeaturizer
from synapse.ml.stages import *
# Make some featurizers
it = ImageTransformer()\
.setOutputCol("scaled")\
.resize(size=(60, 60))
ur = UnrollImage()\
.setInputCol("scaled")\
.setOutputCol("features")
dc1 = DropColumns().setCols(["scaled", "image"])
lr1 = LogisticRegression().setMaxIter(8).setFeaturesCol("features").setLabelCol("labels")
dc2 = DropColumns().setCols(["features"])
basicModel = Pipeline(stages=[it, ur, dc1, lr1, dc2])
resnet = ImageFeaturizer()\
.setInputCol("image")\
.setOutputCol("features")\
.setModelLocation(model.uri)\
.setLayerNames(model.layerNames)\
.setCutOutputLayers(1)
dc3 = DropColumns().setCols(["image"])
lr2 = LogisticRegression().setMaxIter(8).setFeaturesCol("features").setLabelCol("labels")
dc4 = DropColumns().setCols(["features"])
deepModel = Pipeline(stages=[resnet, dc3, lr2, dc4])
###Output
_____no_output_____
###Markdown
 How does it work? Run the experiment
###Code
def timedExperiment(model, train, test):
start = time.time()
result = model.fit(train).transform(test).toPandas()
print("Experiment took {}s".format(time.time() - start))
return result
train, test = imagesWithLabels.randomSplit([.8,.2])
train.count(), test.count()
basicResults = timedExperiment(basicModel, train, test)
deepResults = timedExperiment(deepModel, train, test)
###Output
_____no_output_____
###Markdown
Plot confusion matrix.
###Code
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
import numpy as np
def evaluate(results, name):
y, y_hat = results["labels"],results["prediction"]
y = [int(l) for l in y]
accuracy = np.mean([1. if pred==true else 0. for (pred,true) in zip(y_hat,y)])
cm = confusion_matrix(y, y_hat)
cm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
plt.text(40, 10,"$Accuracy$ $=$ ${}\%$".format(round(accuracy*100,1)),fontsize=14)
plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues)
plt.colorbar()
plt.xlabel("$Predicted$ $label$", fontsize=18)
plt.ylabel("$True$ $Label$", fontsize=18)
plt.title("$Normalized$ $CM$ $for$ ${}$".format(name))
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
evaluate(deepResults,"CNTKModel + LR")
plt.subplot(1,2,2)
evaluate(basicResults,"LR")
# Note that on the larger dataset the accuracy will bump up from 44% to >90%
display(plt.show())
###Output
_____no_output_____ |
ML & DL Prediction Model/Generated data/generate data.ipynb | ###Markdown
import libraries
###Code
%pip install Faker
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import random
import statistics
from mpl_toolkits import mplot3d
df=pd.read_csv("heart.csv")
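
# Illustrative sketch (not part of the original notebook): Faker is installed above but
# not used in the cells shown here; this is one way it could generate synthetic rows.
# The column names below are hypothetical examples, not columns taken from heart.csv.
from faker import Faker

fake = Faker()
synthetic = pd.DataFrame({
    'patient_name': [fake.name() for _ in range(5)],   # hypothetical text column
    'age': np.random.randint(29, 78, size=5),          # hypothetical numeric column
})
synthetic.head()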
###Output
_____no_output_____ |
docs/_downloads/b9b65ebc739602d396a984a8f7a1737a/memory_format_tutorial.ipynb | ###Markdown
(실험용) PyTorch를 사용한 Channels Last Memory Format*********************************************************Author**: `Vitaly Fedyunin `_**번역**: `Choi Yoonjeong `_Channels last가 무엇인가요----------------------------Channels last 메모리 형식(memory format)은 차원 순서를 유지하면서 메모리 상의 NCHW 텐서(tensor)를 정렬하는 또 다른 방식입니다.Channels last 텐서는 채널(Channel)이 가장 밀도가 높은(densest) 차원으로 정렬(예. 이미지를 픽셀x픽셀로 저장)됩니다.예를 들어, (2개의 2 x 2 이미지에 3개의 채널이 존재하는 경우) 전형적인(연속적인) NCHW 텐서의 저장 방식은 다음과 같습니다:.. figure:: /_static/img/classic_memory_format.png :alt: classic_memory_formatChannels last 메모리 형식은 데이터를 다르게 정렬합니다:.. figure:: /_static/img/channels_last_memory_format.png :alt: channels_last_memory_formatPyTorch는 기존의 스트라이드(strides) 구조를 사용함으로써 메모리 형식을 지원(하며, eager, JIT 및 TorchScript를 포함한기존의 모델들과 하위 호환성을 제공)합니다. 예를 들어, Channels last 형식에서 10x3x16x16 배치(batch)는 (768, 1, 48, 3)와같은 폭(strides)을 가지고 있게 됩니다. Channels last 메모리 형식은 오직 4D NCWH Tensors에서만 실행할 수 있습니다. 메모리 형식(Memory Format) API---------------------------------연속 메모리 형식과 channels last 메모리 형식 간에 텐서를 변환하는 방법은 다음과 같습니다. 전형적인 PyTorch의 연속적인 텐서(tensor)
###Code
import torch
N, C, H, W = 10, 3, 32, 32
x = torch.empty(N, C, H, W)
print(x.stride()) # 결과: (3072, 1024, 32, 1)
###Output
_____no_output_____
###Markdown
변환 연산자
###Code
x = x.to(memory_format=torch.channels_last)
print(x.shape) # 결과: (10, 3, 32, 32) 차원 순서는 보존함
print(x.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
연속적인 형식으로 되돌리기
###Code
x = x.to(memory_format=torch.contiguous_format)
print(x.stride()) # 결과: (3072, 1024, 32, 1)
###Output
_____no_output_____
###Markdown
다른 방식
###Code
x = x.contiguous(memory_format=torch.channels_last)
print(x.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
형식(format) 확인
###Code
print(x.is_contiguous(memory_format=torch.channels_last)) # 결과: True
###Output
_____no_output_____
###Markdown
``to`` 와 ``contiguous`` 에는 작은 차이(minor difference)가 있습니다.명시적으로 텐서(tensor)의 메모리 형식을 변환할 때는 ``to`` 를 사용하는 것을권장합니다.대부분의 경우 두 API는 동일하게 동작합니다. 하지만 ``C==1`` 이거나``H == 1 && W == 1`` 인 ``NCHW`` 4D 텐서의 특수한 경우에는 ``to`` 만이Channel last 메모리 형식으로 표현된 적절한 폭(stride)을 생성합니다.이는 위의 두가지 경우에 텐서의 메모리 형식이 모호하기 때문입니다.예를 들어, 크기가 ``N1HW`` 인 연속적인 텐서(contiguous tensor)는``연속적`` 이면서 Channel last 형식으로 메모리에 저장됩니다.따라서, 주어진 메모리 형식에 대해 이미 ``is_contiguous`` 로 간주되어``contiguous`` 호출은 동작하지 않게(no-op) 되어, 폭(stride)을 갱신하지않게 됩니다. 반면에, ``to`` 는 의도한 메모리 형식으로 적절하게 표현하기 위해크기가 1인 차원에서 의미있는 폭(stride)으로 재배열(restride)합니다.
###Code
special_x = torch.empty(4, 1, 4, 4)
print(special_x.is_contiguous(memory_format=torch.channels_last)) # Outputs: True
print(special_x.is_contiguous(memory_format=torch.contiguous_format)) # Outputs: True
###Output
_____no_output_____
###Markdown
명시적 치환(permutation) API인 ``permute`` 에서도 동일하게 적용됩니다.모호성이 발생할 수 있는 특별한 경우에, ``permute`` 는 의도한 메모리형식으로 전달되는 폭(stride)을 생성하는 것이 보장되지 않습니다.``to`` 로 명시적으로 메모리 형식을 지정하여 의도치 않은 동작을 피할것을 권장합니다.또한, 3개의 비-배치(non-batch) 차원이 모두 ``1`` 인 극단적인 경우 (``C==1 && H==1 && W==1``), 현재 구현은 텐서를 Channels last 메모리형식으로 표시할 수 없음을 알려드립니다. Channels last 방식으로 생성하기
###Code
x = torch.empty(N, C, H, W, memory_format=torch.channels_last)
print(x.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
``clone`` 은 메모리 형식을 보존합니다.
###Code
y = x.clone()
print(y.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
``to``, ``cuda``, ``float`` ... 등도 메모리 형식을 보존합니다.
###Code
if torch.cuda.is_available():
y = x.cuda()
print(y.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
``empty_like``, ``*_like`` 연산자도 메모리 형식을 보존합니다.
###Code
y = torch.empty_like(x)
print(y.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
Pointwise 연산자도 메모리 형식을 보존합니다.
###Code
z = x + y
print(z.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
Conv, Batchnorm 모듈은 Channels last를 지원합니다. (단, CudNN >=7.6 에서만 동작)합성곱(convolution) 모듈은 이진 p-wise 연산자(binary p-wise operator)와는 다르게Channels last가 주된 메모리 형식입니다. 모든 입력은 연속적인 메모리 형식이며,연산자는 연속된 메모리 형식으로 출력을 생성합니다. 그렇지 않으면, 출력은channels last 메모리 형식입니다.
###Code
if torch.backends.cudnn.version() >= 7603:
model = torch.nn.Conv2d(8, 4, 3).cuda().half()
model = model.to(memory_format=torch.channels_last) # 모듈 인자들은 Channels last로 변환이 필요합니다
input = torch.randint(1, 10, (2, 8, 4, 4), dtype=torch.float32, requires_grad=True)
input = input.to(device="cuda", memory_format=torch.channels_last, dtype=torch.float16)
out = model(input)
print(out.is_contiguous(memory_format=torch.channels_last)) # 결과: True
###Output
_____no_output_____
###Markdown
입력 텐서가 Channels last를 지원하지 않는 연산자를 만나면치환(permutation)이 커널에 자동으로 적용되어 입력 텐서를 연속적인 형식으로복원합니다. 이 경우 과부하가 발생하여 channel last 메모리 형식의 전파가중단됩니다. 그럼에도 불구하고, 올바른 출력은 보장됩니다. 성능 향상-------------------------------------------------------------------------------------------정밀도를 줄인(reduced precision ``torch.float16``) 상태에서 Tensor Cores를 지원하는 Nvidia의 하드웨어에서가장 의미심장한 성능 향상을 보였습니다. `AMP (Automated Mixed Precision)` 학습 스크립트를 활용하여연속적인 형식에 비해 Channels last 방식이 22% 이상의 성능 향승을 확인할 수 있었습니다.이 때, Nvidia가 제공하는 AMP를 사용했습니다. https://github.com/NVIDIA/apex``python main_amp.py -a resnet50 --b 200 --workers 16 --opt-level O2 ./data``
###Code
# opt_level = O2
# keep_batchnorm_fp32 = None <class 'NoneType'>
# loss_scale = None <class 'NoneType'>
# CUDNN VERSION: 7603
# => creating model 'resnet50'
# Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.
# Defaults for this optimization level are:
# enabled : True
# opt_level : O2
# cast_model_type : torch.float16
# patch_torch_functions : False
# keep_batchnorm_fp32 : True
# master_weights : True
# loss_scale : dynamic
# Processing user overrides (additional kwargs that are not None)...
# After processing overrides, optimization options are:
# enabled : True
# opt_level : O2
# cast_model_type : torch.float16
# patch_torch_functions : False
# keep_batchnorm_fp32 : True
# master_weights : True
# loss_scale : dynamic
# Epoch: [0][10/125] Time 0.866 (0.866) Speed 230.949 (230.949) Loss 0.6735125184 (0.6735) Prec@1 61.000 (61.000) Prec@5 100.000 (100.000)
# Epoch: [0][20/125] Time 0.259 (0.562) Speed 773.481 (355.693) Loss 0.6968704462 (0.6852) Prec@1 55.000 (58.000) Prec@5 100.000 (100.000)
# Epoch: [0][30/125] Time 0.258 (0.461) Speed 775.089 (433.965) Loss 0.7877287269 (0.7194) Prec@1 51.500 (55.833) Prec@5 100.000 (100.000)
# Epoch: [0][40/125] Time 0.259 (0.410) Speed 771.710 (487.281) Loss 0.8285319805 (0.7467) Prec@1 48.500 (54.000) Prec@5 100.000 (100.000)
# Epoch: [0][50/125] Time 0.260 (0.380) Speed 770.090 (525.908) Loss 0.7370464802 (0.7447) Prec@1 56.500 (54.500) Prec@5 100.000 (100.000)
# Epoch: [0][60/125] Time 0.258 (0.360) Speed 775.623 (555.728) Loss 0.7592862844 (0.7472) Prec@1 51.000 (53.917) Prec@5 100.000 (100.000)
# Epoch: [0][70/125] Time 0.258 (0.345) Speed 774.746 (579.115) Loss 1.9698858261 (0.9218) Prec@1 49.500 (53.286) Prec@5 100.000 (100.000)
# Epoch: [0][80/125] Time 0.260 (0.335) Speed 770.324 (597.659) Loss 2.2505953312 (1.0879) Prec@1 50.500 (52.938) Prec@5 100.000 (100.000)
###Output
_____no_output_____
###Markdown
``--channels-last true`` 인자를 전달하여 Channels last 형식으로 모델을 실행하면 22%의 성능 향상을 보입니다.``python main_amp.py -a resnet50 --b 200 --workers 16 --opt-level O2 --channels-last true ./data``
###Code
# opt_level = O2
# keep_batchnorm_fp32 = None <class 'NoneType'>
# loss_scale = None <class 'NoneType'>
#
# CUDNN VERSION: 7603
#
# => creating model 'resnet50'
# Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.
#
# Defaults for this optimization level are:
# enabled : True
# opt_level : O2
# cast_model_type : torch.float16
# patch_torch_functions : False
# keep_batchnorm_fp32 : True
# master_weights : True
# loss_scale : dynamic
# Processing user overrides (additional kwargs that are not None)...
# After processing overrides, optimization options are:
# enabled : True
# opt_level : O2
# cast_model_type : torch.float16
# patch_torch_functions : False
# keep_batchnorm_fp32 : True
# master_weights : True
# loss_scale : dynamic
#
# Epoch: [0][10/125] Time 0.767 (0.767) Speed 260.785 (260.785) Loss 0.7579724789 (0.7580) Prec@1 53.500 (53.500) Prec@5 100.000 (100.000)
# Epoch: [0][20/125] Time 0.198 (0.482) Speed 1012.135 (414.716) Loss 0.7007197738 (0.7293) Prec@1 49.000 (51.250) Prec@5 100.000 (100.000)
# Epoch: [0][30/125] Time 0.198 (0.387) Speed 1010.977 (516.198) Loss 0.7113101482 (0.7233) Prec@1 55.500 (52.667) Prec@5 100.000 (100.000)
# Epoch: [0][40/125] Time 0.197 (0.340) Speed 1013.023 (588.333) Loss 0.8943189979 (0.7661) Prec@1 54.000 (53.000) Prec@5 100.000 (100.000)
# Epoch: [0][50/125] Time 0.198 (0.312) Speed 1010.541 (641.977) Loss 1.7113249302 (0.9551) Prec@1 51.000 (52.600) Prec@5 100.000 (100.000)
# Epoch: [0][60/125] Time 0.198 (0.293) Speed 1011.163 (683.574) Loss 5.8537774086 (1.7716) Prec@1 50.500 (52.250) Prec@5 100.000 (100.000)
# Epoch: [0][70/125] Time 0.198 (0.279) Speed 1011.453 (716.767) Loss 5.7595844269 (2.3413) Prec@1 46.500 (51.429) Prec@5 100.000 (100.000)
# Epoch: [0][80/125] Time 0.198 (0.269) Speed 1011.827 (743.883) Loss 2.8196096420 (2.4011) Prec@1 47.500 (50.938) Prec@5 100.000 (100.000)
###Output
_____no_output_____
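###Markdown
 The logs above come from Nvidia's apex example script. As an illustrative sketch only (not part of the original tutorial), the cell below shows one way the same idea - channels last weights and inputs combined with mixed precision - can be written with the built-in ``torch.cuda.amp`` API; the tiny model, the random data and the hyper-parameters are placeholders.
###Code
# Minimal sketch: a toy convolution and random tensors stand in for a real model and dataset.
if torch.cuda.is_available():
    model = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1).cuda()
    model = model.to(memory_format=torch.channels_last)        # weights in channels last
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = torch.nn.MSELoss()
    scaler = torch.cuda.amp.GradScaler()
    for step in range(3):
        input = torch.randn(8, 3, 32, 32)
        input = input.to(device="cuda", memory_format=torch.channels_last)
        target = torch.randn(8, 8, 32, 32, device="cuda")
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():                        # mixed-precision forward pass
            loss = criterion(model(input), target)
        scaler.scale(loss).backward()                          # scaled backward pass
        scaler.step(optimizer)
        scaler.update()
###Output
_____no_output_____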
###Markdown
아래 목록의 모델들은 Channels last 형식을 전적으로 지원(full support)하며 Volta 장비에서 8%-35%의 성능 향상을 보입니다:``alexnet``, ``mnasnet0_5``, ``mnasnet0_75``, ``mnasnet1_0``, ``mnasnet1_3``, ``mobilenet_v2``, ``resnet101``, ``resnet152``, ``resnet18``, ``resnet34``, ``resnet50``, ``resnext50_32x4d``, ``shufflenet_v2_x0_5``, ``shufflenet_v2_x1_0``, ``shufflenet_v2_x1_5``, ``shufflenet_v2_x2_0``, ``squeezenet1_0``, ``squeezenet1_1``, ``vgg11``, ``vgg11_bn``, ``vgg13``, ``vgg13_bn``, ``vgg16``, ``vgg16_bn``, ``vgg19``, ``vgg19_bn``, ``wide_resnet101_2``, ``wide_resnet50_2`` 기존 모델들 변환하기--------------------------Channels last 지원은 기존 모델이 무엇이냐에 따라 제한되지 않습니다.어떠한 모델도 Channels last로 변환할 수 있으며입력(또는 특정 가중치)의 형식만 맞춰주면 (신경망) 그래프를 통해 바로 전파(propagate)할 수 있습니다.
###Code
# 모델을 초기화한(또는 불러온) 이후, 한 번 실행이 필요합니다.
model = model.to(memory_format=torch.channels_last) # 원하는 모델로 교체하기
# 모든 입력에 대해서 실행이 필요합니다.
input = input.to(memory_format=torch.channels_last) # 원하는 입력으로 교체하기
output = model(input)
###Output
_____no_output_____
###Markdown
그러나, 모든 연산자들이 Channels last를 지원하도록 완전히 바뀐 것은 아닙니다(일반적으로는 연속적인 출력을 대신 반환합니다).위의 예시들에서 Channels last를 지원하지 않는 계층(layer)은 메모리 형식 전파를 멈추게 됩니다.그럼에도 불구하고, 모델을 channels last 형식으로 변환했으므로, Channels last 메모리 형식으로 4차원의 가중치를 갖는각 합성곱 계층(convolution layer)에서는 Channels last 형식으로 복원되고 더 빠른 커널(faster kernel)의 이점을 누릴 수 있게 됩니다.하지만 Channels last를 지원하지 않는 연산자들은 치환(permutation)에 의해 과부하가 발생하게 됩니다.선택적으로, 변환된 모델의 성능을 향상시키고 싶은 경우 모델의 연산자들 중 channel last를 지원하지 않는 연산자를 조사하고 식별할 수 있습니다.이는 Channel Last 지원 연산자 목록 https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support 에서 사용한 연산자들이 존재하는지 확인하거나,eager 실행 모드에서 메모리 형식 검사를 도입하고 모델을 실행해야 합니다.아래 코드에서, 연산자들의 출력이 입력의 메모리 형식과 일치하지 않으면 예외(exception)를 발생시킵니다.
###Code
def contains_cl(args):
for t in args:
if isinstance(t, torch.Tensor):
if t.is_contiguous(memory_format=torch.channels_last) and not t.is_contiguous():
return True
elif isinstance(t, list) or isinstance(t, tuple):
if contains_cl(list(t)):
return True
return False
def print_inputs(args, indent=''):
for t in args:
if isinstance(t, torch.Tensor):
print(indent, t.stride(), t.shape, t.device, t.dtype)
elif isinstance(t, list) or isinstance(t, tuple):
print(indent, type(t))
print_inputs(list(t), indent=indent + ' ')
else:
print(indent, t)
def check_wrapper(fn):
name = fn.__name__
def check_cl(*args, **kwargs):
was_cl = contains_cl(args)
try:
result = fn(*args, **kwargs)
except Exception as e:
print("`{}` inputs are:".format(name))
print_inputs(args)
print('-------------------')
raise e
failed = False
if was_cl:
if isinstance(result, torch.Tensor):
if result.dim() == 4 and not result.is_contiguous(memory_format=torch.channels_last):
print("`{}` got channels_last input, but output is not channels_last:".format(name),
result.shape, result.stride(), result.device, result.dtype)
failed = True
if failed and True:
print("`{}` inputs are:".format(name))
print_inputs(args)
raise Exception(
'Operator `{}` lost channels_last property'.format(name))
return result
return check_cl
old_attrs = dict()
def attribute(m):
old_attrs[m] = dict()
for i in dir(m):
e = getattr(m, i)
exclude_functions = ['is_cuda', 'has_names', 'numel',
'stride', 'Tensor', 'is_contiguous', '__class__']
if i not in exclude_functions and not i.startswith('_') and '__call__' in dir(e):
try:
old_attrs[m][i] = e
setattr(m, i, check_wrapper(e))
except Exception as e:
print(i)
print(e)
attribute(torch.Tensor)
attribute(torch.nn.functional)
attribute(torch)
###Output
_____no_output_____
###Markdown
만약 Channels last 텐서를 지원하지 않는 연산자를 발견하였고, 기여하기를 원한다면다음 개발 문서를 참고해주세요.https://github.com/pytorch/pytorch/wiki/Writing-memory-format-aware-operators 아래 코드는 torch의 속성(attributes)를 복원합니다.
###Code
for (m, attrs) in old_attrs.items():
for (k,v) in attrs.items():
setattr(m, k, v)
###Output
_____no_output_____
###Markdown
(베타) PyTorch를 사용한 Channels Last 메모리 형식*********************************************************Author**: `Vitaly Fedyunin `_**번역**: `Choi Yoonjeong `_Channels last가 무엇인가요----------------------------Channels last 메모리 형식(memory format)은 차원 순서를 유지하면서 메모리 상의 NCHW 텐서(tensor)를 정렬하는 또 다른 방식입니다.Channels last 텐서는 채널(Channel)이 가장 밀도가 높은(densest) 차원으로 정렬(예. 이미지를 픽셀x픽셀로 저장)됩니다.예를 들어, (2개의 2 x 2 이미지에 3개의 채널이 존재하는 경우) 전형적인(연속적인) NCHW 텐서의 저장 방식은 다음과 같습니다:.. figure:: /_static/img/classic_memory_format.png :alt: classic_memory_formatChannels last 메모리 형식은 데이터를 다르게 정렬합니다:.. figure:: /_static/img/channels_last_memory_format.png :alt: channels_last_memory_formatPyTorch는 기존의 스트라이드(strides) 구조를 사용함으로써 메모리 형식을 지원(하며, eager, JIT 및 TorchScript를 포함한기존의 모델들과 하위 호환성을 제공)합니다. 예를 들어, Channels last 형식에서 10x3x16x16 배치(batch)는 (768, 1, 48, 3)와같은 폭(strides)을 가지고 있게 됩니다. Channels last 메모리 형식은 오직 4D NCWH Tensors에서만 실행할 수 있습니다. 메모리 형식(Memory Format) API---------------------------------연속 메모리 형식과 channels last 메모리 형식 간에 텐서를 변환하는 방법은 다음과 같습니다. 전형적인 PyTorch의 연속적인 텐서(tensor)
###Code
import torch
N, C, H, W = 10, 3, 32, 32
x = torch.empty(N, C, H, W)
print(x.stride()) # 결과: (3072, 1024, 32, 1)
###Output
_____no_output_____
###Markdown
변환 연산자
###Code
x = x.to(memory_format=torch.channels_last)
print(x.shape) # 결과: (10, 3, 32, 32) 차원 순서는 보존함
print(x.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
연속적인 형식으로 되돌리기
###Code
x = x.to(memory_format=torch.contiguous_format)
print(x.stride()) # 결과: (3072, 1024, 32, 1)
###Output
_____no_output_____
###Markdown
다른 방식
###Code
x = x.contiguous(memory_format=torch.channels_last)
print(x.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
형식(format) 확인
###Code
print(x.is_contiguous(memory_format=torch.channels_last)) # 결과: True
###Output
_____no_output_____
###Markdown
``to`` 와 ``contiguous`` 에는 작은 차이(minor difference)가 있습니다.명시적으로 텐서(tensor)의 메모리 형식을 변환할 때는 ``to`` 를 사용하는 것을권장합니다.대부분의 경우 두 API는 동일하게 동작합니다. 하지만 ``C==1`` 이거나``H == 1 && W == 1`` 인 ``NCHW`` 4D 텐서의 특수한 경우에는 ``to`` 만이Channel last 메모리 형식으로 표현된 적절한 폭(stride)을 생성합니다.이는 위의 두가지 경우에 텐서의 메모리 형식이 모호하기 때문입니다.예를 들어, 크기가 ``N1HW`` 인 연속적인 텐서(contiguous tensor)는``연속적`` 이면서 Channel last 형식으로 메모리에 저장됩니다.따라서, 주어진 메모리 형식에 대해 이미 ``is_contiguous`` 로 간주되어``contiguous`` 호출은 동작하지 않게(no-op) 되어, 폭(stride)을 갱신하지않게 됩니다. 반면에, ``to`` 는 의도한 메모리 형식으로 적절하게 표현하기 위해크기가 1인 차원에서 의미있는 폭(stride)으로 재배열(restride)합니다.
###Code
special_x = torch.empty(4, 1, 4, 4)
print(special_x.is_contiguous(memory_format=torch.channels_last)) # Outputs: True
print(special_x.is_contiguous(memory_format=torch.contiguous_format)) # Outputs: True
###Output
_____no_output_____
###Markdown
(베타) PyTorch를 사용한 Channels Last 메모리 형식*********************************************************Author**: `Vitaly Fedyunin `_**번역**: `Choi Yoonjeong `_Channels last가 무엇인가요----------------------------Channels last 메모리 형식(memory format)은 차원 순서를 유지하면서 메모리 상의 NCHW 텐서(tensor)를 정렬하는 또 다른 방식입니다.Channels last 텐서는 채널(Channel)이 가장 밀도가 높은(densest) 차원으로 정렬(예. 이미지를 픽셀x픽셀로 저장)됩니다.예를 들어, (2개의 4 x 4 이미지에 3개의 채널이 존재하는 경우) 전형적인(연속적인) NCHW 텐서의 저장 방식은 다음과 같습니다:.. figure:: /_static/img/classic_memory_format.png :alt: classic_memory_formatChannels last 메모리 형식은 데이터를 다르게 정렬합니다:.. figure:: /_static/img/channels_last_memory_format.png :alt: channels_last_memory_formatPyTorch는 기존의 스트라이드(strides) 구조를 사용함으로써 메모리 형식을 지원(하며, eager, JIT 및 TorchScript를 포함한기존의 모델들과 하위 호환성을 제공)합니다. 예를 들어, Channels last 형식에서 10x3x16x16 배치(batch)는 (768, 1, 48, 3)와같은 폭(strides)을 가지고 있게 됩니다. Channels last 메모리 형식은 오직 4D NCWH Tensors에서만 실행할 수 있습니다. 메모리 형식(Memory Format) API---------------------------------연속 메모리 형식과 channels last 메모리 형식 간에 텐서를 변환하는 방법은 다음과 같습니다. 전형적인 PyTorch의 연속적인 텐서(tensor)
###Code
import torch
N, C, H, W = 10, 3, 32, 32
x = torch.empty(N, C, H, W)
print(x.stride()) # 결과: (3072, 1024, 32, 1)
###Output
_____no_output_____
###Markdown
변환 연산자
###Code
x = x.to(memory_format=torch.channels_last)
print(x.shape) # 결과: (10, 3, 32, 32) 차원 순서는 보존함
print(x.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
연속적인 형식으로 되돌리기
###Code
x = x.to(memory_format=torch.contiguous_format)
print(x.stride()) # 결과: (3072, 1024, 32, 1)
###Output
_____no_output_____
###Markdown
다른 방식
###Code
x = x.contiguous(memory_format=torch.channels_last)
print(x.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
형식(format) 확인
###Code
print(x.is_contiguous(memory_format=torch.channels_last)) # 결과: True
###Output
_____no_output_____
###Markdown
``to`` 와 ``contiguous`` 에는 작은 차이(minor difference)가 있습니다.명시적으로 텐서(tensor)의 메모리 형식을 변환할 때는 ``to`` 를 사용하는 것을권장합니다.대부분의 경우 두 API는 동일하게 동작합니다. 하지만 ``C==1`` 이거나``H == 1 && W == 1`` 인 ``NCHW`` 4D 텐서의 특수한 경우에는 ``to`` 만이Channel last 메모리 형식으로 표현된 적절한 폭(stride)을 생성합니다.이는 위의 두가지 경우에 텐서의 메모리 형식이 모호하기 때문입니다.예를 들어, 크기가 ``N1HW`` 인 연속적인 텐서(contiguous tensor)는``연속적`` 이면서 Channel last 형식으로 메모리에 저장됩니다.따라서, 주어진 메모리 형식에 대해 이미 ``is_contiguous`` 로 간주되어``contiguous`` 호출은 동작하지 않게(no-op) 되어, 폭(stride)을 갱신하지않게 됩니다. 반면에, ``to`` 는 의도한 메모리 형식으로 적절하게 표현하기 위해크기가 1인 차원에서 의미있는 폭(stride)으로 재배열(restride)합니다.
###Code
special_x = torch.empty(4, 1, 4, 4)
print(special_x.is_contiguous(memory_format=torch.channels_last)) # Outputs: True
print(special_x.is_contiguous(memory_format=torch.contiguous_format)) # Outputs: True
###Output
_____no_output_____
###Markdown
명시적 치환(permutation) API인 ``permute`` 에서도 동일하게 적용됩니다.모호성이 발생할 수 있는 특별한 경우에, ``permute`` 는 의도한 메모리형식으로 전달되는 폭(stride)을 생성하는 것이 보장되지 않습니다.``to`` 로 명시적으로 메모리 형식을 지정하여 의도치 않은 동작을 피할것을 권장합니다.또한, 3개의 비-배치(non-batch) 차원이 모두 ``1`` 인 극단적인 경우 (``C==1 && H==1 && W==1``), 현재 구현은 텐서를 Channels last 메모리형식으로 표시할 수 없음을 알려드립니다. Channels last 방식으로 생성하기
###Code
x = torch.empty(N, C, H, W, memory_format=torch.channels_last)
print(x.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
``clone`` 은 메모리 형식을 보존합니다.
###Code
y = x.clone()
print(y.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
``to``, ``cuda``, ``float`` ... 등도 메모리 형식을 보존합니다.
###Code
if torch.cuda.is_available():
y = x.cuda()
print(y.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
``empty_like``, ``*_like`` 연산자도 메모리 형식을 보존합니다.
###Code
y = torch.empty_like(x)
print(y.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
Pointwise 연산자도 메모리 형식을 보존합니다.
###Code
z = x + y
print(z.stride()) # 결과: (3072, 1, 96, 3)
###Output
_____no_output_____
###Markdown
Conv, Batchnorm 모듈은 Channels last를 지원합니다. (단, CudNN >=7.6 에서만 동작)합성곱(convolution) 모듈은 이진 p-wise 연산자(binary p-wise operator)와는 다르게Channels last가 주된 메모리 형식입니다. 모든 입력은 연속적인 메모리 형식이며,연산자는 연속된 메모리 형식으로 출력을 생성합니다. 그렇지 않으면, 출력은channels last 메모리 형식입니다.
###Code
if torch.backends.cudnn.version() >= 7603:
model = torch.nn.Conv2d(8, 4, 3).cuda().half()
model = model.to(memory_format=torch.channels_last) # 모듈 인자들은 Channels last로 변환이 필요합니다
input = torch.randint(1, 10, (2, 8, 4, 4), dtype=torch.float32, requires_grad=True)
input = input.to(device="cuda", memory_format=torch.channels_last, dtype=torch.float16)
out = model(input)
print(out.is_contiguous(memory_format=torch.channels_last)) # 결과: True
###Output
_____no_output_____
###Markdown
입력 텐서가 Channels last를 지원하지 않는 연산자를 만나면치환(permutation)이 커널에 자동으로 적용되어 입력 텐서를 연속적인 형식으로복원합니다. 이 경우 과부하가 발생하여 channel last 메모리 형식의 전파가중단됩니다. 그럼에도 불구하고, 올바른 출력은 보장됩니다. 성능 향상-------------------------------------------------------------------------------------------정밀도를 줄인(reduced precision ``torch.float16``) 상태에서 Tensor Cores를 지원하는 Nvidia의 하드웨어에서가장 의미심장한 성능 향상을 보였습니다. `AMP (Automated Mixed Precision)` 학습 스크립트를 활용하여연속적인 형식에 비해 Channels last 방식이 22% 이상의 성능 향승을 확인할 수 있었습니다.이 때, Nvidia가 제공하는 AMP를 사용했습니다. https://github.com/NVIDIA/apex``python main_amp.py -a resnet50 --b 200 --workers 16 --opt-level O2 ./data``
###Code
# opt_level = O2
# keep_batchnorm_fp32 = None <class 'NoneType'>
# loss_scale = None <class 'NoneType'>
# CUDNN VERSION: 7603
# => creating model 'resnet50'
# Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.
# Defaults for this optimization level are:
# enabled : True
# opt_level : O2
# cast_model_type : torch.float16
# patch_torch_functions : False
# keep_batchnorm_fp32 : True
# master_weights : True
# loss_scale : dynamic
# Processing user overrides (additional kwargs that are not None)...
# After processing overrides, optimization options are:
# enabled : True
# opt_level : O2
# cast_model_type : torch.float16
# patch_torch_functions : False
# keep_batchnorm_fp32 : True
# master_weights : True
# loss_scale : dynamic
# Epoch: [0][10/125] Time 0.866 (0.866) Speed 230.949 (230.949) Loss 0.6735125184 (0.6735) Prec@1 61.000 (61.000) Prec@5 100.000 (100.000)
# Epoch: [0][20/125] Time 0.259 (0.562) Speed 773.481 (355.693) Loss 0.6968704462 (0.6852) Prec@1 55.000 (58.000) Prec@5 100.000 (100.000)
# Epoch: [0][30/125] Time 0.258 (0.461) Speed 775.089 (433.965) Loss 0.7877287269 (0.7194) Prec@1 51.500 (55.833) Prec@5 100.000 (100.000)
# Epoch: [0][40/125] Time 0.259 (0.410) Speed 771.710 (487.281) Loss 0.8285319805 (0.7467) Prec@1 48.500 (54.000) Prec@5 100.000 (100.000)
# Epoch: [0][50/125] Time 0.260 (0.380) Speed 770.090 (525.908) Loss 0.7370464802 (0.7447) Prec@1 56.500 (54.500) Prec@5 100.000 (100.000)
# Epoch: [0][60/125] Time 0.258 (0.360) Speed 775.623 (555.728) Loss 0.7592862844 (0.7472) Prec@1 51.000 (53.917) Prec@5 100.000 (100.000)
# Epoch: [0][70/125] Time 0.258 (0.345) Speed 774.746 (579.115) Loss 1.9698858261 (0.9218) Prec@1 49.500 (53.286) Prec@5 100.000 (100.000)
# Epoch: [0][80/125] Time 0.260 (0.335) Speed 770.324 (597.659) Loss 2.2505953312 (1.0879) Prec@1 50.500 (52.938) Prec@5 100.000 (100.000)
###Output
_____no_output_____
###Markdown
``--channels-last true`` 인자를 전달하여 Channels last 형식으로 모델을 실행하면 22%의 성능 향상을 보입니다.``python main_amp.py -a resnet50 --b 200 --workers 16 --opt-level O2 --channels-last true ./data``
###Code
# opt_level = O2
# keep_batchnorm_fp32 = None <class 'NoneType'>
# loss_scale = None <class 'NoneType'>
#
# CUDNN VERSION: 7603
#
# => creating model 'resnet50'
# Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.
#
# Defaults for this optimization level are:
# enabled : True
# opt_level : O2
# cast_model_type : torch.float16
# patch_torch_functions : False
# keep_batchnorm_fp32 : True
# master_weights : True
# loss_scale : dynamic
# Processing user overrides (additional kwargs that are not None)...
# After processing overrides, optimization options are:
# enabled : True
# opt_level : O2
# cast_model_type : torch.float16
# patch_torch_functions : False
# keep_batchnorm_fp32 : True
# master_weights : True
# loss_scale : dynamic
#
# Epoch: [0][10/125] Time 0.767 (0.767) Speed 260.785 (260.785) Loss 0.7579724789 (0.7580) Prec@1 53.500 (53.500) Prec@5 100.000 (100.000)
# Epoch: [0][20/125] Time 0.198 (0.482) Speed 1012.135 (414.716) Loss 0.7007197738 (0.7293) Prec@1 49.000 (51.250) Prec@5 100.000 (100.000)
# Epoch: [0][30/125] Time 0.198 (0.387) Speed 1010.977 (516.198) Loss 0.7113101482 (0.7233) Prec@1 55.500 (52.667) Prec@5 100.000 (100.000)
# Epoch: [0][40/125] Time 0.197 (0.340) Speed 1013.023 (588.333) Loss 0.8943189979 (0.7661) Prec@1 54.000 (53.000) Prec@5 100.000 (100.000)
# Epoch: [0][50/125] Time 0.198 (0.312) Speed 1010.541 (641.977) Loss 1.7113249302 (0.9551) Prec@1 51.000 (52.600) Prec@5 100.000 (100.000)
# Epoch: [0][60/125] Time 0.198 (0.293) Speed 1011.163 (683.574) Loss 5.8537774086 (1.7716) Prec@1 50.500 (52.250) Prec@5 100.000 (100.000)
# Epoch: [0][70/125] Time 0.198 (0.279) Speed 1011.453 (716.767) Loss 5.7595844269 (2.3413) Prec@1 46.500 (51.429) Prec@5 100.000 (100.000)
# Epoch: [0][80/125] Time 0.198 (0.269) Speed 1011.827 (743.883) Loss 2.8196096420 (2.4011) Prec@1 47.500 (50.938) Prec@5 100.000 (100.000)
###Output
_____no_output_____
###Markdown
아래 목록의 모델들은 Channels last 형식을 전적으로 지원(full support)하며 Volta 장비에서 8%-35%의 성능 향상을 보입니다:``alexnet``, ``mnasnet0_5``, ``mnasnet0_75``, ``mnasnet1_0``, ``mnasnet1_3``, ``mobilenet_v2``, ``resnet101``, ``resnet152``, ``resnet18``, ``resnet34``, ``resnet50``, ``resnext50_32x4d``, ``shufflenet_v2_x0_5``, ``shufflenet_v2_x1_0``, ``shufflenet_v2_x1_5``, ``shufflenet_v2_x2_0``, ``squeezenet1_0``, ``squeezenet1_1``, ``vgg11``, ``vgg11_bn``, ``vgg13``, ``vgg13_bn``, ``vgg16``, ``vgg16_bn``, ``vgg19``, ``vgg19_bn``, ``wide_resnet101_2``, ``wide_resnet50_2`` 기존 모델들 변환하기--------------------------Channels last 지원은 기존 모델이 무엇이냐에 따라 제한되지 않습니다.어떠한 모델도 Channels last로 변환할 수 있으며입력(또는 특정 가중치)의 형식만 맞춰주면 (신경망) 그래프를 통해 바로 전파(propagate)할 수 있습니다.
###Code
# 모델을 초기화한(또는 불러온) 이후, 한 번 실행이 필요합니다.
model = model.to(memory_format=torch.channels_last) # 원하는 모델로 교체하기
# 모든 입력에 대해서 실행이 필요합니다.
input = input.to(memory_format=torch.channels_last) # 원하는 입력으로 교체하기
output = model(input)
###Output
_____no_output_____
###Markdown
그러나, 모든 연산자들이 Channels last를 지원하도록 완전히 바뀐 것은 아닙니다(일반적으로는 연속적인 출력을 대신 반환합니다).위의 예시들에서 Channels last를 지원하지 않는 계층(layer)은 메모리 형식 전파를 멈추게 됩니다.그럼에도 불구하고, 모델을 channels last 형식으로 변환했으므로, Channels last 메모리 형식으로 4차원의 가중치를 갖는각 합성곱 계층(convolution layer)에서는 Channels last 형식으로 복원되고 더 빠른 커널(faster kernel)의 이점을 누릴 수 있게 됩니다.하지만 Channels last를 지원하지 않는 연산자들은 치환(permutation)에 의해 과부하가 발생하게 됩니다.선택적으로, 변환된 모델의 성능을 향상시키고 싶은 경우 모델의 연산자들 중 channel last를 지원하지 않는 연산자를 조사하고 식별할 수 있습니다.이는 Channel Last 지원 연산자 목록 https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support 에서 사용한 연산자들이 존재하는지 확인하거나,eager 실행 모드에서 메모리 형식 검사를 도입하고 모델을 실행해야 합니다.아래 코드에서, 연산자들의 출력이 입력의 메모리 형식과 일치하지 않으면 예외(exception)를 발생시킵니다.
###Code
def contains_cl(args):
for t in args:
if isinstance(t, torch.Tensor):
if t.is_contiguous(memory_format=torch.channels_last) and not t.is_contiguous():
return True
elif isinstance(t, list) or isinstance(t, tuple):
if contains_cl(list(t)):
return True
return False
def print_inputs(args, indent=""):
for t in args:
if isinstance(t, torch.Tensor):
print(indent, t.stride(), t.shape, t.device, t.dtype)
elif isinstance(t, list) or isinstance(t, tuple):
print(indent, type(t))
print_inputs(list(t), indent=indent + " ")
else:
print(indent, t)
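# check_wrapper: wraps a torch operator so that, when it is called with channels_last inputs,
# it raises an exception if a 4D output has silently lost the channels_last layout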
def check_wrapper(fn):
name = fn.__name__
def check_cl(*args, **kwargs):
was_cl = contains_cl(args)
try:
result = fn(*args, **kwargs)
except Exception as e:
print("`{}` inputs are:".format(name))
print_inputs(args)
print("-------------------")
raise e
failed = False
if was_cl:
if isinstance(result, torch.Tensor):
if result.dim() == 4 and not result.is_contiguous(memory_format=torch.channels_last):
print(
"`{}` got channels_last input, but output is not channels_last:".format(name),
result.shape,
result.stride(),
result.device,
result.dtype,
)
failed = True
if failed and True:
print("`{}` inputs are:".format(name))
print_inputs(args)
raise Exception("Operator `{}` lost channels_last property".format(name))
return result
return check_cl
old_attrs = dict()
def attribute(m):
old_attrs[m] = dict()
for i in dir(m):
e = getattr(m, i)
exclude_functions = ["is_cuda", "has_names", "numel", "stride", "Tensor", "is_contiguous", "__class__"]
if i not in exclude_functions and not i.startswith("_") and "__call__" in dir(e):
try:
old_attrs[m][i] = e
setattr(m, i, check_wrapper(e))
except Exception as e:
print(i)
print(e)
attribute(torch.Tensor)
attribute(torch.nn.functional)
attribute(torch)
###Output
_____no_output_____
###Markdown
만약 Channels last 텐서를 지원하지 않는 연산자를 발견하였고, 기여하기를 원한다면다음 개발 문서를 참고해주세요.https://github.com/pytorch/pytorch/wiki/Writing-memory-format-aware-operators 아래 코드는 torch의 속성(attributes)를 복원합니다.
###Code
for (m, attrs) in old_attrs.items():
for (k,v) in attrs.items():
setattr(m, k, v)
###Output
_____no_output_____ |
Join/save_all_joins.ipynb | ###Markdown
Install Earth Engine API Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Load a primary 'collection': Landsat imagery.
primary = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterDate('2014-04-01', '2014-06-01') \
.filterBounds(ee.Geometry.Point(-122.092, 37.42))
# Load a secondary 'collection': MODIS imagery.
modSecondary = ee.ImageCollection('MODIS/006/MOD09GA') \
.filterDate('2014-03-01', '2014-07-01')
# Define an allowable time difference: two days in milliseconds.
twoDaysMillis = 2 * 24 * 60 * 60 * 1000
# Create a time filter to define a match as overlapping timestamps.
timeFilter = ee.Filter.Or(
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_start',
'rightField': 'system:time_end'
}),
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_end',
'rightField': 'system:time_start'
})
)
# Define the join.
saveAllJoin = ee.Join.saveAll(**{
'matchesKey': 'terra',
'ordering': 'system:time_start',
'ascending': True
})
# Apply the join.
landsatModis = saveAllJoin.apply(primary, modSecondary, timeFilter)
# Display the result.
print('Join.saveAll:', landsatModis.getInfo())
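
# Illustrative sketch (not from the original notebook): one way to use the saved matches.
# Each joined image stores its matching MODIS images in the 'terra' property set above;
# here we count the matches and attach the count to each image as a new property.
def count_matches(image):
    matches = ee.ImageCollection.fromImages(image.get('terra'))
    return image.set('terra_count', matches.size())

landsatModisCounts = ee.ImageCollection(landsatModis).map(count_matches)
print('Match count of first joined image:',
      ee.Image(landsatModisCounts.first()).get('terra_count').getInfo())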
###Output
Join.saveAll: {'type': 'ImageCollection', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B9', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}}], 'id': 'LANDSAT/LC08/C01/T1_TOA', 'version': 1581772792877035, 'properties': {'system:visualization_0_min': '0.0', 'type_name': 'ImageCollection', 'visualization_1_bands': 'B5,B4,B3', 'thumb': 'https://mw1.google.com/ges/dd/images/LANDSAT_TOA_thumb.png', 'visualization_1_max': '30000.0', 'description': '<p>Landsat 8 Collection 1 Tier 1\n calibrated top-of-atmosphere (TOA) reflectance.\n Calibration coefficients are extracted from the image metadata. See<a href="http://www.sciencedirect.com/science/article/pii/S0034425709000169">\n Chander et al. (2009)</a> for details on the TOA computation.</p></p>\n<p><b>Revisit Interval</b>\n<br>\n 16 days\n</p>\n<p><b>Bands</b>\n<table class="eecat">\n<tr>\n<th scope="col">Name</th>\n<th scope="col">Resolution</th>\n<th scope="col">Wavelength</th>\n<th scope="col">Description</th>\n</tr>\n<tr>\n<td>B1</td>\n<td>\n 30 meters\n</td>\n<td>0.43 - 0.45 µm</td>\n<td><p>Coastal aerosol</p></td>\n</tr>\n<tr>\n<td>B2</td>\n<td>\n 30 meters\n</td>\n<td>0.45 - 0.51 µm</td>\n<td><p>Blue</p></td>\n</tr>\n<tr>\n<td>B3</td>\n<td>\n 30 meters\n</td>\n<td>0.53 - 0.59 µm</td>\n<td><p>Green</p></td>\n</tr>\n<tr>\n<td>B4</td>\n<td>\n 30 meters\n</td>\n<td>0.64 - 0.67 µm</td>\n<td><p>Red</p></td>\n</tr>\n<tr>\n<td>B5</td>\n<td>\n 30 meters\n</td>\n<td>0.85 - 0.88 µm</td>\n<td><p>Near infrared</p></td>\n</tr>\n<tr>\n<td>B6</td>\n<td>\n 30 meters\n</td>\n<td>1.57 - 1.65 µm</td>\n<td><p>Shortwave infrared 1</p></td>\n</tr>\n<tr>\n<td>B7</td>\n<td>\n 30 meters\n</td>\n<td>2.11 - 2.29 µm</td>\n<td><p>Shortwave infrared 2</p></td>\n</tr>\n<tr>\n<td>B8</td>\n<td>\n 15 meters\n</td>\n<td>0.52 - 0.90 µm</td>\n<td><p>Band 8 Panchromatic</p></td>\n</tr>\n<tr>\n<td>B9</td>\n<td>\n 15 meters\n</td>\n<td>1.36 - 1.38 µm</td>\n<td><p>Cirrus</p></td>\n</tr>\n<tr>\n<td>B10</td>\n<td>\n 30 meters\n</td>\n<td>10.60 - 11.19 µm</td>\n<td><p>Thermal infrared 1, resampled from 100m to 30m</p></td>\n</tr>\n<tr>\n<td>B11</td>\n<td>\n 30 meters\n</td>\n<td>11.50 - 12.51 µm</td>\n<td><p>Thermal infrared 2, resampled from 100m to 30m</p></td>\n</tr>\n<tr>\n<td>BQA</td>\n<td>\n</td>\n<td></td>\n<td><p>Landsat Collection 1 QA Bitmask (<a href="https://www.usgs.gov/land-resources/nli/landsat/landsat-collection-1-level-1-quality-assessment-band">See Landsat QA page</a>)</p></td>\n</tr>\n<tr>\n<td colspan=100>\n Bitmask for BQA\n<ul>\n<li>\n Bit 0: Designated Fill\n<ul>\n<li>0: No</li>\n<li>1: Yes</li>\n</ul>\n</li>\n<li>\n Bit 1: Terrain Occlusion\n<ul>\n<li>0: No</li>\n<li>1: Yes</li>\n</ul>\n</li>\n<li>\n Bits 2-3: Radiometric Saturation\n<ul>\n<li>0: No bands contain 
saturation</li>\n<li>1: 1-2 bands contain saturation</li>\n<li>2: 3-4 bands contain saturation</li>\n<li>3: 5 or more bands contain saturation</li>\n</ul>\n</li>\n<li>\n Bit 4: Cloud\n<ul>\n<li>0: No</li>\n<li>1: Yes</li>\n</ul>\n</li>\n<li>\n Bits 5-6: Cloud Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n<li>\n Bits 7-8: Cloud Shadow Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n<li>\n Bits 9-10: Snow / Ice Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n<li>\n Bits 11-12: Cirrus Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n</ul>\n</td>\n</tr>\n</table>\n<p><b>Image Properties</b>\n<table class="eecat">\n<tr>\n<th scope="col">Name</th>\n<th scope="col">Type</th>\n<th scope="col">Description</th>\n</tr>\n<tr>\n<td>BPF_NAME_OLI</td>\n<td>STRING</td>\n<td><p>The file name for the Bias Parameter File (BPF) used to generate the product, if applicable. This only applies to products that contain OLI bands.</p></td>\n</tr>\n<tr>\n<td>BPF_NAME_TIRS</td>\n<td>STRING</td>\n<td><p>The file name for the Bias Parameter File (BPF) used to generate the product, if applicable. This only applies to products that contain TIRS bands.</p></td>\n</tr>\n<tr>\n<td>CLOUD_COVER</td>\n<td>DOUBLE</td>\n<td><p>Percentage cloud cover, -1 = not calculated.</p></td>\n</tr>\n<tr>\n<td>CLOUD_COVER_LAND</td>\n<td>DOUBLE</td>\n<td><p>Percentage cloud cover over land, -1 = not calculated.</p></td>\n</tr>\n<tr>\n<td>COLLECTION_CATEGORY</td>\n<td>STRING</td>\n<td><p>Tier of scene. (T1 or T2)</p></td>\n</tr>\n<tr>\n<td>COLLECTION_NUMBER</td>\n<td>DOUBLE</td>\n<td><p>Number of collection.</p></td>\n</tr>\n<tr>\n<td>CPF_NAME</td>\n<td>STRING</td>\n<td><p>Calibration parameter file name.</p></td>\n</tr>\n<tr>\n<td>DATA_TYPE</td>\n<td>STRING</td>\n<td><p>Data type identifier. (L1T or L1G)</p></td>\n</tr>\n<tr>\n<td>DATE_ACQUIRED</td>\n<td>STRING</td>\n<td><p>Image acquisition date. "YYYY-MM-DD"</p></td>\n</tr>\n<tr>\n<td>DATUM</td>\n<td>STRING</td>\n<td><p>Datum used in image creation.</p></td>\n</tr>\n<tr>\n<td>EARTH_SUN_DISTANCE</td>\n<td>DOUBLE</td>\n<td><p>Earth sun distance in astronomical units (AU).</p></td>\n</tr>\n<tr>\n<td>ELEVATION_SOURCE</td>\n<td>STRING</td>\n<td><p>Elevation model source used for standard terrain corrected (L1T) products.</p></td>\n</tr>\n<tr>\n<td>ELLIPSOID</td>\n<td>STRING</td>\n<td><p>Ellipsoid used in image creation.</p></td>\n</tr>\n<tr>\n<td>EPHEMERIS_TYPE</td>\n<td>STRING</td>\n<td><p>Ephemeris data type used to perform geometric correction. 
(Definitive or Predictive)</p></td>\n</tr>\n<tr>\n<td>FILE_DATE</td>\n<td>DOUBLE</td>\n<td><p>File date in milliseconds since epoch.</p></td>\n</tr>\n<tr>\n<td>GEOMETRIC_RMSE_MODEL</td>\n<td>DOUBLE</td>\n<td><p>Combined Root Mean Square Error (RMSE) of the geometric residuals\n(metres) in both across-track and along-track directions\nmeasured on the GCPs used in geometric precision correction.\nNot present in L1G products.</p></td>\n</tr>\n<tr>\n<td>GEOMETRIC_RMSE_MODEL_X</td>\n<td>DOUBLE</td>\n<td><p>RMSE of the X direction geometric residuals (in metres) measured\non the GCPs used in geometric precision correction. Not present in\nL1G products.</p></td>\n</tr>\n<tr>\n<td>GEOMETRIC_RMSE_MODEL_Y</td>\n<td>DOUBLE</td>\n<td><p>RMSE of the Y direction geometric residuals (in metres) measured\non the GCPs used in geometric precision correction. Not present in\nL1G products.</p></td>\n</tr>\n<tr>\n<td>GRID_CELL_SIZE_PANCHROMATIC</td>\n<td>DOUBLE</td>\n<td><p>Grid cell size used in creating the image for the panchromatic band.</p></td>\n</tr>\n<tr>\n<td>GRID_CELL_SIZE_REFLECTIVE</td>\n<td>DOUBLE</td>\n<td><p>Grid cell size used in creating the image for the reflective band.</p></td>\n</tr>\n<tr>\n<td>GRID_CELL_SIZE_THERMAL</td>\n<td>DOUBLE</td>\n<td><p>Grid cell size used in creating the image for the thermal band.</p></td>\n</tr>\n<tr>\n<td>GROUND_CONTROL_POINTS_MODEL</td>\n<td>DOUBLE</td>\n<td><p>The number of ground control points used. Not used in L1GT products.\nValues: 0 - 999 (0 is used for L1T products that have used\nMulti-scene refinement).</p></td>\n</tr>\n<tr>\n<td>GROUND_CONTROL_POINTS_VERSION</td>\n<td>DOUBLE</td>\n<td><p>The number of ground control points used in the verification of\nthe terrain corrected product. Values: -1 to 1615 (-1 = not available)</p></td>\n</tr>\n<tr>\n<td>IMAGE_QUALITY</td>\n<td>DOUBLE</td>\n<td><p>Image quality, 0 = worst, 9 = best, -1 = quality not calculated</p></td>\n</tr>\n<tr>\n<td>IMAGE_QUALITY_OLI</td>\n<td>DOUBLE</td>\n<td><p>The composite image quality for the OLI bands. Values: 9 = Best. 1 = Worst. 0 = Image quality not calculated. This parameter is only present if OLI bands are present in the product.</p></td>\n</tr>\n<tr>\n<td>IMAGE_QUALITY_TIRS</td>\n<td>DOUBLE</td>\n<td><p>The composite image quality for the TIRS bands. Values: 9 = Best. 1 = Worst. 0 = Image quality not calculated. 
This parameter is only present if OLI bands are present in the product.</p></td>\n</tr>\n<tr>\n<td>K1_CONSTANT_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Calibration K1 constant for Band 10 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>K1_CONSTANT_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Calibration K1 constant for Band 11 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>K2_CONSTANT_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Calibration K2 constant for Band 10 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>K2_CONSTANT_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Calibration K2 constant for Band 11 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>LANDSAT_PRODUCT_ID</td>\n<td>STRING</td>\n<td><p>The naming convention of each Landsat Collection 1 Level-1 image based\non acquisition parameters and processing parameters.</p>\n<p>Format: LXSS_LLLL_PPPRRR_YYYYMMDD_yyyymmdd_CC_TX</p>\n<ul>\n<li>L = Landsat</li>\n<li>X = Sensor (O = Operational Land Imager,\nT = Thermal Infrared Sensor, C = Combined OLI/TIRS)</li>\n<li>SS = Satellite (08 = Landsat 8)</li>\n<li>LLLL = Processing Correction Level (L1TP = precision and terrain,\nL1GT = systematic terrain, L1GS = systematic)</li>\n<li>PPP = WRS Path</li>\n<li>RRR = WRS Row</li>\n<li>YYYYMMDD = Acquisition Date expressed in Year, Month, Day</li>\n<li>yyyymmdd = Processing Date expressed in Year, Month, Day</li>\n<li>CC = Collection Number (01)</li>\n<li>TX = Collection Category (RT = Real Time, T1 = Tier 1, T2 = Tier 2)</li>\n</ul></td>\n</tr>\n<tr>\n<td>LANDSAT_SCENE_ID</td>\n<td>STRING</td>\n<td><p>The Pre-Collection naming convention of each image is based on acquisition\nparameters. This was the naming convention used prior to Collection 1.</p>\n<p>Format: LXSPPPRRRYYYYDDDGSIVV</p>\n<ul>\n<li>L = Landsat</li>\n<li>X = Sensor (O = Operational Land Imager, T = Thermal Infrared Sensor, C = Combined OLI/TIRS)</li>\n<li>S = Satellite (08 = Landsat 8)</li>\n<li>PPP = WRS Path</li>\n<li>RRR = WRS Row</li>\n<li>YYYY = Year of Acquisition</li>\n<li>DDD = Julian Day of Acquisition</li>\n<li>GSI = Ground Station Identifier</li>\n<li>VV = Version</li>\n</ul></td>\n</tr>\n<tr>\n<td>MAP_PROJECTION</td>\n<td>STRING</td>\n<td><p>Projection used to represent the 3-dimensional surface of the earth for the Level-1 product.</p></td>\n</tr>\n<tr>\n<td>NADIR_OFFNADIR</td>\n<td>STRING</td>\n<td><p>Nadir or Off-Nadir condition of the scene.</p></td>\n</tr>\n<tr>\n<td>ORIENTATION</td>\n<td>STRING</td>\n<td><p>Orientation used in creating the image. 
Values: NOMINAL = Nominal Path, NORTH_UP = North Up, TRUE_NORTH = True North, USER = User</p></td>\n</tr>\n<tr>\n<td>PANCHROMATIC_LINES</td>\n<td>DOUBLE</td>\n<td><p>Number of product lines for the panchromatic band.</p></td>\n</tr>\n<tr>\n<td>PANCHROMATIC_SAMPLES</td>\n<td>DOUBLE</td>\n<td><p>Number of product samples for the panchromatic bands.</p></td>\n</tr>\n<tr>\n<td>PROCESSING_SOFTWARE_VERSION</td>\n<td>STRING</td>\n<td><p>Name and version of the processing software used to generate the L1 product.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 1.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 10.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 11.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 2.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 3.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 4.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 5.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_6</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 6.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 7.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 8.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 9.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 1 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 10 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 11 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 2 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 3 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 4 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 5 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_6</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 6 DN to 
radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 7 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 8 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 9 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 1 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 2 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 3 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 4 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 5 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 7 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 8 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Minimum achievable spectral reflectance value for Band 8.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 1 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 2 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 3 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 4 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 5 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_6</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 6 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 7 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 8 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 9 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTIVE_LINES</td>\n<td>DOUBLE</td>\n<td><p>Number of product lines for the reflective bands.</p></td>\n</tr>\n<tr>\n<td>REFLECTIVE_SAMPLES</td>\n<td>DOUBLE</td>\n<td><p>Number of product samples for the reflective bands.</p></td>\n</tr>\n<tr>\n<td>REQUEST_ID</td>\n<td>STRING</td>\n<td><p>Request id, nnnyymmdd0000_0000</p>\n<ul>\n<li>nnn = node number</li>\n<li>yy = year</li>\n<li>mm 
= month</li>\n<li>dd = day</li>\n</ul></td>\n</tr>\n<tr>\n<td>RESAMPLING_OPTION</td>\n<td>STRING</td>\n<td><p>Resampling option used in creating the image.</p></td>\n</tr>\n<tr>\n<td>RLUT_FILE_NAME</td>\n<td>STRING</td>\n<td><p>The file name for the Response Linearization Lookup Table (RLUT) used to generate the product, if applicable.</p></td>\n</tr>\n<tr>\n<td>ROLL_ANGLE</td>\n<td>DOUBLE</td>\n<td><p>The amount of spacecraft roll angle at the scene center.</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_1</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 1 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_10</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 10 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_11</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 11 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_2</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 2 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_3</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 3 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_4</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 4 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_5</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 5 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_6</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 6 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_7</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 7 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_8</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 8 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_9</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 9 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SCENE_CENTER_TIME</td>\n<td>STRING</td>\n<td><p>Scene center time of acquired image. HH:MM:SS.SSSSSSSZ</p>\n<ul>\n<li>HH = Hour (00-23)</li>\n<li>MM = Minutes</li>\n<li>SS.SSSSSSS = Fractional seconds</li>\n<li>Z = "Zulu" time (same as GMT)</li>\n</ul></td>\n</tr>\n<tr>\n<td>SENSOR_ID</td>\n<td>STRING</td>\n<td><p>Sensor used to capture data.</p></td>\n</tr>\n<tr>\n<td>SPACECRAFT_ID</td>\n<td>STRING</td>\n<td><p>Spacecraft identification.</p></td>\n</tr>\n<tr>\n<td>STATION_ID</td>\n<td>STRING</td>\n<td><p>Ground Station/Organisation that received the data.</p></td>\n</tr>\n<tr>\n<td>SUN_AZIMUTH</td>\n<td>DOUBLE</td>\n<td><p>Sun azimuth angle in degrees for the image center location at the image centre acquisition time.</p></td>\n</tr>\n<tr>\n<td>SUN_ELEVATION</td>\n<td>DOUBLE</td>\n<td><p>Sun elevation angle in degrees for the image center location at the image centre acquisition time.</p></td>\n</tr>\n<tr>\n<td>TARGET_WRS_PATH</td>\n<td>DOUBLE</td>\n<td><p>Nearest WRS-2 path to the line-of-sight scene center of the image.</p></td>\n</tr>\n<tr>\n<td>TARGET_WRS_ROW</td>\n<td>DOUBLE</td>\n<td><p>Nearest WRS-2 row to the line-of-sight scene center of the image. 
Rows 880-889 and 990-999 are reserved for the polar regions where it is undefined in the WRS-2.</p></td>\n</tr>\n<tr>\n<td>THERMAL_LINES</td>\n<td>DOUBLE</td>\n<td><p>Number of product lines for the thermal band.</p></td>\n</tr>\n<tr>\n<td>THERMAL_SAMPLES</td>\n<td>DOUBLE</td>\n<td><p>Number of product samples for the thermal band.</p></td>\n</tr>\n<tr>\n<td>TIRS_SSM_MODEL</td>\n<td>STRING</td>\n<td><p>Due to an anomalous condition on the Thermal Infrared\nSensor (TIRS) Scene Select Mirror (SSM) encoder electronics,\nthis field has been added to indicate which model was used to process the data.\n(Actual, Preliminary, Final)</p></td>\n</tr>\n<tr>\n<td>TIRS_SSM_POSITION_STATUS</td>\n<td>STRING</td>\n<td><p>TIRS SSM position status.</p></td>\n</tr>\n<tr>\n<td>TIRS_STRAY_LIGHT_CORRECTION_SOURCE</td>\n<td>STRING</td>\n<td><p>TIRS stray light correction source.</p></td>\n</tr>\n<tr>\n<td>TRUNCATION_OLI</td>\n<td>STRING</td>\n<td><p>Region of OLCI truncated.</p></td>\n</tr>\n<tr>\n<td>UTM_ZONE</td>\n<td>DOUBLE</td>\n<td><p>UTM zone number used in product map projection.</p></td>\n</tr>\n<tr>\n<td>WRS_PATH</td>\n<td>DOUBLE</td>\n<td><p>The WRS orbital path number (001 - 251).</p></td>\n</tr>\n<tr>\n<td>WRS_ROW</td>\n<td>DOUBLE</td>\n<td><p>Landsat satellite WRS row (001-248).</p></td>\n</tr>\n</table>\n<style>\n table.eecat {\n border: 1px solid black;\n border-collapse: collapse;\n font-size: 13px;\n }\n table.eecat td, tr, th {\n text-align: left; vertical-align: top;\n border: 1px solid gray; padding: 3px;\n }\n td.nobreak { white-space: nowrap; }\n</style>', 'source_tags': ['landsat', 'usgs'], 'visualization_1_name': 'Near Infrared (543)', 'visualization_0_max': '30000.0', 'title': 'USGS Landsat 8 Collection 1 Tier 1 TOA Reflectance', 'visualization_0_gain': '500.0', 'system:visualization_2_max': '30000.0', 'product_tags': ['global', 'toa', 'oli_tirs', 'lc8', 'c1', 't1', 'l8', 'tier1', 'radiance'], 'visualization_1_gain': '500.0', 'provider': 'USGS/Google', 'visualization_1_min': '0.0', 'system:visualization_2_name': 'Shortwave Infrared (753)', 'visualization_0_min': '0.0', 'system:visualization_1_bands': 'B5,B4,B3', 'system:visualization_1_max': '30000.0', 'visualization_0_name': 'True Color (432)', 'date_range': [1365638400000, 1581206400000], 'visualization_2_bands': 'B7,B5,B3', 'visualization_2_name': 'Shortwave Infrared (753)', 'period': 0, 'system:visualization_2_min': '0.0', 'system:visualization_0_bands': 'B4,B3,B2', 'visualization_2_min': '0.0', 'visualization_2_gain': '500.0', 'provider_url': 'http://landsat.usgs.gov/', 'sample': 'https://mw1.google.com/ges/dd/images/LANDSAT_TOA_sample.png', 'system:visualization_1_name': 'Near Infrared (543)', 'tags': ['landsat', 'usgs', 'global', 'toa', 'oli_tirs', 'lc8', 'c1', 't1', 'l8', 'tier1', 'radiance'], 'system:visualization_0_max': '30000.0', 'visualization_2_max': '30000.0', 'system:visualization_2_bands': 'B7,B5,B3', 'system:visualization_1_min': '0.0', 'system:visualization_0_name': 'True Color (432)', 'visualization_0_bands': 'B4,B3,B2'}, 'features': [{'type': 'Image', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 
'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [15341, 15581], 'crs': 'EPSG:32610', 'crs_transform': [15, 0, 460792.5, 0, -15, 4264207.5]}, {'id': 'B9', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}], 'version': 1581772792877035, 'id': 'LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140403', 'properties': {'terra': [{'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 
0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 
'version': 1504497547369101, 'id': 'MODIS/006/MOD09GA/2014_04_01', 'properties': {'system:time_start': 1396310400000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1396396800000, 'system:asset_size': 24622391229, 'system:index': '2014_04_01'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 
'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504501593115095, 'id': 'MODIS/006/MOD09GA/2014_04_02', 'properties': {'system:time_start': 1396396800000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1396483200000, 'system:asset_size': 23573993585, 'system:index': '2014_04_02'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 
'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': 
[463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504488721459188, 'id': 'MODIS/006/MOD09GA/2014_04_03', 'properties': {'system:time_start': 1396483200000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1396569600000, 'system:asset_size': 24476076998, 'system:index': '2014_04_03'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 
10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504499525329416, 'id': 'MODIS/006/MOD09GA/2014_04_04', 'properties': {'system:time_start': 1396569600000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1396656000000, 'system:asset_size': 23697729158, 'system:index': '2014_04_04'}}], 'RADIANCE_MULT_BAND_5': 0.00611429987475276, 'RADIANCE_MULT_BAND_6': 0.0015206000534817576, 'RADIANCE_MULT_BAND_3': 0.011849000118672848, 'RADIANCE_MULT_BAND_4': 0.009991499595344067, 'RADIANCE_MULT_BAND_1': 0.012556999921798706, 'RADIANCE_MULT_BAND_2': 
0.01285799965262413, 'K2_CONSTANT_BAND_11': 1201.1441650390625, 'K2_CONSTANT_BAND_10': 1321.078857421875, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-120.8473141778081, 38.05593855929062], [-120.8399593728871, 38.079323071287384], [-120.82522434534502, 38.126298845124154], [-120.82517062317932, 38.12810935862697], [-120.8677905264658, 38.13653674526281], [-121.37735830917396, 38.23574890955089], [-122.92397603591857, 38.5218201625494], [-122.94540185152168, 38.52557313562304], [-122.94781508421401, 38.52557420469068], [-122.9538620955667, 38.50519466790785], [-123.43541566635548, 36.80572425461524], [-123.43388775775958, 36.8051169737102], [-121.36103157158686, 36.408726677230895], [-121.3601864919046, 36.410036730606365], [-121.3547960201613, 36.42754948797928], [-121.22805212441246, 36.84032220234662], [-121.10161450053057, 37.247264521511426], [-120.99043851266156, 37.60225211028372], [-120.94687053372499, 37.7406010941523], [-120.88475337745422, 37.93745112674764], [-120.8473141778081, 38.05593855929062]]}, 'REFLECTIVE_SAMPLES': 7671, 'SUN_AZIMUTH': 143.3709716796875, 'CPF_NAME': 'LC08CPF_20140401_20140630_01.01', 'DATE_ACQUIRED': '2014-04-03', 'ELLIPSOID': 'WGS84', 'google:registration_offset_x': 0, 'google:registration_offset_y': 0, 'STATION_ID': 'LGN', 'RESAMPLING_OPTION': 'CUBIC_CONVOLUTION', 'ORIENTATION': 'NORTH_UP', 'WRS_ROW': 34, 'RADIANCE_MULT_BAND_9': 0.002389600034803152, 'TARGET_WRS_ROW': 34, 'RADIANCE_MULT_BAND_7': 0.0005125200259499252, 'RADIANCE_MULT_BAND_8': 0.011308000423014164, 'IMAGE_QUALITY_TIRS': 9, 'TRUNCATION_OLI': 'UPPER', 'CLOUD_COVER': 28.1200008392334, 'GEOMETRIC_RMSE_VERIFY': 3.2160000801086426, 'COLLECTION_CATEGORY': 'T1', 'GRID_CELL_SIZE_REFLECTIVE': 30, 'CLOUD_COVER_LAND': 31.59000015258789, 'GEOMETRIC_RMSE_MODEL': 6.959000110626221, 'COLLECTION_NUMBER': 1, 'IMAGE_QUALITY_OLI': 9, 'LANDSAT_SCENE_ID': 'LC80440342014093LGN01', 'WRS_PATH': 44, 'google:registration_count': 0, 'PANCHROMATIC_SAMPLES': 15341, 'PANCHROMATIC_LINES': 15581, 'GEOMETRIC_RMSE_MODEL_Y': 4.63700008392334, 'REFLECTIVE_LINES': 7791, 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE': 'TIRS', 'GEOMETRIC_RMSE_MODEL_X': 5.188000202178955, 'system:asset_size': 1208697743, 'system:index': 'LC08_044034_20140403', 'REFLECTANCE_ADD_BAND_1': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_2': -0.10000000149011612, 'DATUM': 'WGS84', 'REFLECTANCE_ADD_BAND_3': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_4': -0.10000000149011612, 'RLUT_FILE_NAME': 'LC08RLUT_20130211_20150302_01_11.h5', 'REFLECTANCE_ADD_BAND_5': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_6': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_7': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_8': -0.10000000149011612, 'BPF_NAME_TIRS': 'LT8BPF20140403182815_20140403190449.01', 'GROUND_CONTROL_POINTS_VERSION': 4, 'DATA_TYPE': 'L1TP', 'UTM_ZONE': 10, 'LANDSAT_PRODUCT_ID': 'LC08_L1TP_044034_20140403_20170306_01_T1', 'REFLECTANCE_ADD_BAND_9': -0.10000000149011612, 'google:registration_ratio': 0, 'GRID_CELL_SIZE_PANCHROMATIC': 15, 'RADIANCE_ADD_BAND_4': -49.95764923095703, 'REFLECTANCE_MULT_BAND_7': 1.9999999494757503e-05, 'system:time_start': 1396550776290, 'RADIANCE_ADD_BAND_5': -30.571590423583984, 'REFLECTANCE_MULT_BAND_6': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_6': -7.602880001068115, 'REFLECTANCE_MULT_BAND_9': 1.9999999494757503e-05, 'PROCESSING_SOFTWARE_VERSION': 'LPGS_2.7.0', 'RADIANCE_ADD_BAND_7': -2.562580108642578, 'REFLECTANCE_MULT_BAND_8': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_1': -62.78356170654297, 'RADIANCE_ADD_BAND_2': 
-64.29113006591797, 'RADIANCE_ADD_BAND_3': -59.24372863769531, 'REFLECTANCE_MULT_BAND_1': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_8': -56.53831100463867, 'REFLECTANCE_MULT_BAND_3': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_9': -11.94806957244873, 'REFLECTANCE_MULT_BAND_2': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_5': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_4': 1.9999999494757503e-05, 'THERMAL_LINES': 7791, 'TIRS_SSM_POSITION_STATUS': 'NOMINAL', 'GRID_CELL_SIZE_THERMAL': 30, 'NADIR_OFFNADIR': 'NADIR', 'RADIANCE_ADD_BAND_11': 0.10000000149011612, 'REQUEST_ID': '0501703063782_00025', 'EARTH_SUN_DISTANCE': 0.9999619126319885, 'TIRS_SSM_MODEL': 'ACTUAL', 'FILE_DATE': 1488829355000, 'SCENE_CENTER_TIME': '18:46:16.2881730Z', 'SUN_ELEVATION': 52.549800872802734, 'BPF_NAME_OLI': 'LO8BPF20140403183209_20140403190356.01', 'RADIANCE_ADD_BAND_10': 0.10000000149011612, 'ROLL_ANGLE': -0.0010000000474974513, 'K1_CONSTANT_BAND_10': 774.8853149414062, 'SATURATION_BAND_1': 'N', 'SATURATION_BAND_2': 'N', 'SATURATION_BAND_3': 'N', 'SATURATION_BAND_4': 'N', 'SATURATION_BAND_5': 'Y', 'MAP_PROJECTION': 'UTM', 'SATURATION_BAND_6': 'Y', 'SENSOR_ID': 'OLI_TIRS', 'SATURATION_BAND_7': 'Y', 'K1_CONSTANT_BAND_11': 480.8883056640625, 'SATURATION_BAND_8': 'N', 'SATURATION_BAND_9': 'N', 'TARGET_WRS_PATH': 44, 'RADIANCE_MULT_BAND_11': 0.00033420001273043454, 'RADIANCE_MULT_BAND_10': 0.00033420001273043454, 'GROUND_CONTROL_POINTS_MODEL': 385, 'SPACECRAFT_ID': 'LANDSAT_8', 'ELEVATION_SOURCE': 'GLS2000', 'THERMAL_SAMPLES': 7671, 'GROUND_CONTROL_POINTS_VERIFY': 98}}, {'type': 'Image', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [15341, 15581], 'crs': 'EPSG:32610', 'crs_transform': [15, 0, 461392.5, 0, -15, 4264207.5]}, {'id': 'B9', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 
'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}], 'version': 1581772792877035, 'id': 'LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140419', 'properties': {'terra': [{'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 
'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504497353500683, 'id': 'MODIS/006/MOD09GA/2014_04_17', 'properties': {'system:time_start': 1397692800000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1397779200000, 'system:asset_size': 24174490963, 'system:index': '2014_04_17'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 
'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': 
{'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504503327058258, 'id': 'MODIS/006/MOD09GA/2014_04_18', 'properties': {'system:time_start': 1397779200000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1397865600000, 'system:asset_size': 23100180324, 'system:index': '2014_04_18'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 
-128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504501012619889, 'id': 'MODIS/006/MOD09GA/2014_04_19', 'properties': {'system:time_start': 1397865600000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1397952000000, 'system:asset_size': 23961163982, 'system:index': '2014_04_19'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 
'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 
'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504497190553985, 'id': 'MODIS/006/MOD09GA/2014_04_20', 'properties': {'system:time_start': 1397952000000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1398038400000, 'system:asset_size': 23219292499, 'system:index': '2014_04_20'}}], 'RADIANCE_MULT_BAND_5': 0.006059799809008837, 'RADIANCE_MULT_BAND_6': 0.0015069999499246478, 'RADIANCE_MULT_BAND_3': 0.011742999777197838, 'RADIANCE_MULT_BAND_4': 0.009902399964630604, 'RADIANCE_MULT_BAND_1': 0.012445000000298023, 'RADIANCE_MULT_BAND_2': 0.012744000181555748, 'K2_CONSTANT_BAND_11': 1201.1441650390625, 'K2_CONSTANT_BAND_10': 1321.078857421875, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-120.8431379362771, 38.052617966765766], [-120.83578218089683, 38.07600217001765], [-120.81963729012756, 38.12767081181165], [-120.82234049239531, 38.12843879727159], [-122.94102091600229, 38.525570980595205], [-122.94293147316415, 38.52557196694168], [-122.94542248503689, 38.51776440194044], [-122.9490448046238, 38.50559823329617], [-123.430644945337, 36.8057166125035], [-123.42903372114263, 36.80507606772225], [-122.57913602686314, 36.64741782585057], [-121.50262683064466, 36.438064670880586], [-121.35593613505138, 36.40870641506648], [-121.35503796940482, 36.40940804319249], [-121.22502589113704, 36.8329762319502], [-121.10052631685265, 37.23379807333198], [-120.9755883879769, 37.632705519232594], [-120.88376082672839, 37.92399755184342], [-120.85385887049235, 38.01862509330369], [-120.8431379362771, 38.052617966765766]]}, 'REFLECTIVE_SAMPLES': 7671, 'SUN_AZIMUTH': 139.7012176513672, 'CPF_NAME': 'LC08CPF_20140401_20140630_01.01', 'DATE_ACQUIRED': '2014-04-19', 'ELLIPSOID': 'WGS84', 'google:registration_offset_x': 0, 'google:registration_offset_y': 0, 'STATION_ID': 'LGN', 'RESAMPLING_OPTION': 'CUBIC_CONVOLUTION', 'ORIENTATION': 'NORTH_UP', 'WRS_ROW': 34, 'RADIANCE_MULT_BAND_9': 0.002368299989029765, 'TARGET_WRS_ROW': 34, 'RADIANCE_MULT_BAND_7': 0.000507950026076287, 'RADIANCE_MULT_BAND_8': 0.011207000352442265, 
'IMAGE_QUALITY_TIRS': 9, 'TRUNCATION_OLI': 'UPPER', 'CLOUD_COVER': 12.920000076293945, 'GEOMETRIC_RMSE_VERIFY': 3.380000114440918, 'COLLECTION_CATEGORY': 'T1', 'GRID_CELL_SIZE_REFLECTIVE': 30, 'CLOUD_COVER_LAND': 0.75, 'GEOMETRIC_RMSE_MODEL': 6.547999858856201, 'COLLECTION_NUMBER': 1, 'IMAGE_QUALITY_OLI': 9, 'LANDSAT_SCENE_ID': 'LC80440342014109LGN01', 'WRS_PATH': 44, 'google:registration_count': 0, 'PANCHROMATIC_SAMPLES': 15341, 'PANCHROMATIC_LINES': 15581, 'GEOMETRIC_RMSE_MODEL_Y': 4.453999996185303, 'REFLECTIVE_LINES': 7791, 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE': 'TIRS', 'GEOMETRIC_RMSE_MODEL_X': 4.798999786376953, 'system:asset_size': 1203236382, 'system:index': 'LC08_044034_20140419', 'REFLECTANCE_ADD_BAND_1': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_2': -0.10000000149011612, 'DATUM': 'WGS84', 'REFLECTANCE_ADD_BAND_3': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_4': -0.10000000149011612, 'RLUT_FILE_NAME': 'LC08RLUT_20130211_20150302_01_11.h5', 'REFLECTANCE_ADD_BAND_5': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_6': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_7': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_8': -0.10000000149011612, 'BPF_NAME_TIRS': 'LT8BPF20140419183133_20140419190432.01', 'GROUND_CONTROL_POINTS_VERSION': 4, 'DATA_TYPE': 'L1TP', 'UTM_ZONE': 10, 'LANDSAT_PRODUCT_ID': 'LC08_L1TP_044034_20140419_20170307_01_T1', 'REFLECTANCE_ADD_BAND_9': -0.10000000149011612, 'google:registration_ratio': 0, 'GRID_CELL_SIZE_PANCHROMATIC': 15, 'RADIANCE_ADD_BAND_4': -49.512229919433594, 'REFLECTANCE_MULT_BAND_7': 1.9999999494757503e-05, 'system:time_start': 1397933159240, 'RADIANCE_ADD_BAND_5': -30.299020767211914, 'REFLECTANCE_MULT_BAND_6': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_6': -7.53508996963501, 'REFLECTANCE_MULT_BAND_9': 1.9999999494757503e-05, 'PROCESSING_SOFTWARE_VERSION': 'LPGS_2.7.0', 'RADIANCE_ADD_BAND_7': -2.5397300720214844, 'REFLECTANCE_MULT_BAND_8': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_1': -62.22378921508789, 'RADIANCE_ADD_BAND_2': -63.717918395996094, 'RADIANCE_ADD_BAND_3': -58.715518951416016, 'REFLECTANCE_MULT_BAND_1': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_8': -56.03422164916992, 'REFLECTANCE_MULT_BAND_3': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_9': -11.841540336608887, 'REFLECTANCE_MULT_BAND_2': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_5': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_4': 1.9999999494757503e-05, 'THERMAL_LINES': 7791, 'TIRS_SSM_POSITION_STATUS': 'NOMINAL', 'GRID_CELL_SIZE_THERMAL': 30, 'NADIR_OFFNADIR': 'NADIR', 'RADIANCE_ADD_BAND_11': 0.10000000149011612, 'REQUEST_ID': '0501703064332_00025', 'EARTH_SUN_DISTANCE': 1.004449725151062, 'TIRS_SSM_MODEL': 'ACTUAL', 'FILE_DATE': 1488882124000, 'SCENE_CENTER_TIME': '18:45:59.2402600Z', 'SUN_ELEVATION': 58.094696044921875, 'BPF_NAME_OLI': 'LO8BPF20140419183527_20140419190339.01', 'RADIANCE_ADD_BAND_10': 0.10000000149011612, 'ROLL_ANGLE': -0.0010000000474974513, 'K1_CONSTANT_BAND_10': 774.8853149414062, 'SATURATION_BAND_1': 'Y', 'SATURATION_BAND_2': 'Y', 'SATURATION_BAND_3': 'Y', 'SATURATION_BAND_4': 'Y', 'SATURATION_BAND_5': 'Y', 'MAP_PROJECTION': 'UTM', 'SATURATION_BAND_6': 'Y', 'SENSOR_ID': 'OLI_TIRS', 'SATURATION_BAND_7': 'Y', 'K1_CONSTANT_BAND_11': 480.8883056640625, 'SATURATION_BAND_8': 'N', 'SATURATION_BAND_9': 'N', 'TARGET_WRS_PATH': 44, 'RADIANCE_MULT_BAND_11': 0.00033420001273043454, 'RADIANCE_MULT_BAND_10': 0.00033420001273043454, 'GROUND_CONTROL_POINTS_MODEL': 509, 'SPACECRAFT_ID': 'LANDSAT_8', 'ELEVATION_SOURCE': 'GLS2000', 'THERMAL_SAMPLES': 7671, 
'GROUND_CONTROL_POINTS_VERIFY': 169}}, {'type': 'Image', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [15341, 15581], 'crs': 'EPSG:32610', 'crs_transform': [15, 0, 461392.5, 0, -15, 4264207.5]}, {'id': 'B9', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}], 'version': 1581772792877035, 'id': 'LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140505', 'properties': {'terra': [{'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': 
{'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': 
[463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504498782758567, 'id': 'MODIS/006/MOD09GA/2014_05_03', 'properties': {'system:time_start': 1399075200000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1399161600000, 'system:asset_size': 23608680756, 'system:index': '2014_05_03'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, 
-463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504502586801816, 'id': 'MODIS/006/MOD09GA/2014_05_04', 'properties': {'system:time_start': 1399161600000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1399248000000, 'system:asset_size': 22616093760, 'system:index': '2014_05_04'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 
10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 
43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504502692153885, 'id': 'MODIS/006/MOD09GA/2014_05_05', 'properties': {'system:time_start': 1399248000000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1399334400000, 'system:asset_size': 23559225642, 'system:index': '2014_05_05'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': 
[926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504491491371582, 'id': 'MODIS/006/MOD09GA/2014_05_06', 'properties': 
{'system:time_start': 1399334400000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1399420800000, 'system:asset_size': 22777088609, 'system:index': '2014_05_06'}}], 'RADIANCE_MULT_BAND_5': 0.006009500008076429, 'RADIANCE_MULT_BAND_6': 0.0014944999711588025, 'RADIANCE_MULT_BAND_3': 0.011645999737083912, 'RADIANCE_MULT_BAND_4': 0.009820199571549892, 'RADIANCE_MULT_BAND_1': 0.012341000139713287, 'RADIANCE_MULT_BAND_2': 0.012637999840080738, 'K2_CONSTANT_BAND_11': 1201.1441650390625, 'K2_CONSTANT_BAND_10': 1321.078857421875, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-121.23130694632096, 38.20890167865334], [-122.47808618435543, 38.442905249886934], [-122.9416241270812, 38.52616106461051], [-122.94257304228283, 38.52467261055228], [-122.94438908458714, 38.518980549130696], [-122.9480116995035, 38.506814434795785], [-123.42945547884437, 36.807365583536495], [-123.42944546960602, 36.80558241062019], [-121.35650439967876, 36.40925950162913], [-121.35462928167787, 36.409233706436694], [-121.2209704109367, 36.84467814167406], [-121.09380664017438, 37.25395464587639], [-120.98744109880928, 37.59368464704816], [-120.92971288838983, 37.77715018781449], [-120.874792117132, 37.95100539896876], [-120.85505283148036, 38.013433126642376], [-120.83525753541217, 38.07639805962481], [-120.81911222539682, 38.12806656677994], [-120.8214394607643, 38.1287277611953], [-120.83942642052946, 38.13230813141151], [-121.23130694632096, 38.20890167865334]]}, 'REFLECTIVE_SAMPLES': 7671, 'SUN_AZIMUTH': 134.8988800048828, 'CPF_NAME': 'LC08CPF_20140401_20140630_01.01', 'DATE_ACQUIRED': '2014-05-05', 'ELLIPSOID': 'WGS84', 'google:registration_offset_x': 0, 'google:registration_offset_y': 0, 'STATION_ID': 'LGN', 'RESAMPLING_OPTION': 'CUBIC_CONVOLUTION', 'ORIENTATION': 'NORTH_UP', 'WRS_ROW': 34, 'RADIANCE_MULT_BAND_9': 0.0023485999554395676, 'TARGET_WRS_ROW': 34, 'RADIANCE_MULT_BAND_7': 0.0005037300288677216, 'RADIANCE_MULT_BAND_8': 0.011114000342786312, 'IMAGE_QUALITY_TIRS': 9, 'TRUNCATION_OLI': 'UPPER', 'CLOUD_COVER': 24.25, 'GEOMETRIC_RMSE_VERIFY': 3.5369999408721924, 'COLLECTION_CATEGORY': 'T1', 'GRID_CELL_SIZE_REFLECTIVE': 30, 'CLOUD_COVER_LAND': 30.09000015258789, 'GEOMETRIC_RMSE_MODEL': 7.320000171661377, 'COLLECTION_NUMBER': 1, 'IMAGE_QUALITY_OLI': 9, 'LANDSAT_SCENE_ID': 'LC80440342014125LGN01', 'WRS_PATH': 44, 'google:registration_count': 0, 'PANCHROMATIC_SAMPLES': 15341, 'PANCHROMATIC_LINES': 15581, 'GEOMETRIC_RMSE_MODEL_Y': 4.623000144958496, 'REFLECTIVE_LINES': 7791, 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE': 'TIRS', 'GEOMETRIC_RMSE_MODEL_X': 5.675000190734863, 'system:asset_size': 1263423627, 'system:index': 'LC08_044034_20140505', 'REFLECTANCE_ADD_BAND_1': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_2': -0.10000000149011612, 'DATUM': 'WGS84', 'REFLECTANCE_ADD_BAND_3': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_4': -0.10000000149011612, 'RLUT_FILE_NAME': 'LC08RLUT_20130211_20150302_01_11.h5', 'REFLECTANCE_ADD_BAND_5': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_6': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_7': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_8': -0.10000000149011612, 'BPF_NAME_TIRS': 'LT8BPF20140505181139_20140505190416.01', 'GROUND_CONTROL_POINTS_VERSION': 4, 'DATA_TYPE': 'L1TP', 'UTM_ZONE': 10, 'LANDSAT_PRODUCT_ID': 'LC08_L1TP_044034_20140505_20170307_01_T1', 'REFLECTANCE_ADD_BAND_9': -0.10000000149011612, 'google:registration_ratio': 0, 'GRID_CELL_SIZE_PANCHROMATIC': 15, 
'RADIANCE_ADD_BAND_4': -49.10100173950195, 'REFLECTANCE_MULT_BAND_7': 1.9999999494757503e-05, 'system:time_start': 1399315542790, 'RADIANCE_ADD_BAND_5': -30.047359466552734, 'REFLECTANCE_MULT_BAND_6': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_6': -7.472509860992432, 'REFLECTANCE_MULT_BAND_9': 1.9999999494757503e-05, 'PROCESSING_SOFTWARE_VERSION': 'LPGS_2.7.0', 'RADIANCE_ADD_BAND_7': -2.518630027770996, 'REFLECTANCE_MULT_BAND_8': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_1': -61.70698165893555, 'RADIANCE_ADD_BAND_2': -63.18870162963867, 'RADIANCE_ADD_BAND_3': -58.227840423583984, 'REFLECTANCE_MULT_BAND_1': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_8': -55.56882095336914, 'REFLECTANCE_MULT_BAND_3': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_9': -11.743189811706543, 'REFLECTANCE_MULT_BAND_2': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_5': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_4': 1.9999999494757503e-05, 'THERMAL_LINES': 7791, 'TIRS_SSM_POSITION_STATUS': 'NOMINAL', 'GRID_CELL_SIZE_THERMAL': 30, 'NADIR_OFFNADIR': 'NADIR', 'RADIANCE_ADD_BAND_11': 0.10000000149011612, 'REQUEST_ID': '0501703064572_00027', 'EARTH_SUN_DISTANCE': 1.0086472034454346, 'TIRS_SSM_MODEL': 'ACTUAL', 'FILE_DATE': 1488903671000, 'SCENE_CENTER_TIME': '18:45:42.7916370Z', 'SUN_ELEVATION': 62.584102630615234, 'BPF_NAME_OLI': 'LO8BPF20140505183026_20140505190323.01', 'RADIANCE_ADD_BAND_10': 0.10000000149011612, 'ROLL_ANGLE': -0.0010000000474974513, 'K1_CONSTANT_BAND_10': 774.8853149414062, 'SATURATION_BAND_1': 'Y', 'SATURATION_BAND_2': 'Y', 'SATURATION_BAND_3': 'Y', 'SATURATION_BAND_4': 'Y', 'SATURATION_BAND_5': 'Y', 'MAP_PROJECTION': 'UTM', 'SATURATION_BAND_6': 'Y', 'SENSOR_ID': 'OLI_TIRS', 'SATURATION_BAND_7': 'Y', 'K1_CONSTANT_BAND_11': 480.8883056640625, 'SATURATION_BAND_8': 'N', 'SATURATION_BAND_9': 'N', 'TARGET_WRS_PATH': 44, 'RADIANCE_MULT_BAND_11': 0.00033420001273043454, 'RADIANCE_MULT_BAND_10': 0.00033420001273043454, 'GROUND_CONTROL_POINTS_MODEL': 289, 'SPACECRAFT_ID': 'LANDSAT_8', 'ELEVATION_SOURCE': 'GLS2000', 'THERMAL_SAMPLES': 7671, 'GROUND_CONTROL_POINTS_VERIFY': 62}}, {'type': 'Image', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [15341, 15601], 'crs': 'EPSG:32610', 'crs_transform': [15, 0, 464692.5, 0, -15, 4264507.5]}, {'id': 'B9', 'data_type': {'type': 
'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}], 'version': 1581772792877035, 'id': 'LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140521', 'properties': {'terra': [{'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 
'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504500914552187, 'id': 'MODIS/006/MOD09GA/2014_05_19', 'properties': {'system:time_start': 1400457600000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400544000000, 'system:asset_size': 23343381618, 'system:index': '2014_05_19'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 
'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 
10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504503270957152, 'id': 'MODIS/006/MOD09GA/2014_05_20', 'properties': {'system:time_start': 1400544000000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400630400000, 'system:asset_size': 22344174886, 'system:index': '2014_05_20'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': 
{'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504499085062986, 'id': 'MODIS/006/MOD09GA/2014_05_21', 'properties': {'system:time_start': 1400630400000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, 
-90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400716800000, 'system:asset_size': 23263811253, 'system:index': '2014_05_21'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 
0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504500880529169, 'id': 'MODIS/006/MOD09GA/2014_05_22', 'properties': {'system:time_start': 1400716800000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400803200000, 'system:asset_size': 22511022912, 'system:index': '2014_05_22'}}], 'RADIANCE_MULT_BAND_5': 0.005967800039798021, 'RADIANCE_MULT_BAND_6': 0.0014841000083833933, 'RADIANCE_MULT_BAND_3': 0.01156499981880188, 'RADIANCE_MULT_BAND_4': 0.009752199985086918, 'RADIANCE_MULT_BAND_1': 0.012256000190973282, 'RADIANCE_MULT_BAND_2': 0.012550000101327896, 'K2_CONSTANT_BAND_11': 1201.1441650390625, 'K2_CONSTANT_BAND_10': 1321.078857421875, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-120.9221114406814, 37.68244619012667], [-120.89633560745239, 37.76390614408945], [-120.83746336237951, 37.94945600779687], [-120.82098495481172, 38.00141006480963], [-120.78179975086263, 38.125049388247994], [-120.78173908398541, 38.12705556142276], [-120.79512978776856, 38.12976361438609], [-121.73406240469221, 38.31178421248136], [-122.79279800879766, 38.50701449179694], [-122.88876971795369, 38.5241778933743], [-122.9038553878929, 38.52682543966657], [-123.3934724535376, 36.80801002145629], [-123.3934642377511, 36.80639615821769], [-123.14252377291987, 36.76031119223474], [-121.39556579260922, 36.42323515794831], [-121.3201532766815, 36.40807244280241], [-121.31926234184606, 36.40876798117092], [-121.1964526203538, 36.807060467012924], [-121.07492303846685, 37.19674766434507], [-120.94691203296651, 
37.60392056819356], [-120.9221114406814, 37.68244619012667]]}, 'REFLECTIVE_SAMPLES': 7671, 'SUN_AZIMUTH': 129.40968322753906, 'CPF_NAME': 'LC08CPF_20140401_20140630_01.01', 'DATE_ACQUIRED': '2014-05-21', 'ELLIPSOID': 'WGS84', 'google:registration_offset_x': 93.1732177734375, 'google:registration_offset_y': -389.06402587890625, 'STATION_ID': 'LGN', 'RESAMPLING_OPTION': 'CUBIC_CONVOLUTION', 'ORIENTATION': 'NORTH_UP', 'WRS_ROW': 34, 'RADIANCE_MULT_BAND_9': 0.0023324000649154186, 'TARGET_WRS_ROW': 34, 'RADIANCE_MULT_BAND_7': 0.0005002400139346719, 'RADIANCE_MULT_BAND_8': 0.011037000454962254, 'IMAGE_QUALITY_TIRS': 9, 'TRUNCATION_OLI': 'UPPER', 'CLOUD_COVER': 35.439998626708984, 'GEOMETRIC_RMSE_VERIFY': 3.2890000343322754, 'COLLECTION_CATEGORY': 'T1', 'GRID_CELL_SIZE_REFLECTIVE': 30, 'CLOUD_COVER_LAND': 14.020000457763672, 'GEOMETRIC_RMSE_MODEL': 5.670000076293945, 'COLLECTION_NUMBER': 1, 'IMAGE_QUALITY_OLI': 9, 'LANDSAT_SCENE_ID': 'LC80440342014141LGN01', 'WRS_PATH': 44, 'google:registration_count': 66, 'PANCHROMATIC_SAMPLES': 15341, 'PANCHROMATIC_LINES': 15601, 'GEOMETRIC_RMSE_MODEL_Y': 3.8980000019073486, 'REFLECTIVE_LINES': 7801, 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE': 'TIRS', 'GEOMETRIC_RMSE_MODEL_X': 4.117000102996826, 'system:asset_size': 1261385761, 'system:index': 'LC08_044034_20140521', 'REFLECTANCE_ADD_BAND_1': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_2': -0.10000000149011612, 'DATUM': 'WGS84', 'REFLECTANCE_ADD_BAND_3': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_4': -0.10000000149011612, 'RLUT_FILE_NAME': 'LC08RLUT_20130211_20150302_01_11.h5', 'REFLECTANCE_ADD_BAND_5': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_6': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_7': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_8': -0.10000000149011612, 'BPF_NAME_TIRS': 'LT8BPF20140521180614_20140521190408.02', 'GROUND_CONTROL_POINTS_VERSION': 4, 'DATA_TYPE': 'L1TP', 'UTM_ZONE': 10, 'LANDSAT_PRODUCT_ID': 'LC08_L1TP_044034_20140521_20170307_01_T1', 'REFLECTANCE_ADD_BAND_9': -0.10000000149011612, 'google:registration_ratio': 0.4370861053466797, 'GRID_CELL_SIZE_PANCHROMATIC': 15, 'RADIANCE_ADD_BAND_4': -48.76087951660156, 'REFLECTANCE_MULT_BAND_7': 1.9999999494757503e-05, 'system:time_start': 1400697934830, 'RADIANCE_ADD_BAND_5': -29.839229583740234, 'REFLECTANCE_MULT_BAND_6': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_6': -7.420740127563477, 'REFLECTANCE_MULT_BAND_9': 1.9999999494757503e-05, 'PROCESSING_SOFTWARE_VERSION': 'LPGS_2.7.0', 'RADIANCE_ADD_BAND_7': -2.501189947128296, 'REFLECTANCE_MULT_BAND_8': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_1': -61.279541015625, 'RADIANCE_ADD_BAND_2': -62.75099182128906, 'RADIANCE_ADD_BAND_3': -57.824501037597656, 'REFLECTANCE_MULT_BAND_1': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_8': -55.18389892578125, 'REFLECTANCE_MULT_BAND_3': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_9': -11.661849975585938, 'REFLECTANCE_MULT_BAND_2': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_5': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_4': 1.9999999494757503e-05, 'THERMAL_LINES': 7801, 'TIRS_SSM_POSITION_STATUS': 'NOMINAL', 'GRID_CELL_SIZE_THERMAL': 30, 'NADIR_OFFNADIR': 'NADIR', 'RADIANCE_ADD_BAND_11': 0.10000000149011612, 'REQUEST_ID': '0501703064217_00034', 'EARTH_SUN_DISTANCE': 1.0121588706970215, 'TIRS_SSM_MODEL': 'ACTUAL', 'FILE_DATE': 1488873846000, 'SCENE_CENTER_TIME': '18:45:34.8277940Z', 'SUN_ELEVATION': 65.65296173095703, 'BPF_NAME_OLI': 'LO8BPF20140521183116_20140521190315.02', 'RADIANCE_ADD_BAND_10': 0.10000000149011612, 'ROLL_ANGLE': -0.0010000000474974513, 
'K1_CONSTANT_BAND_10': 774.8853149414062, 'SATURATION_BAND_1': 'Y', 'SATURATION_BAND_2': 'Y', 'SATURATION_BAND_3': 'Y', 'SATURATION_BAND_4': 'Y', 'SATURATION_BAND_5': 'Y', 'MAP_PROJECTION': 'UTM', 'SATURATION_BAND_6': 'Y', 'SENSOR_ID': 'OLI_TIRS', 'SATURATION_BAND_7': 'Y', 'K1_CONSTANT_BAND_11': 480.8883056640625, 'SATURATION_BAND_8': 'N', 'SATURATION_BAND_9': 'N', 'TARGET_WRS_PATH': 44, 'RADIANCE_MULT_BAND_11': 0.00033420001273043454, 'RADIANCE_MULT_BAND_10': 0.00033420001273043454, 'GROUND_CONTROL_POINTS_MODEL': 404, 'SPACECRAFT_ID': 'LANDSAT_8', 'ELEVATION_SOURCE': 'GLS2000', 'THERMAL_SAMPLES': 7671, 'GROUND_CONTROL_POINTS_VERIFY': 150}}]}
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:
```
conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -y
source activate pydeck-ee
jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck
jupyter nbextension enable --sys-prefix --py pydeck
```
then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import the required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line. Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create MapNext it's time to create a map. Here we initialize the deck.gl view state and build the Earth Engine objects used in this example: a Landsat 8 `ImageCollection` joined to near-coincident MODIS imagery.
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
# Load a primary 'collection': Landsat imagery.
primary = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterDate('2014-04-01', '2014-06-01') \
.filterBounds(ee.Geometry.Point(-122.092, 37.42))
# Load a secondary 'collection': MODIS imagery.
modSecondary = ee.ImageCollection('MODIS/006/MOD09GA') \
.filterDate('2014-03-01', '2014-07-01')
# Define an allowable time difference: two days in milliseconds.
twoDaysMillis = 2 * 24 * 60 * 60 * 1000
# Create a time filter to define a match as overlapping timestamps.
timeFilter = ee.Filter.Or(
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_start',
'rightField': 'system:time_end'
}),
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_end',
'rightField': 'system:time_start'
})
)
# Define the join.
saveAllJoin = ee.Join.saveAll(**{
'matchesKey': 'terra',
'ordering': 'system:time_start',
'ascending': True
})
# Apply the join.
landsatModis = saveAllJoin.apply(primary, modSecondary, timeFilter)
# Display the result.
print('Join.saveAll:', landsatModis.getInfo())
###Output
_____no_output_____
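###Markdown
The `saveAll` join stores every MODIS granule whose timestamp falls within the two-day window as a list on the matching Landsat image, under the property named by `matchesKey` (`'terra'` above). The next cell is a minimal, optional sketch of how to work with that result: it counts the matches for the first joined scene and appends one `EarthEngineLayer` to `ee_layers` so the `pydeck.Deck` created below has something to draw. The `EarthEngineLayer(ee_object, vis_params)` call and the Landsat visualization parameters are assumptions based on the package's documented examples, not part of the original notebook.
###Code
# Inspect the first Landsat image of the joined collection.
first_joined = ee.Image(landsatModis.first())

# The matching MODIS granules were saved as a list under the 'terra' property
# (the matchesKey defined for the join).
terra_matches = ee.List(first_joined.get('terra'))
print('MODIS matches for the first scene:', terra_matches.size().getInfo())

# Hypothetical display step: wrap the scene in an EarthEngineLayer so the Deck
# below has a layer to render (constructor signature and vis_params assumed
# from the pydeck-earthengine-layers examples).
vis_params = {'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.4}
ee_layers.append(EarthEngineLayer(first_joined, vis_params))
###Output
_____no_output_____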
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The available basemap options are `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
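###Markdown
geehydro also exposes the Code Editor style helpers listed earlier (`Map.setCenter()`, `Map.centerObject()`, `Map.addLayer()`). As a small, hedged example, and assuming `setCenter` follows the Code Editor's `(longitude, latitude, zoom)` convention, you could center the map on the point used to filter the Landsat collection in the Earth Engine script below:
###Code
# Optional: center the interactive map on the area of interest used below
# (assumes geehydro's Map.setCenter mirrors the EE Code Editor signature).
Map.setCenter(-122.092, 37.42, 9)
###Output
_____no_output_____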
###Markdown
Add Earth Engine Python script
###Code
# Load a primary 'collection': Landsat imagery.
primary = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterDate('2014-04-01', '2014-06-01') \
.filterBounds(ee.Geometry.Point(-122.092, 37.42))
# Load a secondary 'collection': MODIS imagery.
modSecondary = ee.ImageCollection('MODIS/006/MOD09GA') \
.filterDate('2014-03-01', '2014-07-01')
# Define an allowable time difference: two days in milliseconds.
twoDaysMillis = 2 * 24 * 60 * 60 * 1000
# Create a time filter to define a match as overlapping timestamps.
timeFilter = ee.Filter.Or(
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_start',
'rightField': 'system:time_end'
}),
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_end',
'rightField': 'system:time_start'
})
)
# Define the join.
saveAllJoin = ee.Join.saveAll(**{
'matchesKey': 'terra',
'ordering': 'system:time_start',
'ascending': True
})
# Apply the join.
landsatModis = saveAllJoin.apply(primary, modSecondary, timeFilter)
# Display the result.
print('Join.saveAll:', landsatModis.getInfo())
###Output
Join.saveAll: {'type': 'ImageCollection', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B9', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}}], 'id': 'LANDSAT/LC08/C01/T1_TOA', 'version': 1580563126058134, 'properties': {'system:visualization_0_min': '0.0', 'type_name': 'ImageCollection', 'visualization_1_bands': 'B5,B4,B3', 'thumb': 'https://mw1.google.com/ges/dd/images/LANDSAT_TOA_thumb.png', 'visualization_1_max': '30000.0', 'description': '<p>Landsat 8 Collection 1 Tier 1\n calibrated top-of-atmosphere (TOA) reflectance.\n Calibration coefficients are extracted from the image metadata. See<a href="http://www.sciencedirect.com/science/article/pii/S0034425709000169">\n Chander et al. (2009)</a> for details on the TOA computation.</p></p>\n<p><b>Revisit Interval</b>\n<br>\n 16 days\n</p>\n<p><b>Bands</b>\n<table class="eecat">\n<tr>\n<th scope="col">Name</th>\n<th scope="col">Resolution</th>\n<th scope="col">Wavelength</th>\n<th scope="col">Description</th>\n</tr>\n<tr>\n<td>B1</td>\n<td>\n 30 meters\n</td>\n<td>0.43 - 0.45 µm</td>\n<td><p>Coastal aerosol</p></td>\n</tr>\n<tr>\n<td>B2</td>\n<td>\n 30 meters\n</td>\n<td>0.45 - 0.51 µm</td>\n<td><p>Blue</p></td>\n</tr>\n<tr>\n<td>B3</td>\n<td>\n 30 meters\n</td>\n<td>0.53 - 0.59 µm</td>\n<td><p>Green</p></td>\n</tr>\n<tr>\n<td>B4</td>\n<td>\n 30 meters\n</td>\n<td>0.64 - 0.67 µm</td>\n<td><p>Red</p></td>\n</tr>\n<tr>\n<td>B5</td>\n<td>\n 30 meters\n</td>\n<td>0.85 - 0.88 µm</td>\n<td><p>Near infrared</p></td>\n</tr>\n<tr>\n<td>B6</td>\n<td>\n 30 meters\n</td>\n<td>1.57 - 1.65 µm</td>\n<td><p>Shortwave infrared 1</p></td>\n</tr>\n<tr>\n<td>B7</td>\n<td>\n 30 meters\n</td>\n<td>2.11 - 2.29 µm</td>\n<td><p>Shortwave infrared 2</p></td>\n</tr>\n<tr>\n<td>B8</td>\n<td>\n 15 meters\n</td>\n<td>0.52 - 0.90 µm</td>\n<td><p>Band 8 Panchromatic</p></td>\n</tr>\n<tr>\n<td>B9</td>\n<td>\n 15 meters\n</td>\n<td>1.36 - 1.38 µm</td>\n<td><p>Cirrus</p></td>\n</tr>\n<tr>\n<td>B10</td>\n<td>\n 30 meters\n</td>\n<td>10.60 - 11.19 µm</td>\n<td><p>Thermal infrared 1, resampled from 100m to 30m</p></td>\n</tr>\n<tr>\n<td>B11</td>\n<td>\n 30 meters\n</td>\n<td>11.50 - 12.51 µm</td>\n<td><p>Thermal infrared 2, resampled from 100m to 30m</p></td>\n</tr>\n<tr>\n<td>BQA</td>\n<td>\n</td>\n<td></td>\n<td><p>Landsat Collection 1 QA Bitmask (<a href="https://www.usgs.gov/land-resources/nli/landsat/landsat-collection-1-level-1-quality-assessment-band">See Landsat QA page</a>)</p></td>\n</tr>\n<tr>\n<td colspan=100>\n Bitmask for BQA\n<ul>\n<li>\n Bit 0: Designated Fill\n<ul>\n<li>0: No</li>\n<li>1: Yes</li>\n</ul>\n</li>\n<li>\n Bit 1: Terrain Occlusion\n<ul>\n<li>0: No</li>\n<li>1: Yes</li>\n</ul>\n</li>\n<li>\n Bits 2-3: Radiometric Saturation\n<ul>\n<li>0: No bands contain 
saturation</li>\n<li>1: 1-2 bands contain saturation</li>\n<li>2: 3-4 bands contain saturation</li>\n<li>3: 5 or more bands contain saturation</li>\n</ul>\n</li>\n<li>\n Bit 4: Cloud\n<ul>\n<li>0: No</li>\n<li>1: Yes</li>\n</ul>\n</li>\n<li>\n Bits 5-6: Cloud Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n<li>\n Bits 7-8: Cloud Shadow Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n<li>\n Bits 9-10: Snow / Ice Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n<li>\n Bits 11-12: Cirrus Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n</ul>\n</td>\n</tr>\n</table>\n<p><b>Image Properties</b>\n<table class="eecat">\n<tr>\n<th scope="col">Name</th>\n<th scope="col">Type</th>\n<th scope="col">Description</th>\n</tr>\n<tr>\n<td>BPF_NAME_OLI</td>\n<td>STRING</td>\n<td><p>The file name for the Bias Parameter File (BPF) used to generate the product, if applicable. This only applies to products that contain OLI bands.</p></td>\n</tr>\n<tr>\n<td>BPF_NAME_TIRS</td>\n<td>STRING</td>\n<td><p>The file name for the Bias Parameter File (BPF) used to generate the product, if applicable. This only applies to products that contain TIRS bands.</p></td>\n</tr>\n<tr>\n<td>CLOUD_COVER</td>\n<td>DOUBLE</td>\n<td><p>Percentage cloud cover, -1 = not calculated.</p></td>\n</tr>\n<tr>\n<td>CLOUD_COVER_LAND</td>\n<td>DOUBLE</td>\n<td><p>Percentage cloud cover over land, -1 = not calculated.</p></td>\n</tr>\n<tr>\n<td>COLLECTION_CATEGORY</td>\n<td>STRING</td>\n<td><p>Tier of scene. (T1 or T2)</p></td>\n</tr>\n<tr>\n<td>COLLECTION_NUMBER</td>\n<td>DOUBLE</td>\n<td><p>Number of collection.</p></td>\n</tr>\n<tr>\n<td>CPF_NAME</td>\n<td>STRING</td>\n<td><p>Calibration parameter file name.</p></td>\n</tr>\n<tr>\n<td>DATA_TYPE</td>\n<td>STRING</td>\n<td><p>Data type identifier. (L1T or L1G)</p></td>\n</tr>\n<tr>\n<td>DATE_ACQUIRED</td>\n<td>STRING</td>\n<td><p>Image acquisition date. "YYYY-MM-DD"</p></td>\n</tr>\n<tr>\n<td>DATUM</td>\n<td>STRING</td>\n<td><p>Datum used in image creation.</p></td>\n</tr>\n<tr>\n<td>EARTH_SUN_DISTANCE</td>\n<td>DOUBLE</td>\n<td><p>Earth sun distance in astronomical units (AU).</p></td>\n</tr>\n<tr>\n<td>ELEVATION_SOURCE</td>\n<td>STRING</td>\n<td><p>Elevation model source used for standard terrain corrected (L1T) products.</p></td>\n</tr>\n<tr>\n<td>ELLIPSOID</td>\n<td>STRING</td>\n<td><p>Ellipsoid used in image creation.</p></td>\n</tr>\n<tr>\n<td>EPHEMERIS_TYPE</td>\n<td>STRING</td>\n<td><p>Ephemeris data type used to perform geometric correction. 
(Definitive or Predictive)</p></td>\n</tr>\n<tr>\n<td>FILE_DATE</td>\n<td>DOUBLE</td>\n<td><p>File date in milliseconds since epoch.</p></td>\n</tr>\n<tr>\n<td>GEOMETRIC_RMSE_MODEL</td>\n<td>DOUBLE</td>\n<td><p>Combined Root Mean Square Error (RMSE) of the geometric residuals\n(metres) in both across-track and along-track directions\nmeasured on the GCPs used in geometric precision correction.\nNot present in L1G products.</p></td>\n</tr>\n<tr>\n<td>GEOMETRIC_RMSE_MODEL_X</td>\n<td>DOUBLE</td>\n<td><p>RMSE of the X direction geometric residuals (in metres) measured\non the GCPs used in geometric precision correction. Not present in\nL1G products.</p></td>\n</tr>\n<tr>\n<td>GEOMETRIC_RMSE_MODEL_Y</td>\n<td>DOUBLE</td>\n<td><p>RMSE of the Y direction geometric residuals (in metres) measured\non the GCPs used in geometric precision correction. Not present in\nL1G products.</p></td>\n</tr>\n<tr>\n<td>GRID_CELL_SIZE_PANCHROMATIC</td>\n<td>DOUBLE</td>\n<td><p>Grid cell size used in creating the image for the panchromatic band.</p></td>\n</tr>\n<tr>\n<td>GRID_CELL_SIZE_REFLECTIVE</td>\n<td>DOUBLE</td>\n<td><p>Grid cell size used in creating the image for the reflective band.</p></td>\n</tr>\n<tr>\n<td>GRID_CELL_SIZE_THERMAL</td>\n<td>DOUBLE</td>\n<td><p>Grid cell size used in creating the image for the thermal band.</p></td>\n</tr>\n<tr>\n<td>GROUND_CONTROL_POINTS_MODEL</td>\n<td>DOUBLE</td>\n<td><p>The number of ground control points used. Not used in L1GT products.\nValues: 0 - 999 (0 is used for L1T products that have used\nMulti-scene refinement).</p></td>\n</tr>\n<tr>\n<td>GROUND_CONTROL_POINTS_VERSION</td>\n<td>DOUBLE</td>\n<td><p>The number of ground control points used in the verification of\nthe terrain corrected product. Values: -1 to 1615 (-1 = not available)</p></td>\n</tr>\n<tr>\n<td>IMAGE_QUALITY</td>\n<td>DOUBLE</td>\n<td><p>Image quality, 0 = worst, 9 = best, -1 = quality not calculated</p></td>\n</tr>\n<tr>\n<td>IMAGE_QUALITY_OLI</td>\n<td>DOUBLE</td>\n<td><p>The composite image quality for the OLI bands. Values: 9 = Best. 1 = Worst. 0 = Image quality not calculated. This parameter is only present if OLI bands are present in the product.</p></td>\n</tr>\n<tr>\n<td>IMAGE_QUALITY_TIRS</td>\n<td>DOUBLE</td>\n<td><p>The composite image quality for the TIRS bands. Values: 9 = Best. 1 = Worst. 0 = Image quality not calculated. 
This parameter is only present if OLI bands are present in the product.</p></td>\n</tr>\n<tr>\n<td>K1_CONSTANT_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Calibration K1 constant for Band 10 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>K1_CONSTANT_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Calibration K1 constant for Band 11 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>K2_CONSTANT_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Calibration K2 constant for Band 10 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>K2_CONSTANT_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Calibration K2 constant for Band 11 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>LANDSAT_PRODUCT_ID</td>\n<td>STRING</td>\n<td><p>The naming convention of each Landsat Collection 1 Level-1 image based\non acquisition parameters and processing parameters.</p>\n<p>Format: LXSS_LLLL_PPPRRR_YYYYMMDD_yyyymmdd_CC_TX</p>\n<ul>\n<li>L = Landsat</li>\n<li>X = Sensor (O = Operational Land Imager,\nT = Thermal Infrared Sensor, C = Combined OLI/TIRS)</li>\n<li>SS = Satellite (08 = Landsat 8)</li>\n<li>LLLL = Processing Correction Level (L1TP = precision and terrain,\nL1GT = systematic terrain, L1GS = systematic)</li>\n<li>PPP = WRS Path</li>\n<li>RRR = WRS Row</li>\n<li>YYYYMMDD = Acquisition Date expressed in Year, Month, Day</li>\n<li>yyyymmdd = Processing Date expressed in Year, Month, Day</li>\n<li>CC = Collection Number (01)</li>\n<li>TX = Collection Category (RT = Real Time, T1 = Tier 1, T2 = Tier 2)</li>\n</ul></td>\n</tr>\n<tr>\n<td>LANDSAT_SCENE_ID</td>\n<td>STRING</td>\n<td><p>The Pre-Collection naming convention of each image is based on acquisition\nparameters. This was the naming convention used prior to Collection 1.</p>\n<p>Format: LXSPPPRRRYYYYDDDGSIVV</p>\n<ul>\n<li>L = Landsat</li>\n<li>X = Sensor (O = Operational Land Imager, T = Thermal Infrared Sensor, C = Combined OLI/TIRS)</li>\n<li>S = Satellite (08 = Landsat 8)</li>\n<li>PPP = WRS Path</li>\n<li>RRR = WRS Row</li>\n<li>YYYY = Year of Acquisition</li>\n<li>DDD = Julian Day of Acquisition</li>\n<li>GSI = Ground Station Identifier</li>\n<li>VV = Version</li>\n</ul></td>\n</tr>\n<tr>\n<td>MAP_PROJECTION</td>\n<td>STRING</td>\n<td><p>Projection used to represent the 3-dimensional surface of the earth for the Level-1 product.</p></td>\n</tr>\n<tr>\n<td>NADIR_OFFNADIR</td>\n<td>STRING</td>\n<td><p>Nadir or Off-Nadir condition of the scene.</p></td>\n</tr>\n<tr>\n<td>ORIENTATION</td>\n<td>STRING</td>\n<td><p>Orientation used in creating the image. 
Values: NOMINAL = Nominal Path, NORTH_UP = North Up, TRUE_NORTH = True North, USER = User</p></td>\n</tr>\n<tr>\n<td>PANCHROMATIC_LINES</td>\n<td>DOUBLE</td>\n<td><p>Number of product lines for the panchromatic band.</p></td>\n</tr>\n<tr>\n<td>PANCHROMATIC_SAMPLES</td>\n<td>DOUBLE</td>\n<td><p>Number of product samples for the panchromatic bands.</p></td>\n</tr>\n<tr>\n<td>PROCESSING_SOFTWARE_VERSION</td>\n<td>STRING</td>\n<td><p>Name and version of the processing software used to generate the L1 product.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 1.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 10.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 11.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 2.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 3.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 4.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 5.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_6</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 6.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 7.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 8.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 9.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 1 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 10 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 11 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 2 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 3 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 4 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 5 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_6</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 6 DN to 
radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 7 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 8 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 9 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 1 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 2 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 3 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 4 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 5 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 7 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 8 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Minimum achievable spectral reflectance value for Band 8.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 1 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 2 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 3 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 4 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 5 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_6</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 6 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 7 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 8 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 9 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTIVE_LINES</td>\n<td>DOUBLE</td>\n<td><p>Number of product lines for the reflective bands.</p></td>\n</tr>\n<tr>\n<td>REFLECTIVE_SAMPLES</td>\n<td>DOUBLE</td>\n<td><p>Number of product samples for the reflective bands.</p></td>\n</tr>\n<tr>\n<td>REQUEST_ID</td>\n<td>STRING</td>\n<td><p>Request id, nnnyymmdd0000_0000</p>\n<ul>\n<li>nnn = node number</li>\n<li>yy = year</li>\n<li>mm 
= month</li>\n<li>dd = day</li>\n</ul></td>\n</tr>\n<tr>\n<td>RESAMPLING_OPTION</td>\n<td>STRING</td>\n<td><p>Resampling option used in creating the image.</p></td>\n</tr>\n<tr>\n<td>RLUT_FILE_NAME</td>\n<td>STRING</td>\n<td><p>The file name for the Response Linearization Lookup Table (RLUT) used to generate the product, if applicable.</p></td>\n</tr>\n<tr>\n<td>ROLL_ANGLE</td>\n<td>DOUBLE</td>\n<td><p>The amount of spacecraft roll angle at the scene center.</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_1</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 1 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_10</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 10 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_11</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 11 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_2</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 2 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_3</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 3 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_4</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 4 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_5</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 5 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_6</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 6 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_7</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 7 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_8</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 8 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_9</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 9 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SCENE_CENTER_TIME</td>\n<td>STRING</td>\n<td><p>Scene center time of acquired image. HH:MM:SS.SSSSSSSZ</p>\n<ul>\n<li>HH = Hour (00-23)</li>\n<li>MM = Minutes</li>\n<li>SS.SSSSSSS = Fractional seconds</li>\n<li>Z = "Zulu" time (same as GMT)</li>\n</ul></td>\n</tr>\n<tr>\n<td>SENSOR_ID</td>\n<td>STRING</td>\n<td><p>Sensor used to capture data.</p></td>\n</tr>\n<tr>\n<td>SPACECRAFT_ID</td>\n<td>STRING</td>\n<td><p>Spacecraft identification.</p></td>\n</tr>\n<tr>\n<td>STATION_ID</td>\n<td>STRING</td>\n<td><p>Ground Station/Organisation that received the data.</p></td>\n</tr>\n<tr>\n<td>SUN_AZIMUTH</td>\n<td>DOUBLE</td>\n<td><p>Sun azimuth angle in degrees for the image center location at the image centre acquisition time.</p></td>\n</tr>\n<tr>\n<td>SUN_ELEVATION</td>\n<td>DOUBLE</td>\n<td><p>Sun elevation angle in degrees for the image center location at the image centre acquisition time.</p></td>\n</tr>\n<tr>\n<td>TARGET_WRS_PATH</td>\n<td>DOUBLE</td>\n<td><p>Nearest WRS-2 path to the line-of-sight scene center of the image.</p></td>\n</tr>\n<tr>\n<td>TARGET_WRS_ROW</td>\n<td>DOUBLE</td>\n<td><p>Nearest WRS-2 row to the line-of-sight scene center of the image. 
Rows 880-889 and 990-999 are reserved for the polar regions where it is undefined in the WRS-2.</p></td>\n</tr>\n<tr>\n<td>THERMAL_LINES</td>\n<td>DOUBLE</td>\n<td><p>Number of product lines for the thermal band.</p></td>\n</tr>\n<tr>\n<td>THERMAL_SAMPLES</td>\n<td>DOUBLE</td>\n<td><p>Number of product samples for the thermal band.</p></td>\n</tr>\n<tr>\n<td>TIRS_SSM_MODEL</td>\n<td>STRING</td>\n<td><p>Due to an anomalous condition on the Thermal Infrared\nSensor (TIRS) Scene Select Mirror (SSM) encoder electronics,\nthis field has been added to indicate which model was used to process the data.\n(Actual, Preliminary, Final)</p></td>\n</tr>\n<tr>\n<td>TIRS_SSM_POSITION_STATUS</td>\n<td>STRING</td>\n<td><p>TIRS SSM position status.</p></td>\n</tr>\n<tr>\n<td>TIRS_STRAY_LIGHT_CORRECTION_SOURCE</td>\n<td>STRING</td>\n<td><p>TIRS stray light correction source.</p></td>\n</tr>\n<tr>\n<td>TRUNCATION_OLI</td>\n<td>STRING</td>\n<td><p>Region of OLCI truncated.</p></td>\n</tr>\n<tr>\n<td>UTM_ZONE</td>\n<td>DOUBLE</td>\n<td><p>UTM zone number used in product map projection.</p></td>\n</tr>\n<tr>\n<td>WRS_PATH</td>\n<td>DOUBLE</td>\n<td><p>The WRS orbital path number (001 - 251).</p></td>\n</tr>\n<tr>\n<td>WRS_ROW</td>\n<td>DOUBLE</td>\n<td><p>Landsat satellite WRS row (001-248).</p></td>\n</tr>\n</table>\n<style>\n table.eecat {\n border: 1px solid black;\n border-collapse: collapse;\n font-size: 13px;\n }\n table.eecat td, tr, th {\n text-align: left; vertical-align: top;\n border: 1px solid gray; padding: 3px;\n }\n td.nobreak { white-space: nowrap; }\n</style>', 'source_tags': ['landsat', 'usgs'], 'visualization_1_name': 'Near Infrared (543)', 'visualization_0_max': '30000.0', 'title': 'USGS Landsat 8 Collection 1 Tier 1 TOA Reflectance', 'visualization_0_gain': '500.0', 'system:visualization_2_max': '30000.0', 'product_tags': ['global', 'toa', 'tier1', 'lc8', 'c1', 'oli_tirs', 't1', 'l8', 'radiance'], 'visualization_1_gain': '500.0', 'provider': 'USGS/Google', 'visualization_1_min': '0.0', 'system:visualization_2_name': 'Shortwave Infrared (753)', 'visualization_0_min': '0.0', 'system:visualization_1_bands': 'B5,B4,B3', 'system:visualization_1_max': '30000.0', 'visualization_0_name': 'True Color (432)', 'date_range': [1365638400000, 1579910400000], 'visualization_2_bands': 'B7,B5,B3', 'visualization_2_name': 'Shortwave Infrared (753)', 'period': 0, 'system:visualization_2_min': '0.0', 'system:visualization_0_bands': 'B4,B3,B2', 'visualization_2_min': '0.0', 'visualization_2_gain': '500.0', 'provider_url': 'http://landsat.usgs.gov/', 'sample': 'https://mw1.google.com/ges/dd/images/LANDSAT_TOA_sample.png', 'system:visualization_1_name': 'Near Infrared (543)', 'tags': ['landsat', 'usgs', 'global', 'toa', 'tier1', 'lc8', 'c1', 'oli_tirs', 't1', 'l8', 'radiance'], 'system:visualization_0_max': '30000.0', 'visualization_2_max': '30000.0', 'system:visualization_2_bands': 'B7,B5,B3', 'system:visualization_1_min': '0.0', 'system:visualization_0_name': 'True Color (432)', 'visualization_0_bands': 'B4,B3,B2'}, 'features': [{'type': 'Image', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 
'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [15341, 15581], 'crs': 'EPSG:32610', 'crs_transform': [15, 0, 460792.5, 0, -15, 4264207.5]}, {'id': 'B9', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}], 'version': 1580563126058134, 'id': 'LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140403', 'properties': {'terra': [{'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 
0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 
'version': 1504497547369101, 'id': 'MODIS/006/MOD09GA/2014_04_01', 'properties': {'system:time_start': 1396310400000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1396396800000, 'system:asset_size': 24622391229, 'system:index': '2014_04_01'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 
'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504501593115095, 'id': 'MODIS/006/MOD09GA/2014_04_02', 'properties': {'system:time_start': 1396396800000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1396483200000, 'system:asset_size': 23573993585, 'system:index': '2014_04_02'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 
'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': 
[463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504488721459188, 'id': 'MODIS/006/MOD09GA/2014_04_03', 'properties': {'system:time_start': 1396483200000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1396569600000, 'system:asset_size': 24476076998, 'system:index': '2014_04_03'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 
10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504499525329416, 'id': 'MODIS/006/MOD09GA/2014_04_04', 'properties': {'system:time_start': 1396569600000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1396656000000, 'system:asset_size': 23697729158, 'system:index': '2014_04_04'}}], 'RADIANCE_MULT_BAND_5': 0.00611429987475276, 'RADIANCE_MULT_BAND_6': 0.0015206000534817576, 'RADIANCE_MULT_BAND_3': 0.011849000118672848, 'RADIANCE_MULT_BAND_4': 0.009991499595344067, 'RADIANCE_MULT_BAND_1': 0.012556999921798706, 'RADIANCE_MULT_BAND_2': 
0.01285799965262413, 'K2_CONSTANT_BAND_11': 1201.1441650390625, 'K2_CONSTANT_BAND_10': 1321.078857421875, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-120.8473141778081, 38.05593855929062], [-120.8399593728871, 38.079323071287384], [-120.82522434534502, 38.126298845124154], [-120.82517062317932, 38.12810935862697], [-120.8677905264658, 38.13653674526281], [-121.37735830917396, 38.23574890955089], [-122.92397603591857, 38.5218201625494], [-122.94540185152168, 38.52557313562304], [-122.94781508421401, 38.52557420469068], [-122.9538620955667, 38.50519466790785], [-123.43541566635548, 36.80572425461524], [-123.43388775775958, 36.8051169737102], [-121.36103157158686, 36.408726677230895], [-121.3601864919046, 36.410036730606365], [-121.3547960201613, 36.42754948797928], [-121.22805212441246, 36.84032220234662], [-121.10161450053057, 37.247264521511426], [-120.99043851266156, 37.60225211028372], [-120.94687053372499, 37.7406010941523], [-120.88475337745422, 37.93745112674764], [-120.8473141778081, 38.05593855929062]]}, 'REFLECTIVE_SAMPLES': 7671, 'SUN_AZIMUTH': 143.3709716796875, 'CPF_NAME': 'LC08CPF_20140401_20140630_01.01', 'DATE_ACQUIRED': '2014-04-03', 'ELLIPSOID': 'WGS84', 'google:registration_offset_x': 0, 'google:registration_offset_y': 0, 'STATION_ID': 'LGN', 'RESAMPLING_OPTION': 'CUBIC_CONVOLUTION', 'ORIENTATION': 'NORTH_UP', 'WRS_ROW': 34, 'RADIANCE_MULT_BAND_9': 0.002389600034803152, 'TARGET_WRS_ROW': 34, 'RADIANCE_MULT_BAND_7': 0.0005125200259499252, 'RADIANCE_MULT_BAND_8': 0.011308000423014164, 'IMAGE_QUALITY_TIRS': 9, 'TRUNCATION_OLI': 'UPPER', 'CLOUD_COVER': 28.1200008392334, 'GEOMETRIC_RMSE_VERIFY': 3.2160000801086426, 'COLLECTION_CATEGORY': 'T1', 'GRID_CELL_SIZE_REFLECTIVE': 30, 'CLOUD_COVER_LAND': 31.59000015258789, 'GEOMETRIC_RMSE_MODEL': 6.959000110626221, 'COLLECTION_NUMBER': 1, 'IMAGE_QUALITY_OLI': 9, 'LANDSAT_SCENE_ID': 'LC80440342014093LGN01', 'WRS_PATH': 44, 'google:registration_count': 0, 'PANCHROMATIC_SAMPLES': 15341, 'PANCHROMATIC_LINES': 15581, 'GEOMETRIC_RMSE_MODEL_Y': 4.63700008392334, 'REFLECTIVE_LINES': 7791, 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE': 'TIRS', 'GEOMETRIC_RMSE_MODEL_X': 5.188000202178955, 'system:asset_size': 1208697743, 'system:index': 'LC08_044034_20140403', 'REFLECTANCE_ADD_BAND_1': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_2': -0.10000000149011612, 'DATUM': 'WGS84', 'REFLECTANCE_ADD_BAND_3': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_4': -0.10000000149011612, 'RLUT_FILE_NAME': 'LC08RLUT_20130211_20150302_01_11.h5', 'REFLECTANCE_ADD_BAND_5': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_6': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_7': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_8': -0.10000000149011612, 'BPF_NAME_TIRS': 'LT8BPF20140403182815_20140403190449.01', 'GROUND_CONTROL_POINTS_VERSION': 4, 'DATA_TYPE': 'L1TP', 'UTM_ZONE': 10, 'LANDSAT_PRODUCT_ID': 'LC08_L1TP_044034_20140403_20170306_01_T1', 'REFLECTANCE_ADD_BAND_9': -0.10000000149011612, 'google:registration_ratio': 0, 'GRID_CELL_SIZE_PANCHROMATIC': 15, 'RADIANCE_ADD_BAND_4': -49.95764923095703, 'REFLECTANCE_MULT_BAND_7': 1.9999999494757503e-05, 'system:time_start': 1396550776290, 'RADIANCE_ADD_BAND_5': -30.571590423583984, 'REFLECTANCE_MULT_BAND_6': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_6': -7.602880001068115, 'REFLECTANCE_MULT_BAND_9': 1.9999999494757503e-05, 'PROCESSING_SOFTWARE_VERSION': 'LPGS_2.7.0', 'RADIANCE_ADD_BAND_7': -2.562580108642578, 'REFLECTANCE_MULT_BAND_8': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_1': -62.78356170654297, 'RADIANCE_ADD_BAND_2': 
-64.29113006591797, 'RADIANCE_ADD_BAND_3': -59.24372863769531, 'REFLECTANCE_MULT_BAND_1': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_8': -56.53831100463867, 'REFLECTANCE_MULT_BAND_3': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_9': -11.94806957244873, 'REFLECTANCE_MULT_BAND_2': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_5': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_4': 1.9999999494757503e-05, 'THERMAL_LINES': 7791, 'TIRS_SSM_POSITION_STATUS': 'NOMINAL', 'GRID_CELL_SIZE_THERMAL': 30, 'NADIR_OFFNADIR': 'NADIR', 'RADIANCE_ADD_BAND_11': 0.10000000149011612, 'REQUEST_ID': '0501703063782_00025', 'EARTH_SUN_DISTANCE': 0.9999619126319885, 'TIRS_SSM_MODEL': 'ACTUAL', 'FILE_DATE': 1488829355000, 'SCENE_CENTER_TIME': '18:46:16.2881730Z', 'SUN_ELEVATION': 52.549800872802734, 'BPF_NAME_OLI': 'LO8BPF20140403183209_20140403190356.01', 'RADIANCE_ADD_BAND_10': 0.10000000149011612, 'ROLL_ANGLE': -0.0010000000474974513, 'K1_CONSTANT_BAND_10': 774.8853149414062, 'SATURATION_BAND_1': 'N', 'SATURATION_BAND_2': 'N', 'SATURATION_BAND_3': 'N', 'SATURATION_BAND_4': 'N', 'SATURATION_BAND_5': 'Y', 'MAP_PROJECTION': 'UTM', 'SATURATION_BAND_6': 'Y', 'SENSOR_ID': 'OLI_TIRS', 'SATURATION_BAND_7': 'Y', 'K1_CONSTANT_BAND_11': 480.8883056640625, 'SATURATION_BAND_8': 'N', 'SATURATION_BAND_9': 'N', 'TARGET_WRS_PATH': 44, 'RADIANCE_MULT_BAND_11': 0.00033420001273043454, 'RADIANCE_MULT_BAND_10': 0.00033420001273043454, 'GROUND_CONTROL_POINTS_MODEL': 385, 'SPACECRAFT_ID': 'LANDSAT_8', 'ELEVATION_SOURCE': 'GLS2000', 'THERMAL_SAMPLES': 7671, 'GROUND_CONTROL_POINTS_VERIFY': 98}}, {'type': 'Image', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [15341, 15581], 'crs': 'EPSG:32610', 'crs_transform': [15, 0, 461392.5, 0, -15, 4264207.5]}, {'id': 'B9', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 
'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}], 'version': 1580563126058134, 'id': 'LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140419', 'properties': {'terra': [{'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 
'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504497353500683, 'id': 'MODIS/006/MOD09GA/2014_04_17', 'properties': {'system:time_start': 1397692800000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1397779200000, 'system:asset_size': 24174490963, 'system:index': '2014_04_17'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 
'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': 
{'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504503327058258, 'id': 'MODIS/006/MOD09GA/2014_04_18', 'properties': {'system:time_start': 1397779200000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1397865600000, 'system:asset_size': 23100180324, 'system:index': '2014_04_18'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 
-128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504501012619889, 'id': 'MODIS/006/MOD09GA/2014_04_19', 'properties': {'system:time_start': 1397865600000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1397952000000, 'system:asset_size': 23961163982, 'system:index': '2014_04_19'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 
'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 
'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504497190553985, 'id': 'MODIS/006/MOD09GA/2014_04_20', 'properties': {'system:time_start': 1397952000000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1398038400000, 'system:asset_size': 23219292499, 'system:index': '2014_04_20'}}], 'RADIANCE_MULT_BAND_5': 0.006059799809008837, 'RADIANCE_MULT_BAND_6': 0.0015069999499246478, 'RADIANCE_MULT_BAND_3': 0.011742999777197838, 'RADIANCE_MULT_BAND_4': 0.009902399964630604, 'RADIANCE_MULT_BAND_1': 0.012445000000298023, 'RADIANCE_MULT_BAND_2': 0.012744000181555748, 'K2_CONSTANT_BAND_11': 1201.1441650390625, 'K2_CONSTANT_BAND_10': 1321.078857421875, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-120.8431379362771, 38.052617966765766], [-120.83578218089683, 38.07600217001765], [-120.81963729012756, 38.12767081181165], [-120.82234049239531, 38.12843879727159], [-122.94102091600229, 38.525570980595205], [-122.94293147316415, 38.52557196694168], [-122.94542248503689, 38.51776440194044], [-122.9490448046238, 38.50559823329617], [-123.430644945337, 36.8057166125035], [-123.42903372114263, 36.80507606772225], [-122.57913602686314, 36.64741782585057], [-121.50262683064466, 36.438064670880586], [-121.35593613505138, 36.40870641506648], [-121.35503796940482, 36.40940804319249], [-121.22502589113704, 36.8329762319502], [-121.10052631685265, 37.23379807333198], [-120.9755883879769, 37.632705519232594], [-120.88376082672839, 37.92399755184342], [-120.85385887049235, 38.01862509330369], [-120.8431379362771, 38.052617966765766]]}, 'REFLECTIVE_SAMPLES': 7671, 'SUN_AZIMUTH': 139.7012176513672, 'CPF_NAME': 'LC08CPF_20140401_20140630_01.01', 'DATE_ACQUIRED': '2014-04-19', 'ELLIPSOID': 'WGS84', 'google:registration_offset_x': 0, 'google:registration_offset_y': 0, 'STATION_ID': 'LGN', 'RESAMPLING_OPTION': 'CUBIC_CONVOLUTION', 'ORIENTATION': 'NORTH_UP', 'WRS_ROW': 34, 'RADIANCE_MULT_BAND_9': 0.002368299989029765, 'TARGET_WRS_ROW': 34, 'RADIANCE_MULT_BAND_7': 0.000507950026076287, 'RADIANCE_MULT_BAND_8': 0.011207000352442265, 
'IMAGE_QUALITY_TIRS': 9, 'TRUNCATION_OLI': 'UPPER', 'CLOUD_COVER': 12.920000076293945, 'GEOMETRIC_RMSE_VERIFY': 3.380000114440918, 'COLLECTION_CATEGORY': 'T1', 'GRID_CELL_SIZE_REFLECTIVE': 30, 'CLOUD_COVER_LAND': 0.75, 'GEOMETRIC_RMSE_MODEL': 6.547999858856201, 'COLLECTION_NUMBER': 1, 'IMAGE_QUALITY_OLI': 9, 'LANDSAT_SCENE_ID': 'LC80440342014109LGN01', 'WRS_PATH': 44, 'google:registration_count': 0, 'PANCHROMATIC_SAMPLES': 15341, 'PANCHROMATIC_LINES': 15581, 'GEOMETRIC_RMSE_MODEL_Y': 4.453999996185303, 'REFLECTIVE_LINES': 7791, 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE': 'TIRS', 'GEOMETRIC_RMSE_MODEL_X': 4.798999786376953, 'system:asset_size': 1203236382, 'system:index': 'LC08_044034_20140419', 'REFLECTANCE_ADD_BAND_1': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_2': -0.10000000149011612, 'DATUM': 'WGS84', 'REFLECTANCE_ADD_BAND_3': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_4': -0.10000000149011612, 'RLUT_FILE_NAME': 'LC08RLUT_20130211_20150302_01_11.h5', 'REFLECTANCE_ADD_BAND_5': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_6': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_7': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_8': -0.10000000149011612, 'BPF_NAME_TIRS': 'LT8BPF20140419183133_20140419190432.01', 'GROUND_CONTROL_POINTS_VERSION': 4, 'DATA_TYPE': 'L1TP', 'UTM_ZONE': 10, 'LANDSAT_PRODUCT_ID': 'LC08_L1TP_044034_20140419_20170307_01_T1', 'REFLECTANCE_ADD_BAND_9': -0.10000000149011612, 'google:registration_ratio': 0, 'GRID_CELL_SIZE_PANCHROMATIC': 15, 'RADIANCE_ADD_BAND_4': -49.512229919433594, 'REFLECTANCE_MULT_BAND_7': 1.9999999494757503e-05, 'system:time_start': 1397933159240, 'RADIANCE_ADD_BAND_5': -30.299020767211914, 'REFLECTANCE_MULT_BAND_6': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_6': -7.53508996963501, 'REFLECTANCE_MULT_BAND_9': 1.9999999494757503e-05, 'PROCESSING_SOFTWARE_VERSION': 'LPGS_2.7.0', 'RADIANCE_ADD_BAND_7': -2.5397300720214844, 'REFLECTANCE_MULT_BAND_8': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_1': -62.22378921508789, 'RADIANCE_ADD_BAND_2': -63.717918395996094, 'RADIANCE_ADD_BAND_3': -58.715518951416016, 'REFLECTANCE_MULT_BAND_1': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_8': -56.03422164916992, 'REFLECTANCE_MULT_BAND_3': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_9': -11.841540336608887, 'REFLECTANCE_MULT_BAND_2': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_5': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_4': 1.9999999494757503e-05, 'THERMAL_LINES': 7791, 'TIRS_SSM_POSITION_STATUS': 'NOMINAL', 'GRID_CELL_SIZE_THERMAL': 30, 'NADIR_OFFNADIR': 'NADIR', 'RADIANCE_ADD_BAND_11': 0.10000000149011612, 'REQUEST_ID': '0501703064332_00025', 'EARTH_SUN_DISTANCE': 1.004449725151062, 'TIRS_SSM_MODEL': 'ACTUAL', 'FILE_DATE': 1488882124000, 'SCENE_CENTER_TIME': '18:45:59.2402600Z', 'SUN_ELEVATION': 58.094696044921875, 'BPF_NAME_OLI': 'LO8BPF20140419183527_20140419190339.01', 'RADIANCE_ADD_BAND_10': 0.10000000149011612, 'ROLL_ANGLE': -0.0010000000474974513, 'K1_CONSTANT_BAND_10': 774.8853149414062, 'SATURATION_BAND_1': 'Y', 'SATURATION_BAND_2': 'Y', 'SATURATION_BAND_3': 'Y', 'SATURATION_BAND_4': 'Y', 'SATURATION_BAND_5': 'Y', 'MAP_PROJECTION': 'UTM', 'SATURATION_BAND_6': 'Y', 'SENSOR_ID': 'OLI_TIRS', 'SATURATION_BAND_7': 'Y', 'K1_CONSTANT_BAND_11': 480.8883056640625, 'SATURATION_BAND_8': 'N', 'SATURATION_BAND_9': 'N', 'TARGET_WRS_PATH': 44, 'RADIANCE_MULT_BAND_11': 0.00033420001273043454, 'RADIANCE_MULT_BAND_10': 0.00033420001273043454, 'GROUND_CONTROL_POINTS_MODEL': 509, 'SPACECRAFT_ID': 'LANDSAT_8', 'ELEVATION_SOURCE': 'GLS2000', 'THERMAL_SAMPLES': 7671, 
'GROUND_CONTROL_POINTS_VERIFY': 169}}, {'type': 'Image', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [15341, 15581], 'crs': 'EPSG:32610', 'crs_transform': [15, 0, 461392.5, 0, -15, 4264207.5]}, {'id': 'B9', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 461385, 0, -30, 4264215]}], 'version': 1580563126058134, 'id': 'LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140505', 'properties': {'terra': [{'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': 
{'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': 
[463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504498782758567, 'id': 'MODIS/006/MOD09GA/2014_05_03', 'properties': {'system:time_start': 1399075200000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1399161600000, 'system:asset_size': 23608680756, 'system:index': '2014_05_03'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, 
-463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504502586801816, 'id': 'MODIS/006/MOD09GA/2014_05_04', 'properties': {'system:time_start': 1399161600000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1399248000000, 'system:asset_size': 22616093760, 'system:index': '2014_05_04'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 
10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 
43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504502692153885, 'id': 'MODIS/006/MOD09GA/2014_05_05', 'properties': {'system:time_start': 1399248000000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1399334400000, 'system:asset_size': 23559225642, 'system:index': '2014_05_05'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': 
[926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504491491371582, 'id': 'MODIS/006/MOD09GA/2014_05_06', 'properties': 
{'system:time_start': 1399334400000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1399420800000, 'system:asset_size': 22777088609, 'system:index': '2014_05_06'}}], 'RADIANCE_MULT_BAND_5': 0.006009500008076429, 'RADIANCE_MULT_BAND_6': 0.0014944999711588025, 'RADIANCE_MULT_BAND_3': 0.011645999737083912, 'RADIANCE_MULT_BAND_4': 0.009820199571549892, 'RADIANCE_MULT_BAND_1': 0.012341000139713287, 'RADIANCE_MULT_BAND_2': 0.012637999840080738, 'K2_CONSTANT_BAND_11': 1201.1441650390625, 'K2_CONSTANT_BAND_10': 1321.078857421875, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-121.23130694632096, 38.20890167865334], [-122.47808618435543, 38.442905249886934], [-122.9416241270812, 38.52616106461051], [-122.94257304228283, 38.52467261055228], [-122.94438908458714, 38.518980549130696], [-122.9480116995035, 38.506814434795785], [-123.42945547884437, 36.807365583536495], [-123.42944546960602, 36.80558241062019], [-121.35650439967876, 36.40925950162913], [-121.35462928167787, 36.409233706436694], [-121.2209704109367, 36.84467814167406], [-121.09380664017438, 37.25395464587639], [-120.98744109880928, 37.59368464704816], [-120.92971288838983, 37.77715018781449], [-120.874792117132, 37.95100539896876], [-120.85505283148036, 38.013433126642376], [-120.83525753541217, 38.07639805962481], [-120.81911222539682, 38.12806656677994], [-120.8214394607643, 38.1287277611953], [-120.83942642052946, 38.13230813141151], [-121.23130694632096, 38.20890167865334]]}, 'REFLECTIVE_SAMPLES': 7671, 'SUN_AZIMUTH': 134.8988800048828, 'CPF_NAME': 'LC08CPF_20140401_20140630_01.01', 'DATE_ACQUIRED': '2014-05-05', 'ELLIPSOID': 'WGS84', 'google:registration_offset_x': 0, 'google:registration_offset_y': 0, 'STATION_ID': 'LGN', 'RESAMPLING_OPTION': 'CUBIC_CONVOLUTION', 'ORIENTATION': 'NORTH_UP', 'WRS_ROW': 34, 'RADIANCE_MULT_BAND_9': 0.0023485999554395676, 'TARGET_WRS_ROW': 34, 'RADIANCE_MULT_BAND_7': 0.0005037300288677216, 'RADIANCE_MULT_BAND_8': 0.011114000342786312, 'IMAGE_QUALITY_TIRS': 9, 'TRUNCATION_OLI': 'UPPER', 'CLOUD_COVER': 24.25, 'GEOMETRIC_RMSE_VERIFY': 3.5369999408721924, 'COLLECTION_CATEGORY': 'T1', 'GRID_CELL_SIZE_REFLECTIVE': 30, 'CLOUD_COVER_LAND': 30.09000015258789, 'GEOMETRIC_RMSE_MODEL': 7.320000171661377, 'COLLECTION_NUMBER': 1, 'IMAGE_QUALITY_OLI': 9, 'LANDSAT_SCENE_ID': 'LC80440342014125LGN01', 'WRS_PATH': 44, 'google:registration_count': 0, 'PANCHROMATIC_SAMPLES': 15341, 'PANCHROMATIC_LINES': 15581, 'GEOMETRIC_RMSE_MODEL_Y': 4.623000144958496, 'REFLECTIVE_LINES': 7791, 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE': 'TIRS', 'GEOMETRIC_RMSE_MODEL_X': 5.675000190734863, 'system:asset_size': 1263423627, 'system:index': 'LC08_044034_20140505', 'REFLECTANCE_ADD_BAND_1': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_2': -0.10000000149011612, 'DATUM': 'WGS84', 'REFLECTANCE_ADD_BAND_3': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_4': -0.10000000149011612, 'RLUT_FILE_NAME': 'LC08RLUT_20130211_20150302_01_11.h5', 'REFLECTANCE_ADD_BAND_5': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_6': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_7': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_8': -0.10000000149011612, 'BPF_NAME_TIRS': 'LT8BPF20140505181139_20140505190416.01', 'GROUND_CONTROL_POINTS_VERSION': 4, 'DATA_TYPE': 'L1TP', 'UTM_ZONE': 10, 'LANDSAT_PRODUCT_ID': 'LC08_L1TP_044034_20140505_20170307_01_T1', 'REFLECTANCE_ADD_BAND_9': -0.10000000149011612, 'google:registration_ratio': 0, 'GRID_CELL_SIZE_PANCHROMATIC': 15, 
'RADIANCE_ADD_BAND_4': -49.10100173950195, 'REFLECTANCE_MULT_BAND_7': 1.9999999494757503e-05, 'system:time_start': 1399315542790, 'RADIANCE_ADD_BAND_5': -30.047359466552734, 'REFLECTANCE_MULT_BAND_6': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_6': -7.472509860992432, 'REFLECTANCE_MULT_BAND_9': 1.9999999494757503e-05, 'PROCESSING_SOFTWARE_VERSION': 'LPGS_2.7.0', 'RADIANCE_ADD_BAND_7': -2.518630027770996, 'REFLECTANCE_MULT_BAND_8': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_1': -61.70698165893555, 'RADIANCE_ADD_BAND_2': -63.18870162963867, 'RADIANCE_ADD_BAND_3': -58.227840423583984, 'REFLECTANCE_MULT_BAND_1': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_8': -55.56882095336914, 'REFLECTANCE_MULT_BAND_3': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_9': -11.743189811706543, 'REFLECTANCE_MULT_BAND_2': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_5': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_4': 1.9999999494757503e-05, 'THERMAL_LINES': 7791, 'TIRS_SSM_POSITION_STATUS': 'NOMINAL', 'GRID_CELL_SIZE_THERMAL': 30, 'NADIR_OFFNADIR': 'NADIR', 'RADIANCE_ADD_BAND_11': 0.10000000149011612, 'REQUEST_ID': '0501703064572_00027', 'EARTH_SUN_DISTANCE': 1.0086472034454346, 'TIRS_SSM_MODEL': 'ACTUAL', 'FILE_DATE': 1488903671000, 'SCENE_CENTER_TIME': '18:45:42.7916370Z', 'SUN_ELEVATION': 62.584102630615234, 'BPF_NAME_OLI': 'LO8BPF20140505183026_20140505190323.01', 'RADIANCE_ADD_BAND_10': 0.10000000149011612, 'ROLL_ANGLE': -0.0010000000474974513, 'K1_CONSTANT_BAND_10': 774.8853149414062, 'SATURATION_BAND_1': 'Y', 'SATURATION_BAND_2': 'Y', 'SATURATION_BAND_3': 'Y', 'SATURATION_BAND_4': 'Y', 'SATURATION_BAND_5': 'Y', 'MAP_PROJECTION': 'UTM', 'SATURATION_BAND_6': 'Y', 'SENSOR_ID': 'OLI_TIRS', 'SATURATION_BAND_7': 'Y', 'K1_CONSTANT_BAND_11': 480.8883056640625, 'SATURATION_BAND_8': 'N', 'SATURATION_BAND_9': 'N', 'TARGET_WRS_PATH': 44, 'RADIANCE_MULT_BAND_11': 0.00033420001273043454, 'RADIANCE_MULT_BAND_10': 0.00033420001273043454, 'GROUND_CONTROL_POINTS_MODEL': 289, 'SPACECRAFT_ID': 'LANDSAT_8', 'ELEVATION_SOURCE': 'GLS2000', 'THERMAL_SAMPLES': 7671, 'GROUND_CONTROL_POINTS_VERIFY': 62}}, {'type': 'Image', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [15341, 15601], 'crs': 'EPSG:32610', 'crs_transform': [15, 0, 464692.5, 0, -15, 4264507.5]}, {'id': 'B9', 'data_type': {'type': 
'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}], 'version': 1580563126058134, 'id': 'LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140521', 'properties': {'terra': [{'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 
'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504500914552187, 'id': 'MODIS/006/MOD09GA/2014_05_19', 'properties': {'system:time_start': 1400457600000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400544000000, 'system:asset_size': 23343381618, 'system:index': '2014_05_19'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 
'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 
10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504503270957152, 'id': 'MODIS/006/MOD09GA/2014_05_20', 'properties': {'system:time_start': 1400544000000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400630400000, 'system:asset_size': 22344174886, 'system:index': '2014_05_20'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': 
{'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504499085062986, 'id': 'MODIS/006/MOD09GA/2014_05_21', 'properties': {'system:time_start': 1400630400000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, 
-90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400716800000, 'system:asset_size': 23263811253, 'system:index': '2014_05_21'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 
0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504500880529169, 'id': 'MODIS/006/MOD09GA/2014_05_22', 'properties': {'system:time_start': 1400716800000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400803200000, 'system:asset_size': 22511022912, 'system:index': '2014_05_22'}}], 'RADIANCE_MULT_BAND_5': 0.005967800039798021, 'RADIANCE_MULT_BAND_6': 0.0014841000083833933, 'RADIANCE_MULT_BAND_3': 0.01156499981880188, 'RADIANCE_MULT_BAND_4': 0.009752199985086918, 'RADIANCE_MULT_BAND_1': 0.012256000190973282, 'RADIANCE_MULT_BAND_2': 0.012550000101327896, 'K2_CONSTANT_BAND_11': 1201.1441650390625, 'K2_CONSTANT_BAND_10': 1321.078857421875, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-120.9221114406814, 37.68244619012667], [-120.89633560745239, 37.76390614408945], [-120.83746336237951, 37.94945600779687], [-120.82098495481172, 38.00141006480963], [-120.78179975086263, 38.125049388247994], [-120.78173908398541, 38.12705556142276], [-120.79512978776856, 38.12976361438609], [-121.73406240469221, 38.31178421248136], [-122.79279800879766, 38.50701449179694], [-122.88876971795369, 38.5241778933743], [-122.9038553878929, 38.52682543966657], [-123.3934724535376, 36.80801002145629], [-123.3934642377511, 36.80639615821769], [-123.14252377291987, 36.76031119223474], [-121.39556579260922, 36.42323515794831], [-121.3201532766815, 36.40807244280241], [-121.31926234184606, 36.40876798117092], [-121.1964526203538, 36.807060467012924], [-121.07492303846685, 37.19674766434507], [-120.94691203296651, 
37.60392056819356], [-120.9221114406814, 37.68244619012667]]}, 'REFLECTIVE_SAMPLES': 7671, 'SUN_AZIMUTH': 129.40968322753906, 'CPF_NAME': 'LC08CPF_20140401_20140630_01.01', 'DATE_ACQUIRED': '2014-05-21', 'ELLIPSOID': 'WGS84', 'google:registration_offset_x': 93.1732177734375, 'google:registration_offset_y': -389.06402587890625, 'STATION_ID': 'LGN', 'RESAMPLING_OPTION': 'CUBIC_CONVOLUTION', 'ORIENTATION': 'NORTH_UP', 'WRS_ROW': 34, 'RADIANCE_MULT_BAND_9': 0.0023324000649154186, 'TARGET_WRS_ROW': 34, 'RADIANCE_MULT_BAND_7': 0.0005002400139346719, 'RADIANCE_MULT_BAND_8': 0.011037000454962254, 'IMAGE_QUALITY_TIRS': 9, 'TRUNCATION_OLI': 'UPPER', 'CLOUD_COVER': 35.439998626708984, 'GEOMETRIC_RMSE_VERIFY': 3.2890000343322754, 'COLLECTION_CATEGORY': 'T1', 'GRID_CELL_SIZE_REFLECTIVE': 30, 'CLOUD_COVER_LAND': 14.020000457763672, 'GEOMETRIC_RMSE_MODEL': 5.670000076293945, 'COLLECTION_NUMBER': 1, 'IMAGE_QUALITY_OLI': 9, 'LANDSAT_SCENE_ID': 'LC80440342014141LGN01', 'WRS_PATH': 44, 'google:registration_count': 66, 'PANCHROMATIC_SAMPLES': 15341, 'PANCHROMATIC_LINES': 15601, 'GEOMETRIC_RMSE_MODEL_Y': 3.8980000019073486, 'REFLECTIVE_LINES': 7801, 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE': 'TIRS', 'GEOMETRIC_RMSE_MODEL_X': 4.117000102996826, 'system:asset_size': 1261385761, 'system:index': 'LC08_044034_20140521', 'REFLECTANCE_ADD_BAND_1': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_2': -0.10000000149011612, 'DATUM': 'WGS84', 'REFLECTANCE_ADD_BAND_3': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_4': -0.10000000149011612, 'RLUT_FILE_NAME': 'LC08RLUT_20130211_20150302_01_11.h5', 'REFLECTANCE_ADD_BAND_5': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_6': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_7': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_8': -0.10000000149011612, 'BPF_NAME_TIRS': 'LT8BPF20140521180614_20140521190408.02', 'GROUND_CONTROL_POINTS_VERSION': 4, 'DATA_TYPE': 'L1TP', 'UTM_ZONE': 10, 'LANDSAT_PRODUCT_ID': 'LC08_L1TP_044034_20140521_20170307_01_T1', 'REFLECTANCE_ADD_BAND_9': -0.10000000149011612, 'google:registration_ratio': 0.4370861053466797, 'GRID_CELL_SIZE_PANCHROMATIC': 15, 'RADIANCE_ADD_BAND_4': -48.76087951660156, 'REFLECTANCE_MULT_BAND_7': 1.9999999494757503e-05, 'system:time_start': 1400697934830, 'RADIANCE_ADD_BAND_5': -29.839229583740234, 'REFLECTANCE_MULT_BAND_6': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_6': -7.420740127563477, 'REFLECTANCE_MULT_BAND_9': 1.9999999494757503e-05, 'PROCESSING_SOFTWARE_VERSION': 'LPGS_2.7.0', 'RADIANCE_ADD_BAND_7': -2.501189947128296, 'REFLECTANCE_MULT_BAND_8': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_1': -61.279541015625, 'RADIANCE_ADD_BAND_2': -62.75099182128906, 'RADIANCE_ADD_BAND_3': -57.824501037597656, 'REFLECTANCE_MULT_BAND_1': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_8': -55.18389892578125, 'REFLECTANCE_MULT_BAND_3': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_9': -11.661849975585938, 'REFLECTANCE_MULT_BAND_2': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_5': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_4': 1.9999999494757503e-05, 'THERMAL_LINES': 7801, 'TIRS_SSM_POSITION_STATUS': 'NOMINAL', 'GRID_CELL_SIZE_THERMAL': 30, 'NADIR_OFFNADIR': 'NADIR', 'RADIANCE_ADD_BAND_11': 0.10000000149011612, 'REQUEST_ID': '0501703064217_00034', 'EARTH_SUN_DISTANCE': 1.0121588706970215, 'TIRS_SSM_MODEL': 'ACTUAL', 'FILE_DATE': 1488873846000, 'SCENE_CENTER_TIME': '18:45:34.8277940Z', 'SUN_ELEVATION': 65.65296173095703, 'BPF_NAME_OLI': 'LO8BPF20140521183116_20140521190315.02', 'RADIANCE_ADD_BAND_10': 0.10000000149011612, 'ROLL_ANGLE': -0.0010000000474974513, 
'K1_CONSTANT_BAND_10': 774.8853149414062, 'SATURATION_BAND_1': 'Y', 'SATURATION_BAND_2': 'Y', 'SATURATION_BAND_3': 'Y', 'SATURATION_BAND_4': 'Y', 'SATURATION_BAND_5': 'Y', 'MAP_PROJECTION': 'UTM', 'SATURATION_BAND_6': 'Y', 'SENSOR_ID': 'OLI_TIRS', 'SATURATION_BAND_7': 'Y', 'K1_CONSTANT_BAND_11': 480.8883056640625, 'SATURATION_BAND_8': 'N', 'SATURATION_BAND_9': 'N', 'TARGET_WRS_PATH': 44, 'RADIANCE_MULT_BAND_11': 0.00033420001273043454, 'RADIANCE_MULT_BAND_10': 0.00033420001273043454, 'GROUND_CONTROL_POINTS_MODEL': 404, 'SPACECRAFT_ID': 'LANDSAT_8', 'ELEVATION_SOURCE': 'GLS2000', 'THERMAL_SAMPLES': 7671, 'GROUND_CONTROL_POINTS_VERIFY': 150}}]}
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. **Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend, enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
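# The note above mentions Map.add_basemap(); a minimal sketch (assuming 'HYBRID'
# is one of the basemap keys defined in geemap's basemaps module):
# Map.add_basemap('HYBRID')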
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a primary 'collection': Landsat imagery.
primary = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterDate('2014-04-01', '2014-06-01') \
.filterBounds(ee.Geometry.Point(-122.092, 37.42))
# Load a secondary 'collection': MODIS imagery.
modSecondary = ee.ImageCollection('MODIS/006/MOD09GA') \
.filterDate('2014-03-01', '2014-07-01')
# Define an allowable time difference: two days in milliseconds.
twoDaysMillis = 2 * 24 * 60 * 60 * 1000
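# i.e. 172,800,000 ms; system:time_start / system:time_end are milliseconds since the Unix epoch.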
# Create a time filter to define a match as overlapping timestamps.
timeFilter = ee.Filter.Or(
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_start',
'rightField': 'system:time_end'
}),
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_end',
'rightField': 'system:time_start'
})
)
# Define the join.
saveAllJoin = ee.Join.saveAll(**{
'matchesKey': 'terra',
'ordering': 'system:time_start',
'ascending': True
})
# Apply the join.
landsatModis = saveAllJoin.apply(primary, modSecondary, timeFilter)
# Display the result.
print('Join.saveAll:', landsatModis.getInfo())
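# Optional sketch: the matched MODIS images are stored as a list under the 'terra'
# property (the matchesKey defined above). Assuming the join result is non-empty:
# first = ee.Image(landsatModis.first())
# print('MODIS matches for the first scene:', ee.List(first.get('terra')).size().getInfo())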
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. **Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend, enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a primary 'collection': Landsat imagery.
primary = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterDate('2014-04-01', '2014-06-01') \
.filterBounds(ee.Geometry.Point(-122.092, 37.42))
# Load a secondary 'collection': MODIS imagery.
modSecondary = ee.ImageCollection('MODIS/006/MOD09GA') \
.filterDate('2014-03-01', '2014-07-01')
# Define an allowable time difference: two days in milliseconds.
twoDaysMillis = 2 * 24 * 60 * 60 * 1000
# Create a time filter to define a match as overlapping timestamps.
timeFilter = ee.Filter.Or(
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_start',
'rightField': 'system:time_end'
}),
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_end',
'rightField': 'system:time_start'
})
)
# Define the join.
saveAllJoin = ee.Join.saveAll(**{
'matchesKey': 'terra',
'ordering': 'system:time_start',
'ascending': True
})
# Apply the join.
landsatModis = saveAllJoin.apply(primary, modSecondary, timeFilter)
# Display the result.
print('Join.saveAll:', landsatModis.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. **Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend, enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a primary 'collection': Landsat imagery.
primary = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterDate('2014-04-01', '2014-06-01') \
.filterBounds(ee.Geometry.Point(-122.092, 37.42))
# Load a secondary 'collection': MODIS imagery.
modSecondary = ee.ImageCollection('MODIS/006/MOD09GA') \
.filterDate('2014-03-01', '2014-07-01')
# Define an allowable time difference: two days in milliseconds.
twoDaysMillis = 2 * 24 * 60 * 60 * 1000
# Create a time filter to define a match as overlapping timestamps.
timeFilter = ee.Filter.Or(
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_start',
'rightField': 'system:time_end'
}),
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_end',
'rightField': 'system:time_start'
})
)
# Define the join.
saveAllJoin = ee.Join.saveAll(**{
'matchesKey': 'terra',
'ordering': 'system:time_start',
'ascending': True
})
# Apply the join.
landsatModis = saveAllJoin.apply(primary, modSecondary, timeFilter)
# Display the result.
print('Join.saveAll:', landsatModis.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
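# The note above lists other basemap keys accepted by setOptions(); a sketch:
# Map.setOptions('TERRAIN')
# geehydro also adds Map.setCenter(); e.g. (hypothetical lon/lat/zoom values):
# Map.setCenter(-122.09, 37.42, 9)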
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Load a primary 'collection': Landsat imagery.
primary = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterDate('2014-04-01', '2014-06-01') \
.filterBounds(ee.Geometry.Point(-122.092, 37.42))
# Load a secondary 'collection': MODIS imagery.
modSecondary = ee.ImageCollection('MODIS/006/MOD09GA') \
.filterDate('2014-03-01', '2014-07-01')
# Define an allowable time difference: two days in milliseconds.
twoDaysMillis = 2 * 24 * 60 * 60 * 1000
# Create a time filter to define a match as overlapping timestamps.
timeFilter = ee.Filter.Or(
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_start',
'rightField': 'system:time_end'
}),
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_end',
'rightField': 'system:time_start'
})
)
# Define the join.
saveAllJoin = ee.Join.saveAll(**{
'matchesKey': 'terra',
'ordering': 'system:time_start',
'ascending': True
})
# Apply the join.
landsatModis = saveAllJoin.apply(primary, modSecondary, timeFilter)
# Display the result.
print('Join.saveAll:', landsatModis.getInfo())
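# Optional sketch: count the MODIS matches saved on each joined Landsat image
# ('terra' is the matchesKey defined above):
# with_count = landsatModis.map(
#     lambda el: el.set('terra_count', ee.List(el.get('terra')).size()))
# print(with_count.aggregate_array('terra_count').getInfo())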
###Output
Join.saveAll: {'type': 'ImageCollection', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B9', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}}], 'id': 'LANDSAT/LC08/C01/T1_TOA', 'version': 1580044754541352, 'properties': {'system:visualization_0_min': '0.0', 'type_name': 'ImageCollection', 'visualization_1_bands': 'B5,B4,B3', 'thumb': 'https://mw1.google.com/ges/dd/images/LANDSAT_TOA_thumb.png', 'visualization_1_max': '30000.0', 'description': '<p>Landsat 8 Collection 1 Tier 1\n calibrated top-of-atmosphere (TOA) reflectance.\n Calibration coefficients are extracted from the image metadata. See<a href="http://www.sciencedirect.com/science/article/pii/S0034425709000169">\n Chander et al. (2009)</a> for details on the TOA computation.</p></p>\n<p><b>Revisit Interval</b>\n<br>\n 16 days\n</p>\n<p><b>Bands</b>\n<table class="eecat">\n<tr>\n<th scope="col">Name</th>\n<th scope="col">Resolution</th>\n<th scope="col">Wavelength</th>\n<th scope="col">Description</th>\n</tr>\n<tr>\n<td>B1</td>\n<td>\n 30 meters\n</td>\n<td>0.43 - 0.45 µm</td>\n<td><p>Coastal aerosol</p></td>\n</tr>\n<tr>\n<td>B2</td>\n<td>\n 30 meters\n</td>\n<td>0.45 - 0.51 µm</td>\n<td><p>Blue</p></td>\n</tr>\n<tr>\n<td>B3</td>\n<td>\n 30 meters\n</td>\n<td>0.53 - 0.59 µm</td>\n<td><p>Green</p></td>\n</tr>\n<tr>\n<td>B4</td>\n<td>\n 30 meters\n</td>\n<td>0.64 - 0.67 µm</td>\n<td><p>Red</p></td>\n</tr>\n<tr>\n<td>B5</td>\n<td>\n 30 meters\n</td>\n<td>0.85 - 0.88 µm</td>\n<td><p>Near infrared</p></td>\n</tr>\n<tr>\n<td>B6</td>\n<td>\n 30 meters\n</td>\n<td>1.57 - 1.65 µm</td>\n<td><p>Shortwave infrared 1</p></td>\n</tr>\n<tr>\n<td>B7</td>\n<td>\n 30 meters\n</td>\n<td>2.11 - 2.29 µm</td>\n<td><p>Shortwave infrared 2</p></td>\n</tr>\n<tr>\n<td>B8</td>\n<td>\n 15 meters\n</td>\n<td>0.52 - 0.90 µm</td>\n<td><p>Band 8 Panchromatic</p></td>\n</tr>\n<tr>\n<td>B9</td>\n<td>\n 15 meters\n</td>\n<td>1.36 - 1.38 µm</td>\n<td><p>Cirrus</p></td>\n</tr>\n<tr>\n<td>B10</td>\n<td>\n 30 meters\n</td>\n<td>10.60 - 11.19 µm</td>\n<td><p>Thermal infrared 1, resampled from 100m to 30m</p></td>\n</tr>\n<tr>\n<td>B11</td>\n<td>\n 30 meters\n</td>\n<td>11.50 - 12.51 µm</td>\n<td><p>Thermal infrared 2, resampled from 100m to 30m</p></td>\n</tr>\n<tr>\n<td>BQA</td>\n<td>\n</td>\n<td></td>\n<td><p>Landsat Collection 1 QA Bitmask (<a href="https://www.usgs.gov/land-resources/nli/landsat/landsat-collection-1-level-1-quality-assessment-band">See Landsat QA page</a>)</p></td>\n</tr>\n<tr>\n<td colspan=100>\n Bitmask for BQA\n<ul>\n<li>\n Bit 0: Designated Fill\n<ul>\n<li>0: No</li>\n<li>1: Yes</li>\n</ul>\n</li>\n<li>\n Bit 1: Terrain Occlusion\n<ul>\n<li>0: No</li>\n<li>1: Yes</li>\n</ul>\n</li>\n<li>\n Bits 2-3: Radiometric Saturation\n<ul>\n<li>0: No bands contain 
saturation</li>\n<li>1: 1-2 bands contain saturation</li>\n<li>2: 3-4 bands contain saturation</li>\n<li>3: 5 or more bands contain saturation</li>\n</ul>\n</li>\n<li>\n Bit 4: Cloud\n<ul>\n<li>0: No</li>\n<li>1: Yes</li>\n</ul>\n</li>\n<li>\n Bits 5-6: Cloud Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n<li>\n Bits 7-8: Cloud Shadow Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n<li>\n Bits 9-10: Snow / Ice Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n<li>\n Bits 11-12: Cirrus Confidence\n<ul>\n<li>0: Not Determined / Condition does not exist.</li>\n<li>1: Low, (0-33 percent confidence)</li>\n<li>2: Medium, (34-66 percent confidence)</li>\n<li>3: High, (67-100 percent confidence)</li>\n</ul>\n</li>\n</ul>\n</td>\n</tr>\n</table>\n<p><b>Image Properties</b>\n<table class="eecat">\n<tr>\n<th scope="col">Name</th>\n<th scope="col">Type</th>\n<th scope="col">Description</th>\n</tr>\n<tr>\n<td>BPF_NAME_OLI</td>\n<td>STRING</td>\n<td><p>The file name for the Bias Parameter File (BPF) used to generate the product, if applicable. This only applies to products that contain OLI bands.</p></td>\n</tr>\n<tr>\n<td>BPF_NAME_TIRS</td>\n<td>STRING</td>\n<td><p>The file name for the Bias Parameter File (BPF) used to generate the product, if applicable. This only applies to products that contain TIRS bands.</p></td>\n</tr>\n<tr>\n<td>CLOUD_COVER</td>\n<td>DOUBLE</td>\n<td><p>Percentage cloud cover, -1 = not calculated.</p></td>\n</tr>\n<tr>\n<td>CLOUD_COVER_LAND</td>\n<td>DOUBLE</td>\n<td><p>Percentage cloud cover over land, -1 = not calculated.</p></td>\n</tr>\n<tr>\n<td>COLLECTION_CATEGORY</td>\n<td>STRING</td>\n<td><p>Tier of scene. (T1 or T2)</p></td>\n</tr>\n<tr>\n<td>COLLECTION_NUMBER</td>\n<td>DOUBLE</td>\n<td><p>Number of collection.</p></td>\n</tr>\n<tr>\n<td>CPF_NAME</td>\n<td>STRING</td>\n<td><p>Calibration parameter file name.</p></td>\n</tr>\n<tr>\n<td>DATA_TYPE</td>\n<td>STRING</td>\n<td><p>Data type identifier. (L1T or L1G)</p></td>\n</tr>\n<tr>\n<td>DATE_ACQUIRED</td>\n<td>STRING</td>\n<td><p>Image acquisition date. "YYYY-MM-DD"</p></td>\n</tr>\n<tr>\n<td>DATUM</td>\n<td>STRING</td>\n<td><p>Datum used in image creation.</p></td>\n</tr>\n<tr>\n<td>EARTH_SUN_DISTANCE</td>\n<td>DOUBLE</td>\n<td><p>Earth sun distance in astronomical units (AU).</p></td>\n</tr>\n<tr>\n<td>ELEVATION_SOURCE</td>\n<td>STRING</td>\n<td><p>Elevation model source used for standard terrain corrected (L1T) products.</p></td>\n</tr>\n<tr>\n<td>ELLIPSOID</td>\n<td>STRING</td>\n<td><p>Ellipsoid used in image creation.</p></td>\n</tr>\n<tr>\n<td>EPHEMERIS_TYPE</td>\n<td>STRING</td>\n<td><p>Ephemeris data type used to perform geometric correction. 
(Definitive or Predictive)</p></td>\n</tr>\n<tr>\n<td>FILE_DATE</td>\n<td>DOUBLE</td>\n<td><p>File date in milliseconds since epoch.</p></td>\n</tr>\n<tr>\n<td>GEOMETRIC_RMSE_MODEL</td>\n<td>DOUBLE</td>\n<td><p>Combined Root Mean Square Error (RMSE) of the geometric residuals\n(metres) in both across-track and along-track directions\nmeasured on the GCPs used in geometric precision correction.\nNot present in L1G products.</p></td>\n</tr>\n<tr>\n<td>GEOMETRIC_RMSE_MODEL_X</td>\n<td>DOUBLE</td>\n<td><p>RMSE of the X direction geometric residuals (in metres) measured\non the GCPs used in geometric precision correction. Not present in\nL1G products.</p></td>\n</tr>\n<tr>\n<td>GEOMETRIC_RMSE_MODEL_Y</td>\n<td>DOUBLE</td>\n<td><p>RMSE of the Y direction geometric residuals (in metres) measured\non the GCPs used in geometric precision correction. Not present in\nL1G products.</p></td>\n</tr>\n<tr>\n<td>GRID_CELL_SIZE_PANCHROMATIC</td>\n<td>DOUBLE</td>\n<td><p>Grid cell size used in creating the image for the panchromatic band.</p></td>\n</tr>\n<tr>\n<td>GRID_CELL_SIZE_REFLECTIVE</td>\n<td>DOUBLE</td>\n<td><p>Grid cell size used in creating the image for the reflective band.</p></td>\n</tr>\n<tr>\n<td>GRID_CELL_SIZE_THERMAL</td>\n<td>DOUBLE</td>\n<td><p>Grid cell size used in creating the image for the thermal band.</p></td>\n</tr>\n<tr>\n<td>GROUND_CONTROL_POINTS_MODEL</td>\n<td>DOUBLE</td>\n<td><p>The number of ground control points used. Not used in L1GT products.\nValues: 0 - 999 (0 is used for L1T products that have used\nMulti-scene refinement).</p></td>\n</tr>\n<tr>\n<td>GROUND_CONTROL_POINTS_VERSION</td>\n<td>DOUBLE</td>\n<td><p>The number of ground control points used in the verification of\nthe terrain corrected product. Values: -1 to 1615 (-1 = not available)</p></td>\n</tr>\n<tr>\n<td>IMAGE_QUALITY</td>\n<td>DOUBLE</td>\n<td><p>Image quality, 0 = worst, 9 = best, -1 = quality not calculated</p></td>\n</tr>\n<tr>\n<td>IMAGE_QUALITY_OLI</td>\n<td>DOUBLE</td>\n<td><p>The composite image quality for the OLI bands. Values: 9 = Best. 1 = Worst. 0 = Image quality not calculated. This parameter is only present if OLI bands are present in the product.</p></td>\n</tr>\n<tr>\n<td>IMAGE_QUALITY_TIRS</td>\n<td>DOUBLE</td>\n<td><p>The composite image quality for the TIRS bands. Values: 9 = Best. 1 = Worst. 0 = Image quality not calculated. 
This parameter is only present if OLI bands are present in the product.</p></td>\n</tr>\n<tr>\n<td>K1_CONSTANT_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Calibration K1 constant for Band 10 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>K1_CONSTANT_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Calibration K1 constant for Band 11 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>K2_CONSTANT_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Calibration K2 constant for Band 10 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>K2_CONSTANT_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Calibration K2 constant for Band 11 radiance to temperature conversion.</p></td>\n</tr>\n<tr>\n<td>LANDSAT_PRODUCT_ID</td>\n<td>STRING</td>\n<td><p>The naming convention of each Landsat Collection 1 Level-1 image based\non acquisition parameters and processing parameters.</p>\n<p>Format: LXSS_LLLL_PPPRRR_YYYYMMDD_yyyymmdd_CC_TX</p>\n<ul>\n<li>L = Landsat</li>\n<li>X = Sensor (O = Operational Land Imager,\nT = Thermal Infrared Sensor, C = Combined OLI/TIRS)</li>\n<li>SS = Satellite (08 = Landsat 8)</li>\n<li>LLLL = Processing Correction Level (L1TP = precision and terrain,\nL1GT = systematic terrain, L1GS = systematic)</li>\n<li>PPP = WRS Path</li>\n<li>RRR = WRS Row</li>\n<li>YYYYMMDD = Acquisition Date expressed in Year, Month, Day</li>\n<li>yyyymmdd = Processing Date expressed in Year, Month, Day</li>\n<li>CC = Collection Number (01)</li>\n<li>TX = Collection Category (RT = Real Time, T1 = Tier 1, T2 = Tier 2)</li>\n</ul></td>\n</tr>\n<tr>\n<td>LANDSAT_SCENE_ID</td>\n<td>STRING</td>\n<td><p>The Pre-Collection naming convention of each image is based on acquisition\nparameters. This was the naming convention used prior to Collection 1.</p>\n<p>Format: LXSPPPRRRYYYYDDDGSIVV</p>\n<ul>\n<li>L = Landsat</li>\n<li>X = Sensor (O = Operational Land Imager, T = Thermal Infrared Sensor, C = Combined OLI/TIRS)</li>\n<li>S = Satellite (08 = Landsat 8)</li>\n<li>PPP = WRS Path</li>\n<li>RRR = WRS Row</li>\n<li>YYYY = Year of Acquisition</li>\n<li>DDD = Julian Day of Acquisition</li>\n<li>GSI = Ground Station Identifier</li>\n<li>VV = Version</li>\n</ul></td>\n</tr>\n<tr>\n<td>MAP_PROJECTION</td>\n<td>STRING</td>\n<td><p>Projection used to represent the 3-dimensional surface of the earth for the Level-1 product.</p></td>\n</tr>\n<tr>\n<td>NADIR_OFFNADIR</td>\n<td>STRING</td>\n<td><p>Nadir or Off-Nadir condition of the scene.</p></td>\n</tr>\n<tr>\n<td>ORIENTATION</td>\n<td>STRING</td>\n<td><p>Orientation used in creating the image. 
Values: NOMINAL = Nominal Path, NORTH_UP = North Up, TRUE_NORTH = True North, USER = User</p></td>\n</tr>\n<tr>\n<td>PANCHROMATIC_LINES</td>\n<td>DOUBLE</td>\n<td><p>Number of product lines for the panchromatic band.</p></td>\n</tr>\n<tr>\n<td>PANCHROMATIC_SAMPLES</td>\n<td>DOUBLE</td>\n<td><p>Number of product samples for the panchromatic bands.</p></td>\n</tr>\n<tr>\n<td>PROCESSING_SOFTWARE_VERSION</td>\n<td>STRING</td>\n<td><p>Name and version of the processing software used to generate the L1 product.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 1.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 10.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 11.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 2.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 3.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 4.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 5.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_6</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 6.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 7.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 8.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_ADD_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated DN to radiance for Band 9.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 1 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_10</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 10 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_11</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 11 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 2 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 3 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 4 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 5 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_6</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 6 DN to 
radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 7 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 8 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>RADIANCE_MULT_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative rescaling factor used to convert calibrated Band 9 DN to radiance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 1 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 2 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 3 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 4 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Additive rescaling factor used to convert calibrated Band 5 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 7 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 8 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_ADD_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Minimum achievable spectral reflectance value for Band 8.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_1</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 1 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_2</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 2 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_3</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 3 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_4</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 4 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_5</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 5 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_6</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 6 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_7</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 7 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_8</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 8 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTANCE_MULT_BAND_9</td>\n<td>DOUBLE</td>\n<td><p>Multiplicative factor used to convert calibrated Band 9 DN to reflectance.</p></td>\n</tr>\n<tr>\n<td>REFLECTIVE_LINES</td>\n<td>DOUBLE</td>\n<td><p>Number of product lines for the reflective bands.</p></td>\n</tr>\n<tr>\n<td>REFLECTIVE_SAMPLES</td>\n<td>DOUBLE</td>\n<td><p>Number of product samples for the reflective bands.</p></td>\n</tr>\n<tr>\n<td>REQUEST_ID</td>\n<td>STRING</td>\n<td><p>Request id, nnnyymmdd0000_0000</p>\n<ul>\n<li>nnn = node number</li>\n<li>yy = year</li>\n<li>mm 
= month</li>\n<li>dd = day</li>\n</ul></td>\n</tr>\n<tr>\n<td>RESAMPLING_OPTION</td>\n<td>STRING</td>\n<td><p>Resampling option used in creating the image.</p></td>\n</tr>\n<tr>\n<td>RLUT_FILE_NAME</td>\n<td>STRING</td>\n<td><p>The file name for the Response Linearization Lookup Table (RLUT) used to generate the product, if applicable.</p></td>\n</tr>\n<tr>\n<td>ROLL_ANGLE</td>\n<td>DOUBLE</td>\n<td><p>The amount of spacecraft roll angle at the scene center.</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_1</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 1 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_10</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 10 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_11</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 11 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_2</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 2 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_3</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 3 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_4</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 4 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_5</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 5 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_6</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 6 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_7</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 7 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_8</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 8 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SATURATION_BAND_9</td>\n<td>STRING</td>\n<td><p>Flag indicating saturated pixels for band 9 ('Y'/'N')</p></td>\n</tr>\n<tr>\n<td>SCENE_CENTER_TIME</td>\n<td>STRING</td>\n<td><p>Scene center time of acquired image. HH:MM:SS.SSSSSSSZ</p>\n<ul>\n<li>HH = Hour (00-23)</li>\n<li>MM = Minutes</li>\n<li>SS.SSSSSSS = Fractional seconds</li>\n<li>Z = "Zulu" time (same as GMT)</li>\n</ul></td>\n</tr>\n<tr>\n<td>SENSOR_ID</td>\n<td>STRING</td>\n<td><p>Sensor used to capture data.</p></td>\n</tr>\n<tr>\n<td>SPACECRAFT_ID</td>\n<td>STRING</td>\n<td><p>Spacecraft identification.</p></td>\n</tr>\n<tr>\n<td>STATION_ID</td>\n<td>STRING</td>\n<td><p>Ground Station/Organisation that received the data.</p></td>\n</tr>\n<tr>\n<td>SUN_AZIMUTH</td>\n<td>DOUBLE</td>\n<td><p>Sun azimuth angle in degrees for the image center location at the image centre acquisition time.</p></td>\n</tr>\n<tr>\n<td>SUN_ELEVATION</td>\n<td>DOUBLE</td>\n<td><p>Sun elevation angle in degrees for the image center location at the image centre acquisition time.</p></td>\n</tr>\n<tr>\n<td>TARGET_WRS_PATH</td>\n<td>DOUBLE</td>\n<td><p>Nearest WRS-2 path to the line-of-sight scene center of the image.</p></td>\n</tr>\n<tr>\n<td>TARGET_WRS_ROW</td>\n<td>DOUBLE</td>\n<td><p>Nearest WRS-2 row to the line-of-sight scene center of the image. 
Rows 880-889 and 990-999 are reserved for the polar regions where it is undefined in the WRS-2.</p></td>\n</tr>\n<tr>\n<td>THERMAL_LINES</td>\n<td>DOUBLE</td>\n<td><p>Number of product lines for the thermal band.</p></td>\n</tr>\n<tr>\n<td>THERMAL_SAMPLES</td>\n<td>DOUBLE</td>\n<td><p>Number of product samples for the thermal band.</p></td>\n</tr>\n<tr>\n<td>TIRS_SSM_MODEL</td>\n<td>STRING</td>\n<td><p>Due to an anomalous condition on the Thermal Infrared\nSensor (TIRS) Scene Select Mirror (SSM) encoder electronics,\nthis field has been added to indicate which model was used to process the data.\n(Actual, Preliminary, Final)</p></td>\n</tr>\n<tr>\n<td>TIRS_SSM_POSITION_STATUS</td>\n<td>STRING</td>\n<td><p>TIRS SSM position status.</p></td>\n</tr>\n<tr>\n<td>TIRS_STRAY_LIGHT_CORRECTION_SOURCE</td>\n<td>STRING</td>\n<td><p>TIRS stray light correction source.</p></td>\n</tr>\n<tr>\n<td>TRUNCATION_OLI</td>\n<td>STRING</td>\n<td><p>Region of OLCI truncated.</p></td>\n</tr>\n<tr>\n<td>UTM_ZONE</td>\n<td>DOUBLE</td>\n<td><p>UTM zone number used in product map projection.</p></td>\n</tr>\n<tr>\n<td>WRS_PATH</td>\n<td>DOUBLE</td>\n<td><p>The WRS orbital path number (001 - 251).</p></td>\n</tr>\n<tr>\n<td>WRS_ROW</td>\n<td>DOUBLE</td>\n<td><p>Landsat satellite WRS row (001-248).</p></td>\n</tr>\n</table>\n<style>\n table.eecat {\n border: 1px solid black;\n border-collapse: collapse;\n font-size: 13px;\n }\n table.eecat td, tr, th {\n text-align: left; vertical-align: top;\n border: 1px solid gray; padding: 3px;\n }\n td.nobreak { white-space: nowrap; }\n</style>', 'source_tags': ['landsat', 'usgs'], 'visualization_1_name': 'Near Infrared (543)', 'visualization_0_max': '30000.0', 'title': 'USGS Landsat 8 Collection 1 Tier 1 TOA Reflectance', 'visualization_0_gain': '500.0', 'system:visualization_2_max': '30000.0', 'product_tags': ['global', 'toa', 'tier1', 'oli_tirs', 'c1', 'radiance', 'lc8', 'l8', 't1'], 'visualization_1_gain': '500.0', 'provider': 'USGS/Google', 'visualization_1_min': '0.0', 'system:visualization_2_name': 'Shortwave Infrared (753)', 'visualization_0_min': '0.0', 'system:visualization_1_bands': 'B5,B4,B3', 'system:visualization_1_max': '30000.0', 'visualization_0_name': 'True Color (432)', 'date_range': [1365638400000, 1578700800000], 'visualization_2_bands': 'B7,B5,B3', 'visualization_2_name': 'Shortwave Infrared (753)', 'period': 0, 'system:visualization_2_min': '0.0', 'system:visualization_0_bands': 'B4,B3,B2', 'visualization_2_min': '0.0', 'visualization_2_gain': '500.0', 'provider_url': 'http://landsat.usgs.gov/', 'sample': 'https://mw1.google.com/ges/dd/images/LANDSAT_TOA_sample.png', 'system:visualization_1_name': 'Near Infrared (543)', 'tags': ['landsat', 'usgs', 'global', 'toa', 'tier1', 'oli_tirs', 'c1', 'radiance', 'lc8', 'l8', 't1'], 'system:visualization_0_max': '30000.0', 'visualization_2_max': '30000.0', 'system:visualization_2_bands': 'B7,B5,B3', 'system:visualization_1_min': '0.0', 'system:visualization_0_name': 'True Color (432)', 'visualization_0_bands': 'B4,B3,B2'}, 'features': [{'type': 'Image', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 
'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [15341, 15581], 'crs': 'EPSG:32610', 'crs_transform': [15, 0, 460792.5, 0, -15, 4264207.5]}, {'id': 'B9', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [7671, 7791], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 460785, 0, -30, 4264215]}], 'version': 1580044754541352, 'id': 'LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140403', 'properties': {'terra': [{'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 
0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 
'version': 1504497547369101, 'id': 'MODIS/006/MOD09GA/2014_04_01', 'properties': {'system:time_start': 1396310400000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1396396800000, 'system:asset_size': 24622391229, 'system:index': '2014_04_01'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 
[Raw image-collection metadata, apparently the printed result of an Earth Engine getInfo() call: daily MODIS/006/MOD09GA surface-reflectance images for 2014_04_02 through 2014_05_06, each with the same 1 km bands (num_observations_1km, state_1km, SensorZenith, SensorAzimuth, Range, SolarZenith, SolarAzimuth, gflags, orbit_pnt, granule_pnt) and 500 m bands (num_observations_500m, sur_refl_b01 through sur_refl_b07, QC_500m, obscov_500m, iobs_res, q_scan) on the SR-ORG:6974 sinusoidal grid, interleaved with Landsat 8 TOA scenes LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140403, _20140419 and _20140505 (bands B1 through B11 plus BQA, EPSG:32610, 30 m resolution with a 15 m panchromatic band B8), each carrying full Level-1 metadata (radiance and reflectance scaling factors, sun azimuth and elevation, cloud cover, geometric RMSE, ground control points, scene and product identifiers) and a 'terra' property listing the matching MODIS daily images.]
{'system:time_start': 1399334400000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1399420800000, 'system:asset_size': 22777088609, 'system:index': '2014_05_06'}}], 'RADIANCE_MULT_BAND_5': 0.006009500008076429, 'RADIANCE_MULT_BAND_6': 0.0014944999711588025, 'RADIANCE_MULT_BAND_3': 0.011645999737083912, 'RADIANCE_MULT_BAND_4': 0.009820199571549892, 'RADIANCE_MULT_BAND_1': 0.012341000139713287, 'RADIANCE_MULT_BAND_2': 0.012637999840080738, 'K2_CONSTANT_BAND_11': 1201.1441650390625, 'K2_CONSTANT_BAND_10': 1321.078857421875, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-121.23130694632096, 38.20890167865334], [-122.47808618435543, 38.442905249886934], [-122.9416241270812, 38.52616106461051], [-122.94257304228283, 38.52467261055228], [-122.94438908458714, 38.518980549130696], [-122.9480116995035, 38.506814434795785], [-123.42945547884437, 36.807365583536495], [-123.42944546960602, 36.80558241062019], [-121.35650439967876, 36.40925950162913], [-121.35462928167787, 36.409233706436694], [-121.2209704109367, 36.84467814167406], [-121.09380664017438, 37.25395464587639], [-120.98744109880928, 37.59368464704816], [-120.92971288838983, 37.77715018781449], [-120.874792117132, 37.95100539896876], [-120.85505283148036, 38.013433126642376], [-120.83525753541217, 38.07639805962481], [-120.81911222539682, 38.12806656677994], [-120.8214394607643, 38.1287277611953], [-120.83942642052946, 38.13230813141151], [-121.23130694632096, 38.20890167865334]]}, 'REFLECTIVE_SAMPLES': 7671, 'SUN_AZIMUTH': 134.8988800048828, 'CPF_NAME': 'LC08CPF_20140401_20140630_01.01', 'DATE_ACQUIRED': '2014-05-05', 'ELLIPSOID': 'WGS84', 'google:registration_offset_x': 0, 'google:registration_offset_y': 0, 'STATION_ID': 'LGN', 'RESAMPLING_OPTION': 'CUBIC_CONVOLUTION', 'ORIENTATION': 'NORTH_UP', 'WRS_ROW': 34, 'RADIANCE_MULT_BAND_9': 0.0023485999554395676, 'TARGET_WRS_ROW': 34, 'RADIANCE_MULT_BAND_7': 0.0005037300288677216, 'RADIANCE_MULT_BAND_8': 0.011114000342786312, 'IMAGE_QUALITY_TIRS': 9, 'TRUNCATION_OLI': 'UPPER', 'CLOUD_COVER': 24.25, 'GEOMETRIC_RMSE_VERIFY': 3.5369999408721924, 'COLLECTION_CATEGORY': 'T1', 'GRID_CELL_SIZE_REFLECTIVE': 30, 'CLOUD_COVER_LAND': 30.09000015258789, 'GEOMETRIC_RMSE_MODEL': 7.320000171661377, 'COLLECTION_NUMBER': 1, 'IMAGE_QUALITY_OLI': 9, 'LANDSAT_SCENE_ID': 'LC80440342014125LGN01', 'WRS_PATH': 44, 'google:registration_count': 0, 'PANCHROMATIC_SAMPLES': 15341, 'PANCHROMATIC_LINES': 15581, 'GEOMETRIC_RMSE_MODEL_Y': 4.623000144958496, 'REFLECTIVE_LINES': 7791, 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE': 'TIRS', 'GEOMETRIC_RMSE_MODEL_X': 5.675000190734863, 'system:asset_size': 1263423627, 'system:index': 'LC08_044034_20140505', 'REFLECTANCE_ADD_BAND_1': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_2': -0.10000000149011612, 'DATUM': 'WGS84', 'REFLECTANCE_ADD_BAND_3': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_4': -0.10000000149011612, 'RLUT_FILE_NAME': 'LC08RLUT_20130211_20150302_01_11.h5', 'REFLECTANCE_ADD_BAND_5': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_6': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_7': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_8': -0.10000000149011612, 'BPF_NAME_TIRS': 'LT8BPF20140505181139_20140505190416.01', 'GROUND_CONTROL_POINTS_VERSION': 4, 'DATA_TYPE': 'L1TP', 'UTM_ZONE': 10, 'LANDSAT_PRODUCT_ID': 'LC08_L1TP_044034_20140505_20170307_01_T1', 'REFLECTANCE_ADD_BAND_9': -0.10000000149011612, 'google:registration_ratio': 0, 'GRID_CELL_SIZE_PANCHROMATIC': 15, 
'RADIANCE_ADD_BAND_4': -49.10100173950195, 'REFLECTANCE_MULT_BAND_7': 1.9999999494757503e-05, 'system:time_start': 1399315542790, 'RADIANCE_ADD_BAND_5': -30.047359466552734, 'REFLECTANCE_MULT_BAND_6': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_6': -7.472509860992432, 'REFLECTANCE_MULT_BAND_9': 1.9999999494757503e-05, 'PROCESSING_SOFTWARE_VERSION': 'LPGS_2.7.0', 'RADIANCE_ADD_BAND_7': -2.518630027770996, 'REFLECTANCE_MULT_BAND_8': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_1': -61.70698165893555, 'RADIANCE_ADD_BAND_2': -63.18870162963867, 'RADIANCE_ADD_BAND_3': -58.227840423583984, 'REFLECTANCE_MULT_BAND_1': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_8': -55.56882095336914, 'REFLECTANCE_MULT_BAND_3': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_9': -11.743189811706543, 'REFLECTANCE_MULT_BAND_2': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_5': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_4': 1.9999999494757503e-05, 'THERMAL_LINES': 7791, 'TIRS_SSM_POSITION_STATUS': 'NOMINAL', 'GRID_CELL_SIZE_THERMAL': 30, 'NADIR_OFFNADIR': 'NADIR', 'RADIANCE_ADD_BAND_11': 0.10000000149011612, 'REQUEST_ID': '0501703064572_00027', 'EARTH_SUN_DISTANCE': 1.0086472034454346, 'TIRS_SSM_MODEL': 'ACTUAL', 'FILE_DATE': 1488903671000, 'SCENE_CENTER_TIME': '18:45:42.7916370Z', 'SUN_ELEVATION': 62.584102630615234, 'BPF_NAME_OLI': 'LO8BPF20140505183026_20140505190323.01', 'RADIANCE_ADD_BAND_10': 0.10000000149011612, 'ROLL_ANGLE': -0.0010000000474974513, 'K1_CONSTANT_BAND_10': 774.8853149414062, 'SATURATION_BAND_1': 'Y', 'SATURATION_BAND_2': 'Y', 'SATURATION_BAND_3': 'Y', 'SATURATION_BAND_4': 'Y', 'SATURATION_BAND_5': 'Y', 'MAP_PROJECTION': 'UTM', 'SATURATION_BAND_6': 'Y', 'SENSOR_ID': 'OLI_TIRS', 'SATURATION_BAND_7': 'Y', 'K1_CONSTANT_BAND_11': 480.8883056640625, 'SATURATION_BAND_8': 'N', 'SATURATION_BAND_9': 'N', 'TARGET_WRS_PATH': 44, 'RADIANCE_MULT_BAND_11': 0.00033420001273043454, 'RADIANCE_MULT_BAND_10': 0.00033420001273043454, 'GROUND_CONTROL_POINTS_MODEL': 289, 'SPACECRAFT_ID': 'LANDSAT_8', 'ELEVATION_SOURCE': 'GLS2000', 'THERMAL_SAMPLES': 7671, 'GROUND_CONTROL_POINTS_VERIFY': 62}}, {'type': 'Image', 'bands': [{'id': 'B1', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B2', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B3', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B4', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B5', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B6', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B7', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B8', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [15341, 15601], 'crs': 'EPSG:32610', 'crs_transform': [15, 0, 464692.5, 0, -15, 4264507.5]}, {'id': 'B9', 'data_type': {'type': 
'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B10', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'B11', 'data_type': {'type': 'PixelType', 'precision': 'float'}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}, {'id': 'BQA', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [7671, 7801], 'crs': 'EPSG:32610', 'crs_transform': [30, 0, 464685, 0, -30, 4264515]}], 'version': 1580044754541352, 'id': 'LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140521', 'properties': {'terra': [{'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 
'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504500914552187, 'id': 'MODIS/006/MOD09GA/2014_05_19', 'properties': {'system:time_start': 1400457600000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400544000000, 'system:asset_size': 23343381618, 'system:index': '2014_05_19'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 
'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 
10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504503270957152, 'id': 'MODIS/006/MOD09GA/2014_05_20', 'properties': {'system:time_start': 1400544000000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400630400000, 'system:asset_size': 22344174886, 'system:index': '2014_05_20'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': 
{'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504499085062986, 'id': 'MODIS/006/MOD09GA/2014_05_21', 'properties': {'system:time_start': 1400630400000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, 
-90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400716800000, 'system:asset_size': 23263811253, 'system:index': '2014_05_21'}}, {'type': 'Image', 'bands': [{'id': 'num_observations_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'state_1km', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SensorAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'Range', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 65535}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarZenith', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'SolarAzimuth', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'gflags', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'orbit_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'granule_pnt', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [43200, 21600], 'crs': 'SR-ORG:6974', 'crs_transform': [926.625433056, 0, -20015109.354, 0, -926.625433055, 10007554.677]}, {'id': 'num_observations_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b01', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b02', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b03', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 
0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b04', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b05', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b06', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'sur_refl_b07', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -32768, 'max': 32767}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'QC_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 4294967295}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'obscov_500m', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': -128, 'max': 127}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'iobs_res', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}, {'id': 'q_scan', 'data_type': {'type': 'PixelType', 'precision': 'int', 'min': 0, 'max': 255}, 'dimensions': [86400, 43200], 'crs': 'SR-ORG:6974', 'crs_transform': [463.312716528, 0, -20015109.354, 0, -463.312716527, 10007554.677]}], 'version': 1504500880529169, 'id': 'MODIS/006/MOD09GA/2014_05_22', 'properties': {'system:time_start': 1400716800000, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]}, 'system:time_end': 1400803200000, 'system:asset_size': 22511022912, 'system:index': '2014_05_22'}}], 'RADIANCE_MULT_BAND_5': 0.005967800039798021, 'RADIANCE_MULT_BAND_6': 0.0014841000083833933, 'RADIANCE_MULT_BAND_3': 0.01156499981880188, 'RADIANCE_MULT_BAND_4': 0.009752199985086918, 'RADIANCE_MULT_BAND_1': 0.012256000190973282, 'RADIANCE_MULT_BAND_2': 0.012550000101327896, 'K2_CONSTANT_BAND_11': 1201.1441650390625, 'K2_CONSTANT_BAND_10': 1321.078857421875, 'system:footprint': {'type': 'LinearRing', 'coordinates': [[-120.9221114406814, 37.68244619012667], [-120.89633560745239, 37.76390614408945], [-120.83746336237951, 37.94945600779687], [-120.82098495481172, 38.00141006480963], [-120.78179975086263, 38.125049388247994], [-120.78173908398541, 38.12705556142276], [-120.79512978776856, 38.12976361438609], [-121.73406240469221, 38.31178421248136], [-122.79279800879766, 38.50701449179694], [-122.88876971795369, 38.5241778933743], [-122.9038553878929, 38.52682543966657], [-123.3934724535376, 36.80801002145629], [-123.3934642377511, 36.80639615821769], [-123.14252377291987, 36.76031119223474], [-121.39556579260922, 36.42323515794831], [-121.3201532766815, 36.40807244280241], [-121.31926234184606, 36.40876798117092], [-121.1964526203538, 36.807060467012924], [-121.07492303846685, 37.19674766434507], [-120.94691203296651, 
37.60392056819356], [-120.9221114406814, 37.68244619012667]]}, 'REFLECTIVE_SAMPLES': 7671, 'SUN_AZIMUTH': 129.40968322753906, 'CPF_NAME': 'LC08CPF_20140401_20140630_01.01', 'DATE_ACQUIRED': '2014-05-21', 'ELLIPSOID': 'WGS84', 'google:registration_offset_x': 93.1732177734375, 'google:registration_offset_y': -389.06402587890625, 'STATION_ID': 'LGN', 'RESAMPLING_OPTION': 'CUBIC_CONVOLUTION', 'ORIENTATION': 'NORTH_UP', 'WRS_ROW': 34, 'RADIANCE_MULT_BAND_9': 0.0023324000649154186, 'TARGET_WRS_ROW': 34, 'RADIANCE_MULT_BAND_7': 0.0005002400139346719, 'RADIANCE_MULT_BAND_8': 0.011037000454962254, 'IMAGE_QUALITY_TIRS': 9, 'TRUNCATION_OLI': 'UPPER', 'CLOUD_COVER': 35.439998626708984, 'GEOMETRIC_RMSE_VERIFY': 3.2890000343322754, 'COLLECTION_CATEGORY': 'T1', 'GRID_CELL_SIZE_REFLECTIVE': 30, 'CLOUD_COVER_LAND': 14.020000457763672, 'GEOMETRIC_RMSE_MODEL': 5.670000076293945, 'COLLECTION_NUMBER': 1, 'IMAGE_QUALITY_OLI': 9, 'LANDSAT_SCENE_ID': 'LC80440342014141LGN01', 'WRS_PATH': 44, 'google:registration_count': 66, 'PANCHROMATIC_SAMPLES': 15341, 'PANCHROMATIC_LINES': 15601, 'GEOMETRIC_RMSE_MODEL_Y': 3.8980000019073486, 'REFLECTIVE_LINES': 7801, 'TIRS_STRAY_LIGHT_CORRECTION_SOURCE': 'TIRS', 'GEOMETRIC_RMSE_MODEL_X': 4.117000102996826, 'system:asset_size': 1261385761, 'system:index': 'LC08_044034_20140521', 'REFLECTANCE_ADD_BAND_1': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_2': -0.10000000149011612, 'DATUM': 'WGS84', 'REFLECTANCE_ADD_BAND_3': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_4': -0.10000000149011612, 'RLUT_FILE_NAME': 'LC08RLUT_20130211_20150302_01_11.h5', 'REFLECTANCE_ADD_BAND_5': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_6': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_7': -0.10000000149011612, 'REFLECTANCE_ADD_BAND_8': -0.10000000149011612, 'BPF_NAME_TIRS': 'LT8BPF20140521180614_20140521190408.02', 'GROUND_CONTROL_POINTS_VERSION': 4, 'DATA_TYPE': 'L1TP', 'UTM_ZONE': 10, 'LANDSAT_PRODUCT_ID': 'LC08_L1TP_044034_20140521_20170307_01_T1', 'REFLECTANCE_ADD_BAND_9': -0.10000000149011612, 'google:registration_ratio': 0.4370861053466797, 'GRID_CELL_SIZE_PANCHROMATIC': 15, 'RADIANCE_ADD_BAND_4': -48.76087951660156, 'REFLECTANCE_MULT_BAND_7': 1.9999999494757503e-05, 'system:time_start': 1400697934830, 'RADIANCE_ADD_BAND_5': -29.839229583740234, 'REFLECTANCE_MULT_BAND_6': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_6': -7.420740127563477, 'REFLECTANCE_MULT_BAND_9': 1.9999999494757503e-05, 'PROCESSING_SOFTWARE_VERSION': 'LPGS_2.7.0', 'RADIANCE_ADD_BAND_7': -2.501189947128296, 'REFLECTANCE_MULT_BAND_8': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_1': -61.279541015625, 'RADIANCE_ADD_BAND_2': -62.75099182128906, 'RADIANCE_ADD_BAND_3': -57.824501037597656, 'REFLECTANCE_MULT_BAND_1': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_8': -55.18389892578125, 'REFLECTANCE_MULT_BAND_3': 1.9999999494757503e-05, 'RADIANCE_ADD_BAND_9': -11.661849975585938, 'REFLECTANCE_MULT_BAND_2': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_5': 1.9999999494757503e-05, 'REFLECTANCE_MULT_BAND_4': 1.9999999494757503e-05, 'THERMAL_LINES': 7801, 'TIRS_SSM_POSITION_STATUS': 'NOMINAL', 'GRID_CELL_SIZE_THERMAL': 30, 'NADIR_OFFNADIR': 'NADIR', 'RADIANCE_ADD_BAND_11': 0.10000000149011612, 'REQUEST_ID': '0501703064217_00034', 'EARTH_SUN_DISTANCE': 1.0121588706970215, 'TIRS_SSM_MODEL': 'ACTUAL', 'FILE_DATE': 1488873846000, 'SCENE_CENTER_TIME': '18:45:34.8277940Z', 'SUN_ELEVATION': 65.65296173095703, 'BPF_NAME_OLI': 'LO8BPF20140521183116_20140521190315.02', 'RADIANCE_ADD_BAND_10': 0.10000000149011612, 'ROLL_ANGLE': -0.0010000000474974513, 
'K1_CONSTANT_BAND_10': 774.8853149414062, 'SATURATION_BAND_1': 'Y', 'SATURATION_BAND_2': 'Y', 'SATURATION_BAND_3': 'Y', 'SATURATION_BAND_4': 'Y', 'SATURATION_BAND_5': 'Y', 'MAP_PROJECTION': 'UTM', 'SATURATION_BAND_6': 'Y', 'SENSOR_ID': 'OLI_TIRS', 'SATURATION_BAND_7': 'Y', 'K1_CONSTANT_BAND_11': 480.8883056640625, 'SATURATION_BAND_8': 'N', 'SATURATION_BAND_9': 'N', 'TARGET_WRS_PATH': 44, 'RADIANCE_MULT_BAND_11': 0.00033420001273043454, 'RADIANCE_MULT_BAND_10': 0.00033420001273043454, 'GROUND_CONTROL_POINTS_MODEL': 404, 'SPACECRAFT_ID': 'LANDSAT_8', 'ELEVATION_SOURCE': 'GLS2000', 'THERMAL_SAMPLES': 7671, 'GROUND_CONTROL_POINTS_VERIFY': 150}}]}
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks whether the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
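###Markdown
A minimal usage sketch (not in the original notebook) of the `Map` methods mentioned above; the center coordinates, zoom level, and the `"HYBRID"` basemap name are assumptions for illustration only.
###Code
# Illustrative sketch: recenter the map and add an extra basemap (assumed values)
Map.setCenter(-122.092, 37.42, 8)  # lon, lat, zoom
Map.add_basemap("HYBRID")  # add a hybrid satellite basemap on top of the default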
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a primary 'collection': Landsat imagery.
primary = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterDate('2014-04-01', '2014-06-01') \
.filterBounds(ee.Geometry.Point(-122.092, 37.42))
# Load a secondary 'collection': MODIS imagery.
modSecondary = ee.ImageCollection('MODIS/006/MOD09GA') \
.filterDate('2014-03-01', '2014-07-01')
# Define an allowable time difference: two days in milliseconds.
twoDaysMillis = 2 * 24 * 60 * 60 * 1000
# Create a time filter to define a match as overlapping timestamps.
timeFilter = ee.Filter.Or(
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_start',
'rightField': 'system:time_end'
}),
ee.Filter.maxDifference(**{
'difference': twoDaysMillis,
'leftField': 'system:time_end',
'rightField': 'system:time_start'
})
)
# Define the join.
saveAllJoin = ee.Join.saveAll(**{
'matchesKey': 'terra',
'ordering': 'system:time_start',
'ascending': True
})
# Apply the join.
landsatModis = saveAllJoin.apply(primary, modSecondary, timeFilter)
# Display the result.
print('Join.saveAll:', landsatModis.getInfo())
###Output
_____no_output_____
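###Markdown
As a hedged sketch (not part of the original script), the MODIS images matched to each Landsat image are stored as a list under the `'terra'` property by the join above and can be inspected as shown below; the variable names are illustrative.
###Code
# Pull the matched MODIS ('terra') images off the first joined Landsat image
first = ee.Image(landsatModis.first())
terra_matches = ee.List(first.get('terra'))
print('Matched MODIS images:', terra_matches.size().getInfo())
print('First match index:', ee.Image(terra_matches.get(0)).get('system:index').getInfo())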
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____ |
ipynb/sdata_testprogram.ipynb | ###Markdown
Test Program
###Code
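# Note: the classes used below (TestProgram, TestSeries, Test) come from the sdata
# package; the exact import path is an assumption and may differ between versions:
# from sdata import TestProgram, TestSeries, Test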
tpname = "ALU"
tpuuid = "a0000000ca6238bea16732ce88c1922f"
project = "Testprogram ALU"
tp = TestProgram(name=tpname,
uuid=tpuuid,
project=project)
tp.metadata.df
assert tp.uuid == tpuuid
assert tp.name == tpname
assert tp.uuid_testprogram == tpuuid
assert tp.name_testprogram == tpname
###Output
_____no_output_____
###Markdown
Test Series
###Code
testtype="BB"
tsname = "Testseries S1"
tsuuid = "b1111111ca6238bea16732ce88c1922f"
ts = TestSeries(name=tsname,
uuid_testseries=tsuuid,
name_testseries=tsname,
uuid_testprogram=tpuuid,
name_testprogram=tpname,
testtype=testtype
)
print(ts)
print(ts.metadata.df.value)
assert ts.name == tsname
assert ts.uuid_testseries == tsuuid
assert ts.name_testseries == tsname
assert ts.uuid_testprogram == tpuuid
assert ts.name_testprogram == tpname
assert ts.testtype == testtype
ts.uuid_testseries, tsuuid
tsname = "Testseries S1"
tsuuid = "b1111111ca6238bea16732ce88c1922f"
testtype = "UT"
ts = tp.gen_testseries(name=tsname, uuid=tsuuid, testtype=testtype)
ts.metadata.df
assert ts.name == tsname
assert ts.uuid_testprogram == tpuuid
assert ts.name_testprogram == tpname
assert ts.uuid_testseries == tsuuid
assert ts.name_testseries == tsname
assert ts.testtype == testtype
###Output
_____no_output_____
###Markdown
Test
###Code
tname = "Test S1-001"
tuuid = "c2222222222238bea16732ce88c19221"
testtype="BB"
t = ts.gen_test(name=tname,
uuid=tuuid)
t = Test(name=tname,
uuid_testseries=tsuuid,
name_testseries=tsname,
uuid_testprogram=tpuuid,
name_testprogram=tpname,
testtype="BB",
)
print(t)
assert t.name == tname
assert t.uuid_testseries == tsuuid
assert t.name_testseries == tsname
assert t.uuid_testprogram == tpuuid
assert t.name_testprogram == tpname
assert t.testtype == testtype
t.metadata.df
print(t)
assert t.name == tname
assert t.uuid_testprogram == tpuuid
assert t.name_testprogram == tpname
assert t.uuid_testseries == tsuuid
assert t.name_testseries == tsname
assert t.testtype == testtype
t.uuid_testseries, tsuuid, t.uuid_testseries == tsuuid
t.metadata.df
t.metadata.keys()
tp.name, tp.name_testprogram, ts.name_testprogram, t.name_testprogram
tp.uuid, tp.uuid_testprogram, ts.uuid_testprogram, t.uuid_testprogram
ts.name_testseries, t.name_testseries
###Output
_____no_output_____ |
notebook/bcdevops-repos.ipynb | ###Markdown
BC Devops This notebook combines bcdevops stats from different CSVs into one file
###Code
import pandas as pd
import numpy as np
from functions import parse_file, left_merge, count_repositories, format_repolint_results, print_to_csv
pd.set_option('display.max_columns', None)
# Import csvs to be combined
df = parse_file('repository-basics.csv', 'bcdevops')
df_pr = parse_file('repository-pr.csv', 'bcdevops', df)
df_repo_1 = parse_file("repository-details-1.csv", 'bcdevops', df)
df_repo_2 = parse_file("repository-details-2.csv", 'bcdevops', df)
repolint_df = parse_file('repolint-results.csv', 'bcdevops', df)
# Repolint data needs formatting to convert strings to booleans
formatted_repolint_results = format_repolint_results(repolint_df)
merged_df = left_merge(df, formatted_repolint_results)
merged_df = left_merge(merged_df, df_repo_1)
merged_df = left_merge(merged_df, df_repo_2)
merged_df = left_merge(merged_df, df_pr)
merged_df.head()
print_to_csv(merged_df, 'bcdevops', 'master.csv')
###Output
_____no_output_____ |
CSC14119 - Introduction to Data Science/Group Project 03 - Fake News Detection (Machine Learning Model)/Source/PJ3.ipynb | ###Markdown
Project 3 - Fake News Detection Task assignment |Full name|Student ID|Task|| :------ | :---: | :--------- ||Võ Thành Nam|19120301|Decision Tree model||Lương Ánh Nguyệt|19120315|Multinomial Naive Bayes model, website deployment||Phạm Lưu Mỹ Phúc|19120331|EDA, website design||Bùi Quang Bảo|19120454|Data preprocessing, MLP Classifier model| Deployed website https://share.streamlit.io/nnguyet/fake-news-detection/app.py GitHub project link https://github.com/nnguyet/Fake-News-Detection Libraries
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import re
import seaborn as sns
import warnings
from sklearn import set_config
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB
from wordcloud import WordCloud
set_config(display='diagram')
# pyvi: https://pypi.org/project/pyvi/0.0.7.5/ - Vietnamese tokenizing tool
# !pip install pyvi
from pyvi import ViTokenizer, ViPosTagger
# !pip install dill
import dill
# !pip install wordcloud
import wordcloud
###Output
_____no_output_____
###Markdown
Dữ liệuNguồn dữ liệu: VNFD Dataset - [vn_news_223_tdlfr.csv](https://github.com/thanhhocse96/vfnd-vietnamese-fake-news-datasets/blob/master/CSV/vn_news_223_tdlfr.csv)Mô tả dữ liệu: [Mô tả tập VNFD](https://github.com/thanhhocse96/vfnd-vietnamese-fake-news-datasets/tree/master/CSV)
###Code
df = pd.read_csv("data/vn_news_223_tdlfr.csv")
df = df.drop(columns=['domain'])
df
###Output
_____no_output_____
###Markdown
We will split the data into train and validation sets with an 80/20 ratio. From now on, every step related to preprocessing, feature extraction, and training the machine learning model is performed only on the train set. The validation set is reserved for re-checking the model.
###Code
X_df = df.drop("label", axis=1)
Y_sr = df["label"]
train_X_df, val_X_df, train_Y_sr, val_Y_sr = train_test_split(
X_df, Y_sr,
test_size = 0.2,
stratify = Y_sr,
# random_state = 0
)
###Output
_____no_output_____
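###Markdown
A quick sanity check (added here as an illustrative sketch) that the stratified split keeps the same label ratio in both sets:
###Code
# Compare label proportions in the train and validation sets
print(train_Y_sr.value_counts(normalize=True))
print(val_Y_sr.value_counts(normalize=True))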
###Markdown
Vietnamese text preprocessing Stopwords: https://github.com/stopwords/vietnamese-stopwords/blob/master/vietnamese-stopwords.txt Tokenizer: Pyvi - Vietnamese tokenizing tool - https://pypi.org/project/pyvi/0.0.7.5/ A text string is processed as follows:* Tokenize* Remove punctuations* Remove special chars* Remove links* Lowercase* Remove stopwords
###Code
with open("stopwords/vietnamese-stopwords.txt",encoding='utf-8') as file:
stopwords = file.readlines()
stopwords = [word.rstrip() for word in stopwords]
punctuations = '''!()-–=[]{}“”‘’;:'"|\,<>./?@#$%^&*_~'''
special_chars = ['\n', '\t']
regex = re.compile(
r'^(?:http|ftp)s?://' # http:// or https://
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # domain
r'localhost|' # localhost
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ip
r'(?::\d+)?' # port
r'(?:/?|[/?]\S+)$', re.IGNORECASE)
def tokenize(text):
tokenized_text = ViPosTagger.postagging(ViTokenizer.tokenize(text))
return tokenized_text[0]
def is_punctuation(token):
global punctuations
return True if token in punctuations else False
def is_special_chars(token):
global special_chars
return True if token in special_chars else False
def is_link(token):
return re.match(regex, token) is not None
def lowercase(token):
return token.lower()
def is_stopword(token):
global stopwords
return True if token in stopwords else False
# ===============================================================
# Process:
# Text -> Tokenize (pyvi) -> Remove punctuations -> Remove special chars
# -> Remove links -> Lowercase -> Remove stopwords -> Final Tokens
# ===============================================================
def vietnamese_text_preprocessing(text):
tokens = tokenize(text)
tokens = [token for token in tokens if not is_punctuation(token)]
tokens = [token for token in tokens if not is_special_chars(token)]
tokens = [token for token in tokens if not is_link(token)]
tokens = [lowercase(token) for token in tokens]
tokens = [token for token in tokens if not is_stopword(token)]
# return tokens
return tokens
###Output
_____no_output_____
###Markdown
Example usage:
###Code
# An excerpt from https://www.fit.hcmus.edu.vn/
demo_text = 'Trải qua hơn 25 năm hoạt động, Khoa Công nghệ Thông tin (CNTT) đã phát triển vững chắc và được Chính phủ bảo trợ để trở thành một trong những khoa CNTT đầu ngành trong hệ thống giáo dục đại học của Việt Nam.'
demo_text_to_tokens = vietnamese_text_preprocessing(demo_text)
print(demo_text_to_tokens)
###Output
['trải', '25', 'hoạt_động', 'khoa', 'công_nghệ', 'thông_tin', 'cntt', 'phát_triển', 'vững_chắc', 'chính_phủ', 'bảo_trợ', 'trở_thành', 'khoa', 'cntt', 'đầu', 'ngành', 'hệ_thống', 'giáo_dục', 'đại_học', 'việt_nam']
###Markdown
EDA
###Code
print("Shape: ",df.shape)
print("Columns: ", df.columns.tolist())
###Output
Shape: (223, 2)
Columns: ['text', 'label']
###Markdown
The data has 223 rows and 2 columns: `text` contains the Vietnamese record content and `label` contains the label (1: fake news, 0: real news) Check for missing data and wrong data types (if any)
###Code
df.dtypes
df.isna().sum()
###Output
_____no_output_____
###Markdown
There is no missing or wrongly typed data. Check whether the class distribution is imbalanced
###Code
df.label.value_counts()
warnings.simplefilter(action="ignore", category=FutureWarning)
sns.countplot(df.label)
###Output
_____no_output_____
###Markdown
Statistics of the text (average length of each record, etc.)
###Code
len_text = df['text'].apply(lambda x: len(x.split()))
len_text.describe()
###Output
_____no_output_____
###Markdown
The average length of each record is 564.713 words. The shortest record has 69 words and the longest has 2331 words. Visualize frequently occurring words in Fake news and Real news
###Code
fake_news_df = df[df['label'] == 1]
real_news_df = df[df['label'] == 0]
def visualize_frequency_word(df, title):
words_df = []
df['text'].apply(lambda x: words_df.extend(vietnamese_text_preprocessing(x)))
wordcloud = WordCloud(background_color="white").generate(' '.join(words_df))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.title(title)
plt.show()
visualize_frequency_word(fake_news_df, "Frequency words in Fake news")
visualize_frequency_word(real_news_df, "Frequency words in Real news")
###Output
_____no_output_____
###Markdown
Building the machine learning model: MLPClassifier Building the pipeline Pipeline: * **PreprocessAndFeaturesExtract**: Vietnamese text preprocessing and feature extraction * **Text preprocessing**: already implemented in the section above (the `vietnamese_text_preprocessing` function) * **Feature extraction**: text -> TF-IDF feature matrix (`sklearn.feature_extraction.text.TfidfVectorizer` - https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html)* **MLPClassifier neural network classifier** (`sklearn.neural_network.MLPClassifier` - https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html)
###Code
class PreprocessAndFeaturesExtract(BaseEstimator, TransformerMixin):
def __init__(self):
# print("> PreprocessAndFeaturesExtract > INIT")
self.trained_tokens = []
# ===============================================================
    # fit is called only once per training run
    # each fit builds a trained_tokens list containing the tokens/features learned during training
# ===============================================================
def fit(self, X_df, y=None):
# Return a list of preprocessed_texts
preprocessed_texts = []
for index, row in X_df.iterrows():
tokens = vietnamese_text_preprocessing(row['text'])
preprocessed_texts.append(' '.join(tokens))
preprocessed_texts = np.array(preprocessed_texts)
# preprocessed_texts -> features
tv = TfidfVectorizer(min_df = 0.0, max_df = 1.0, use_idf = True)
tv_matrix = tv.fit_transform(preprocessed_texts)
tv_matrix = tv_matrix.toarray()
# df_features = pd.DataFrame(np.round(tv_matrix, 2), columns = tv.get_feature_names_out())
df_features = pd.DataFrame(np.round(tv_matrix, 2), columns = tv.get_feature_names())
self.trained_tokens = df_features.columns.values
# print(f"> PreprocessAndFeaturesExtract > FIT > X_df: {X_df.shape} > df_features: {df_features.shape}")
# print(f"self.trained_tokens: {self.trained_tokens}")
return self
# ===============================================================
    # Quite similar to the fit method (in the preprocessing and feature-extraction steps), however:
    # fit: called exactly once per training run
    # transform: called many times, possibly on different X_df (e.g. for scoring or prediction),
    # based on the model previously trained by fit
# ===============================================================
    # fit creates a new self.trained_tokens; transform does not
# ===============================================================
    # transform is implemented to return only the features that HAVE been learned;
    # features that were not learned are skipped
# ===============================================================
def transform(self, X_df, y=None):
# Return a list of preprocessed_texts
preprocessed_texts = []
for index, row in X_df.iterrows():
tokens = vietnamese_text_preprocessing(row['text'])
preprocessed_texts.append(' '.join(tokens))
preprocessed_texts = np.array(preprocessed_texts)
# Features Extraction
# preprocessed_texts -> features
# TF-IDF Model
tv = TfidfVectorizer(min_df = 0.0, max_df = 1.0, use_idf = True)
tv_matrix = tv.fit_transform(preprocessed_texts)
tv_matrix = tv_matrix.toarray()
# vocab = tv.get_feature_names_out()
vocab = tv.get_feature_names()
temp_df_features = pd.DataFrame(np.round(tv_matrix, 2), columns=vocab)
n_rows = temp_df_features.shape[0]
df_features = pd.DataFrame()
for trained_token in self.trained_tokens:
if trained_token in vocab:
df_features[trained_token] = temp_df_features[trained_token]
else:
df_features[trained_token] = [0.000]*n_rows
# print(f"\n> PreprocessAndFeaturesExtract > TRANSFORM > X_df: {X_df.shape} > df_features: {df_features.shape}")
return df_features
# Predict function
# Input: A string
# Output: Label (0 - non fake news, 1 - fake news)
def mlp_predict(text):
mlp_pd = pd.DataFrame()
mlp_pd["text"] = [text]
pred_result = pipeline_mlp.predict(mlp_pd)[0]
return pred_result
# MLPClassifier
mlp_classifier = MLPClassifier(
hidden_layer_sizes=(50),
activation='relu',
solver='lbfgs',
random_state=0,
max_iter=10000
)
# Pipeline: PreprocessAndFeaturesExtract -> MLPClassifier
pipeline_mlp = Pipeline(
steps=[
("vnpreprocess", PreprocessAndFeaturesExtract()),
("mlpclassifier", mlp_classifier)
]
)
pipeline_mlp
###Output
_____no_output_____
###Markdown
TrainProceed to train the model on the training set.
###Code
pipeline_mlp.fit(train_X_df, train_Y_sr)
print("> Train completed")
###Output
> Train completed
###Markdown
ScoreCompute the model's accuracy, using the training set and the validation set in turn.
###Code
train_acc = pipeline_mlp.score(train_X_df, train_Y_sr)
print(f"Train Accuracy = {train_acc}")
val_acc = pipeline_mlp.score(val_X_df, val_Y_sr)
print(f"Validation Accuracy = {val_acc}")
###Output
Train Accuracy = 1.0
Validation Accuracy = 0.9111111111111111
###Markdown
Predict (Demo)Predict the label of a new text, where: * 0 - non fake news* 1 - fake newsHere we demo two passages taken at random from two articles on kenh14.vn and thanhnien.vn. The two articles were chosen completely at random, with no intention of suggesting that either site carries fake news. All outputs are only predictions.
###Code
# Trích 1 đoạn từ: https://kenh14.vn/star/fan-nhao-nhao-vi-tin-cuc-hot-cu-kim-soo-hyun-se-co-fan-meeting-tai-viet-nam-20140402111911750.chn
demo_text_1 = 'Vào đúng dịp Cá tháng tư (01/4), trang fanpage của Kim Soo Hyun Việt Nam bất ngờ đăng tải thông tin cho biết chàng "Cụ" 400 tuổi của Vì sao đưa anh tới có thể sẽ có mặt tại Việt Nam vào ngày 22/4 tới đây. Không chỉ vậy, theo thông tin được hé lộ, Kim Soo Hyun còn tổ chức cả chương trình fan meeting để gặp gỡ và giao lưu với fan Việt.'
# Trích 1 đoạn từ: https://thanhnien.vn/so-ca-covid-19-tu-vong-thap-nhat-trong-175-ngay-o-tp-hcm-post1419142.html
demo_text_2 = 'Cùng ngày, Sở Y tế TP.HCM cho biết đã ban hành cập nhật hướng dẫn gói thuốc chăm sóc sức khỏe cho F0 cách ly tại nhà (phiên bản 7.1). Trong đó có điểm mới về thuốc dành cho F0 cách ly tại nhà có bệnh mãn tính và quy định F0 cách ly tại nhà sau 10 ngày âm tính được dỡ bỏ cách ly. Sở Y tế khuyến cáo F0 đủ điều kiện cách ly tại nhà, nhưng trong nhà có người thuộc nhóm nguy cơ thì nên cách ly nơi khác để giảm nguy cơ lây lan cho các thành viên trong nhà.'
print(f"Text Input: {demo_text_1}\nPredict Label: {mlp_predict(demo_text_1)}")
print(f"Text Input: {demo_text_2}\nPredict Label: {mlp_predict(demo_text_2)}")
###Output
Text Input: Vào đúng dịp Cá tháng tư (01/4), trang fanpage của Kim Soo Hyun Việt Nam bất ngờ đăng tải thông tin cho biết chàng "Cụ" 400 tuổi của Vì sao đưa anh tới có thể sẽ có mặt tại Việt Nam vào ngày 22/4 tới đây. Không chỉ vậy, theo thông tin được hé lộ, Kim Soo Hyun còn tổ chức cả chương trình fan meeting để gặp gỡ và giao lưu với fan Việt.
Predict Label: 1
Text Input: Cùng ngày, Sở Y tế TP.HCM cho biết đã ban hành cập nhật hướng dẫn gói thuốc chăm sóc sức khỏe cho F0 cách ly tại nhà (phiên bản 7.1). Trong đó có điểm mới về thuốc dành cho F0 cách ly tại nhà có bệnh mãn tính và quy định F0 cách ly tại nhà sau 10 ngày âm tính được dỡ bỏ cách ly. Sở Y tế khuyến cáo F0 đủ điều kiện cách ly tại nhà, nhưng trong nhà có người thuộc nhóm nguy cơ thì nên cách ly nơi khác để giảm nguy cơ lây lan cho các thành viên trong nhà.
Predict Label: 0
###Markdown
Building a machine learning model: Decision Tree Build the decision tree model in 2 steps:- Preprocessing (using the PreprocessAndFeaturesExtract transformer)- Build the decision tree using DecisionTreeClassifier from the sklearn library.
###Code
dtc = DecisionTreeClassifier(criterion='entropy')
dtcPipeline = Pipeline(steps = [('extractfeatures',PreprocessAndFeaturesExtract()),('DecisionTree',dtc)])
dtcPipeline
###Output
_____no_output_____
###Markdown
Train the model
###Code
%%time
dtcPipeline.fit(train_X_df, train_Y_sr)
dtcPipeline.score(train_X_df, train_Y_sr)
dtcPipeline.score(val_X_df, val_Y_sr)
###Output
_____no_output_____
###Markdown
Changing hyperparameters to improve the modelWe can improve the model by changing its hyperparameters. Here we consider the `ccp_alpha` hyperparameter of `DecisionTreeClassifier`. Increasing `ccp_alpha` increases the number of nodes removed when the tree is pruned, which helps keep the model from overfitting. [Reference](https://scikit-learn.org/stable/auto_examples/tree/plot_cost_complexity_pruning.html#sphx-glr-auto-examples-tree-plot-cost-complexity-pruning-py)
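As background, cost-complexity pruning scores each candidate subtree $T$ as $$R_\alpha(T)=R(T)+\alpha\,|\widetilde{T}|,$$ where $R(T)$ is the total impurity of the leaves and $|\widetilde{T}|$ is the number of leaves, so larger values of `ccp_alpha` favour smaller, more heavily pruned trees. The code below computes the effective $\alpha$ values for this dataset.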
###Code
path = dtcPipeline['DecisionTree'].cost_complexity_pruning_path(dtcPipeline[:-1].transform(train_X_df), train_Y_sr)
ccp_alphas, impurities = path.ccp_alphas, path.impurities
###Output
_____no_output_____
###Markdown
Examine how the **impurity** of the nodes changes as the `ccp_alpha` hyperparameter changes.
###Code
fig, ax = plt.subplots()
ax.plot(ccp_alphas[:-1], impurities[:-1], marker="o", drawstyle="steps-post")
ax.set_xlabel("Effective ccp_alpha")
ax.set_ylabel("Total impurity of leaves")
ax.set_title("Total Impurity vs effective alpha for training set");
###Output
_____no_output_____
###Markdown
Vary the `ccp_alpha` hyperparameter to find the best model.
###Code
%%time
train_accs = []
test_accs = []
best_test_acc = 0
best_ccp_alpha = None
for ca in ccp_alphas:
dtcPipeline.set_params(DecisionTree__ccp_alpha = ca)
dtcPipeline.fit(train_X_df, train_Y_sr)
train_accs.append(dtcPipeline.score(train_X_df, train_Y_sr))
test_accs.append(dtcPipeline.score(val_X_df, val_Y_sr))
if test_accs[-1] > best_test_acc:
best_test_acc = test_accs[-1]
best_ccp_alpha = ca
###Output
Wall time: 13min 51s
###Markdown
Visualize the **accuracy** of the models on the **train** and **test** sets as the `ccp_alpha` hyperparameter changes.
###Code
fig, ax = plt.subplots()
ax.set_xlabel("ccp_alpha")
ax.set_ylabel("Accuracy")
ax.set_title("Test accuracy")
ax.plot(ccp_alphas, train_accs, marker="o", label="train", drawstyle="steps-post")
ax.plot(ccp_alphas, test_accs, marker="o", label="test", drawstyle="steps-post")
ax.legend()
plt.show()
print(f'Best test accuracy = {best_test_acc} with ccp_alpha = {best_ccp_alpha} ')
###Output
Best test accuracy = 0.8222222222222222 with ccp_alpha = 0.0
###Markdown
Reassign `ccp_alpha` to the best value and `fit` the model on the full dataset. With that we have our final model.
###Code
dtcPipeline.set_params(DecisionTree__ccp_alpha = best_ccp_alpha)
dtcPipeline.fit(X_df,Y_sr)
###Output
_____no_output_____
###Markdown
Building a machine learning model: Multinomial Naive Bayes Build a pipeline with 2 steps:* Text preprocessing and feature extraction (using the **PreprocessAndFeaturesExtract** class built above)* A Multinomial Naive Bayes model (using **MultinomialNB** from **sklearn.naive_bayes**)
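As background, Multinomial Naive Bayes scores a class $c$ for a document with feature values $f_t$ as $$P(c\mid d)\propto P(c)\prod_t P(t\mid c)^{f_t},$$ where the class-conditional term probabilities $P(t\mid c)$ are estimated from the training data with additive (Laplace) smoothing; `MultinomialNB` also accepts the fractional TF-IDF weights produced by the preprocessing step as the $f_t$.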
###Code
mnb_pipeline = Pipeline(steps = [('preprocessextract',PreprocessAndFeaturesExtract()),('mnb',MultinomialNB())])
###Output
_____no_output_____
###Markdown
Train the model
###Code
mnb_pipeline.fit(train_X_df, train_Y_sr)
###Output
_____no_output_____
###Markdown
Check the model's accuracy * Accuracy score on the training set
###Code
mnb_pipeline.score(train_X_df, train_Y_sr)
###Output
_____no_output_____
###Markdown
* Accuracy score on the validation set
###Code
mnb_pipeline.score(val_X_df, val_Y_sr)
###Output
_____no_output_____
###Markdown
* The model therefore achieves fairly good evaluation results. Saving the modelsThe machine learning models are saved so that they can be used to deploy the model to the web. MLPClassifier model
###Code
out = open("models/mlpclassifier.pkl",mode = "wb")
dill.dump(pipeline_mlp,out)
out.close()
###Output
_____no_output_____
###Markdown
Decision Tree model
###Code
out = open("models/decisiontree.pkl",mode = "wb")
dill.dump(dtcPipeline,out)
out.close()
###Output
_____no_output_____
###Markdown
Multinomial Naive Bayes model
###Code
out = open("models/multinomialnb.pkl",mode = "wb")
dill.dump(mnb_pipeline,out)
out.close()
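# For later reuse (for example in the web deployment mentioned above), a saved
# pipeline can be restored with dill. Illustrative sketch, assuming the same path:
# with open("models/multinomialnb.pkl", mode="rb") as f:
#     loaded_mnb_pipeline = dill.load(f)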
###Output
_____no_output_____ |
predictionmodel/churn_prediction.ipynb | ###Markdown
Sequential modelA Sequential model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor.
###Code
from keras import Sequential
from keras.layers import Dense
classifier = Sequential()
#First Hidden Layer
classifier.add(Dense(64, activation='relu', kernel_initializer='random_normal', input_dim=18))
#Second Hidden Layer
classifier.add(Dense(32, activation='relu', kernel_initializer='random_normal'))
#Output Layer
classifier.add(Dense(1, activation='sigmoid', kernel_initializer='random_normal'))
#Compiling the neural network
classifier.compile(optimizer ='adam',loss='binary_crossentropy', metrics =['accuracy'])
#Fitting the data to the training dataset
classifier.fit(X_train,y_train, batch_size=64, epochs=33,validation_split=0.1)
eval_model=classifier.evaluate(X_test, y_test)
eval_model
y_pred=classifier.predict(X_test)
y_pred =(y_pred>0.5)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, classifier.predict(X_test))
###Output
_____no_output_____
###Markdown
Accuracy of the Sequential model is 0.8285887393281157 Sklearn Logistic regression
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=42)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
model.score(X_test,y_test)
y_le_p = model.predict(X_test)
model.score(X_test,y_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_le_p))
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, y_le_p))
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, y_le_p)
###Output
_____no_output_____
###Markdown
Sklearn Logistic regression Accuracy 0.7129054 Sklearn decision tree
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=42)
from sklearn import tree
model2 = tree.DecisionTreeClassifier()
model2.fit(X_train, y_train)
y_dt_p = model2.predict(X_test)
model2.score(X_test,y_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_dt_p))
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, y_dt_p))
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, y_dt_p)
###Output
_____no_output_____
###Markdown
Sklearn decision tree Accuracy 0.6388861078793864 Support Vector Machines (SVM)
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=42)
from sklearn import svm
clf = svm.SVC(kernel='linear', C = 1.0)
clf.fit(X_train,y_train)
clf.score(X_test,y_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, clf.predict(X_test)))
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, clf.predict(X_test)))
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, clf.predict(X_test))
###Output
_____no_output_____
###Markdown
Support Vector Machines (SVM) accuracy 0.7009334985828358 Sklearn naive_bayes
###Code
X= dfle.drop(['Churn'],axis=1)
y=dfle['Churn']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=42)
from sklearn.naive_bayes import GaussianNB
clfmodel =GaussianNB()
clfmodel.fit(X_train, y_train)
clfmodel.score(X_test,y_test)
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, clfmodel.predict(X_test))
from sklearn.metrics import classification_report
print(classification_report(y_test, clfmodel.predict(X_test)))
from joblib import dump, load
dump(clfmodel, 'filename.joblib')
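# Illustrative sketch: the dumped estimator can later be restored with joblib's load, e.g.
# restored_clf = load('filename.joblib')
# restored_clf.score(X_test, y_test)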
###Output
_____no_output_____
###Markdown
Sklearn naive_bayes accuracy 0.7401480340947929 1D CNN
###Code
import keras
import numpy as np
print(X_train.shape)
print(y_train.shape)
num_class = 1
timesteps = 18
def createClassifier():
sequence = keras.layers.Input(shape=(timesteps, 1), name='Sequence')
conv = keras.Sequential()
conv.add(keras.layers.Conv1D(10, 5, activation='relu', input_shape=(timesteps, 1)))
conv.add(keras.layers.Conv1D(10, 5, activation='relu'))
conv.add(keras.layers.MaxPool1D(2))
conv.add(keras.layers.Dropout(0.2))
conv.add(keras.layers.Conv1D(5, 2, activation='relu'))
#conv.add(keras.layers.Conv1D(5, 2, activation='relu'))
conv.add(keras.layers.GlobalAveragePooling1D())
#conv.add(keras.layers.Dropout(0.2))
#conv.add(keras.layers.Flatten())
part1 = conv(sequence)
final = keras.layers.Dense(8, activation='relu')(part1)
final = keras.layers.Dropout(0.5)(final)
final = keras.layers.Dense(num_class, activation='sigmoid')(final)
model = keras.Model(inputs=[sequence], outputs=[final])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
model = createClassifier()
print(model.summary())
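# Reshape the features to (samples, timesteps, channels) as expected by Conv1D.
# The row counts below (4922 train / 2110 test) are hard-coded to this particular split;
# reshape((-1, timesteps, 1)) would be the split-size-independent alternative.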
X1_Train = X_train.values.reshape((4922,18,1))
model.fit([X1_Train], y_train, epochs =200,validation_split=0.1)
model.evaluate(X_test.values.reshape((2110,18,1)), y_test)
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, model.predict(X_test.values.reshape((2110,18,1))))
y_pred=model.predict(X_test.values.reshape((2110,18,1)))
y_pred =(y_pred>0.5)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.84 0.89 0.86 1549
1 0.64 0.54 0.59 561
accuracy 0.80 2110
macro avg 0.74 0.71 0.72 2110
weighted avg 0.79 0.80 0.79 2110
|
Lab09/lab9_lmbaeza_taylor_series_in_python.ipynb | ###Markdown
Taylor Series in Python LAB 9 Taylor Series**Universidad Nacional de Colombia - Sede Bogotá****Numerical Methods****Instructor:** _German Hernandez_**Student:** * Luis Miguel Báez Aponte - [email protected]  In this post, we will review how to create a Taylor series with Python and for loops. Then, we will refactor the Taylor series into functions and compare the output of our Taylor series functions with functions from the Python standard library.A Taylor series is an infinite series of mathematical terms that, when summed, approximate a mathematical function. A Taylor series can be used to approximate $e^{x}$ and `cosine`.An example of a Taylor series that approximates $e^{x}$ is below.$$e^x\approx \sum _{n=0}^{\infty }\left(\frac{x^n}{n!}\right)\approx 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdot \cdot \cdot \cdot $$ We can see that each term in the Taylor series expansion depends on that term's place in the series. The table below shows each term of the Taylor series in a row. The columns of the table represent the term index, the mathematical term, and how to code that term in Python. Note that the `factorial()` function is part of the `math` module in the Python standard library.$\begin{array}{ccc} Term\:index&Mathematical\:term&Term\:in\:Python \\0 & x^0/0! & x**0/math.factorial(0) \\1 & x^1/1! & x**1/math.factorial(1) \\2 & x^2/2! & x**2/math.factorial(2) \\3 & x^3/3! & x**3/math.factorial(3) \\4 & x^4/4! & x**4/math.factorial(4)\end{array}$ Code the Taylor series by writing each term individually We can combine these terms in one line of Python code to estimate $e^2$. The following code computes the sum of the first five terms of the Taylor series expansion of $e^x$, where $x=2$. Note that the `math` module must be imported before `math.factorial()` can be used.
###Code
import math
x = 2
e_to_2 = x**0/math.factorial(0) + x**1/math.factorial(1) + x**2/math.factorial(2) + x**3/math.factorial(3) + x**4/math.factorial(4)
print(e_to_2)
###Output
7.0
###Markdown
Our Taylor series approximation of $e^2$ was calculated as `7`. Let's compare our Taylor series approximation with Python's `math.exp()` function.
###Code
print(math.exp(2))
###Output
7.38905609893065
###Markdown
Our Taylor series approximation of `7.0` is not that far off from the value of `7.389056...` calculated with Python's exp() function. Use a for loop to calculate a Taylor seriesIf we want to get closer to the value of $e^x$, we need to add more terms to our Taylor series. The problem is that coding each individual term is time-consuming and repetitive. Instead of coding each term individually, we can use a for loop. A for loop is a repetition structure in Python that runs a section of code a specified number of times. The syntax for coding a for loop in Python using the `range()` function is below:```txtfor <var> in range(<num>): <code>```Where `<var>` is any valid Python variable name and `<num>` is an integer that determines how many times the `<code>` in the loop runs.We can recreate our approximation of $e^x$ with 5 terms using a for loop. Note that the variable `e_to_2` must be set to 0 before the loop begins. The mathematical operator `+=` in the line `e_to_2 += x**i/math.factorial(i)` is equivalent to `e_to_2 = e_to_2 + x**i/math.factorial(i)`.
###Code
import math
x = 2
e_to_2 = 0
for i in range(5):
e_to_2 += x**i/math.factorial(i)
print(e_to_2)
###Output
7.0
###Markdown
The result `7.0` is the same as what we calculated when we wrote out each term of the Taylor series individually.An advantage of using a for loop is that we can easily increase the number of terms. If we increase the number of times the `for` loop runs, we increase the number of terms in the Taylor series expansion. Let's try `10` terms. Note how the line `for i in range(10):` now has `10` passed to the `range()` function.
###Code
import math
x = 2
e_to_2 = 0
for i in range(10):
e_to_2 += x**i/math.factorial(i)
print(e_to_2)
###Output
7.3887125220458545
###Markdown
The result is `7.38871....` Let's see how close that is to $e^2$ calculated with Python's `exp()` function.
###Code
print(math.exp(2))
###Output
7.38905609893065
###Markdown
The result is `7.38905....` We get closer to the value of $e^2$ when `10` terms are used in the Taylor series compared to when 5 terms are used. Refactor the for loop into a functionNext, let's refactor the code above that contained a for loop (which calculated $e^2$) into a function that can calculate $e$ raised to any power, estimated with any number of terms in the Taylor series. The general syntax for defining a function in Python is below.```pythondef <function name>(<argument 1>, <argument 2>, ...): <code> return <output>``` Where `def` is the Python keyword that defines a function, `<function name>` is a valid Python variable name, and `<argument 1>`, `<argument 2>` are the input arguments passed to the function. The `<code>` that runs when the function is called must be indented (the standard indentation is 4 spaces). The `return` keyword denotes the `<output>` of the function.Let's code our for loop that approximates $e^2$ into a function.
###Code
import math
def func_e_to_2(n):
x = 2
e_to_2 = 0
for i in range(n):
e_to_2 += x**i/math.factorial(i)
return e_to_2
###Output
_____no_output_____
###Markdown
If we call our `func_e_to_2()` function with the input argument `10`, the result is the same as when we ran the for loop `10` times.
###Code
out = func_e_to_2(10)
print(out)
###Output
7.3887125220458545
###Markdown
The output of our `func_e_to_2()` function is `7.38871....`We can make our function more general by setting $x$ (the number that $e$ is raised to) as an input argument. Notice how there are now two input arguments in the function definition `(x, n)`. $x$ is the number `e` is raised to, and n is the number of terms in the Taylor series (which is the number of times the for loop runs inside the function definition).
###Code
import math
def func_e(x, n):
e_approx = 0
for i in range(n):
e_approx += x**i/math.factorial(i)
return e_approx
###Output
_____no_output_____
###Markdown
Let's calculate $e^2$ using `10` terms with our new `func_e()` function.
###Code
out = func_e(2,10)
print(out)
###Output
7.3887125220458545
###Markdown
The result is `7.38871...`, the same result as before.An advantage of writing our Taylor series expansion as a function is that the Taylor series approximation calculation is now reusable and can be called in one line of code. For example, we can estimate the value of $e^5$ with `10` terms by calling our `func_e()` function with the input arguments `(5,10)`.
###Code
out = func_e(5,10)
print(out)
###Output
143.68945656966488
###Markdown
The result is `143.68945....` Let's see how close this value is to Python's `exp()` function when we do the same calculation, $e^5$.
###Code
out = math.exp(5)
print(out)
###Output
148.4131591025766
###Markdown
The result is `148.41315....` The Taylor series approximation calculated by our `func_e()` function is pretty close to the value calculated by Python's `exp()` function. Use a for loop to calculate the difference between the Taylor series expansion and Python's exp() functionNow let's use a for loop to calculate the difference between the Taylor series expansion calculated by our `func_e()` function and Python's `exp()` function. We will calculate the difference between the two functions when we use between `1` and `10` terms in the Taylor series expansion.The code below uses f-strings, which is a Python syntax for inserting the value of a variable into a string. The general syntax of an f-string in Python is shown below.```pythonf'string statement {<var>}'```Where f denotes the start of an f-string, the f-string is surrounded by quotes `' '`, and the variable `<var>` is enclosed in curly braces `{ }`. The value of `<var>` will be printed without the curly braces.
###Code
import math
x = 5
for i in range(1,11):
e_approx = func_e(x,i)
e_exp = math.exp(x)
e_error = abs(e_approx - e_exp)
print(f'{i} terms: Taylor Series approx= {e_approx}, exp calc= {e_exp}, error = {e_error}')
###Output
1 terms: Taylor Series approx= 1.0, exp calc= 148.4131591025766, error = 147.4131591025766
2 terms: Taylor Series approx= 6.0, exp calc= 148.4131591025766, error = 142.4131591025766
3 terms: Taylor Series approx= 18.5, exp calc= 148.4131591025766, error = 129.9131591025766
4 terms: Taylor Series approx= 39.33333333333333, exp calc= 148.4131591025766, error = 109.07982576924327
5 terms: Taylor Series approx= 65.375, exp calc= 148.4131591025766, error = 83.0381591025766
6 terms: Taylor Series approx= 91.41666666666667, exp calc= 148.4131591025766, error = 56.99649243590993
7 terms: Taylor Series approx= 113.11805555555556, exp calc= 148.4131591025766, error = 35.29510354702104
8 terms: Taylor Series approx= 128.61904761904762, exp calc= 148.4131591025766, error = 19.79411148352898
9 terms: Taylor Series approx= 138.30716765873015, exp calc= 148.4131591025766, error = 10.105991443846449
10 terms: Taylor Series approx= 143.68945656966488, exp calc= 148.4131591025766, error = 4.723702532911716
###Markdown
Notice how the error decreases as we add terms to the Taylor series. When the Taylor series only has `1` term, the error is `147.41....` When `10` terms are used in the Taylor series, the error drops to `4.7237....`. Use a break statement to exit a for loop early. How many terms would be needed to produce an error of less than 1? We can use a break statement to exit the for loop when the error is less than `1`. The code below calculates how many terms are needed in the Taylor series, when $e^5$ is calculated, to keep the error below `1`.
###Code
import math
x = 5
for i in range(1,20):
e_approx = func_e(x,i)
e_exp = math.exp(x)
e_error = abs(e_approx - e_exp)
if e_error < 1:
break
print(f'{i} terms: Taylor Series approx= {e_approx}, exp calc= {e_exp}, error = {e_error}')
###Output
12 terms: Taylor Series approx= 147.60384850489015, exp calc= 148.4131591025766, error = 0.8093105976864479
###Markdown
The result shows that `12` terms are needed in the Taylor series to bring the error below `1`. Create a function to estimate the value of $cos(x)$ using a Taylor series Next, let's calculate the value of the cosine function using a Taylor series. The Taylor series expansion for $cos(x)$ is below.$$cos\left(x\right)\approx \:\sum \:_{n=0}^{\infty \:}\left(\left(-1\right)^n\cdot \frac{x^{2\cdot n}}{\left(2\cdot n\right)!}\right)\approx 1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}\cdot \:\cdot \:\cdot \:\cdot \:$$We can code this formula into a function that contains a `for` loop. Note that the variable `x` is the value we are trying to find the cosine of, the variable n is the number of terms in the Taylor series, and the variable i is the loop index, which is also the term number of the Taylor series. We are using a separate variable for the coefficient coef, which is equal to $(-1)^i$, the numerator num, which is equal to $x^{2*i}$, and the denominator denom, which is equal to $(2*i)!$. Splitting the Taylor series formula into three parts can reduce coding errors.
###Code
import math
def func_cos(x, n):
cos_approx = 0
for i in range(n):
coef = (-1)**i
num = x**(2*i)
denom = math.factorial(2*i)
cos_approx += ( coef ) * ( (num)/(denom) )
return cos_approx
###Output
_____no_output_____
###Markdown
Let's use our `func_cos()` function to estimate the cosine of `45` degrees. Note that the `func_cos()` function calculates the cosine of an angle in radians. If we want to calculate the cosine of `45` degrees with our function, we first have to convert `45` degrees to radians. Luckily, Python's math module has a function called `radians()` that does the angle conversion for us.
###Code
angle_rad = (math.radians(45))
out = func_cos(angle_rad,5)
print(out)
###Output
0.7071068056832942
###Markdown
Using our `func_cos()` function and 5 terms in the Taylor series approximation, we estimate the cosine of 45 degrees to be `0.707106805....` Let's check our `func_cos()` function against Python's `cos()` function from the math module.
###Code
out = math.cos(angle_rad)
print(out)
###Output
0.7071067811865476
###Markdown
Using Python's `cos()` function, the cosine of 45 degrees returns 0.707106781... This value is very close to the approximation calculated with our `func_cos()` function. Build a plot to compare the Taylor series approximation with Python's cos() functionIn the last part of this post, we are going to build a plot that shows how the Taylor series approximation calculated by our `func_cos()` function compares with Python's `cos()` function.The idea is to make a plot that has one line for Python's `cos()` function and lines for the Taylor series approximation based on different numbers of terms.For example, if we use 3 terms in the Taylor series approximation, our plot has two lines. One line for Python's `cos()` function and one line for our `func_cos()` function with three terms in the Taylor series approximation. We will calculate the cosine with both functions for angles between -2π radians and 2π radians.
###Code
import math
import numpy as np
import matplotlib.pyplot as plt
# if using a Jupyter notebook, include:
%matplotlib inline
angles = np.arange(-2*np.pi,2*np.pi,0.1)
p_cos = np.cos(angles)
t_cos = [func_cos(angle,3) for angle in angles]
fig, ax = plt.subplots()
ax.plot(angles,p_cos)
ax.plot(angles,t_cos)
ax.set_ylim([-5,5])
ax.legend(['cos() function','Taylor Series - 3 terms'])
plt.show()
###Output
_____no_output_____
###Markdown
We can use a for loop to see how much better adding extra terms to our Taylor series approximation compares with Python's `cos()` function.
###Code
import math
import numpy as np
import matplotlib.pyplot as plt
# if using a Jupyter notebook, include:
%matplotlib inline
angles = np.arange(-2*np.pi,2*np.pi,0.1)
p_cos = np.cos(angles)
fig, ax = plt.subplots()
ax.plot(angles,p_cos)
# add lines for between 1 and 6 terms in the Taylor Series
for i in range(1,6):
t_cos = [func_cos(angle,i) for angle in angles]
ax.plot(angles,t_cos)
ax.set_ylim([-7,4])
# set up legend
legend_lst = ['cos() function']
for i in range(1,6):
legend_lst.append(f'Taylor Series - {i} terms')
ax.legend(legend_lst, loc=3)
plt.show()
###Output
_____no_output_____ |
examples/notebooks/sparse_inv_cov_est.ipynb | ###Markdown
Sparse Inverse Covariance Estimation**References:**1. S. Boyd and L. Vandenberghe. Chapter 7.1.1 in [*Convex Optimization.*](https://web.stanford.edu/~boyd/cvxbook/) Cambridge University Press, 2004.2. O. Bannerjee, L. E. Ghaoui, and A. d'Aspremont. [*Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data.*](http://www.jmlr.org/papers/volume9/banerjee08a/banerjee08a.pdf) Journal of Machine Learning Research, 9(1):485-516, 2008.3. J. Friedman, T. Hastie, and R. Tibshirani. [*Sparse Inverse Covariance Estimation with the Graphical Lasso.*](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3019769/) Biostatistics, 9(3):432-441, 2008. IntroductionSuppose $z \in \mathbf{R}^q$ is a Gaussian random variable with mean zero and covariance matrix $\Sigma$, where $\Sigma^{-1}$ is known to be sparse. (This implies that many pairs of elements in $z$ are conditionally independent). We want to estimate the covariance matrix based on samples $z_1,\ldots,z_p$ drawn independently from $N(0,\Sigma)$.A good heuristic for estimating $\Sigma$ is to solve the problem$$\text{minimize}~ -\log\det(S) + \text{tr}(SQ) + \alpha\|S\|_1$$with respect to $S \in \mathbf{S}^q$ (the set of symmetric matrices), where $Q = \frac{1}{p}\sum_{l=1}^p z_lz_l^T$ is the sample covariance and $\alpha > 0$ is a sparsity parameter. Here $\log\det$ is understood to be an extended real-valued function, so that $\log\det(S) = -\infty$ whenever $S$ is not positive definite.If $S^*$ is the solution to this problem, we take our estimate of the covariance matrix to be $\hat \Sigma = (S^*)^{-1}$. Reformulate ProblemLet $x_i \in \mathbf{R}^{q(q+1)/2}$ be a vectorization of $S_i \in \mathbf{S}^q$ for $i = 1,2$. For instance, $x_i$ could be the lower triangular elements of $S_i$ taken in column order. The sparse inverse covariance estimation problem can be written in standard form by setting$$f_1(x_1) = -\log\det(S_1) + \text{tr}(S_1Q), \quad f_2(x_2) = \alpha\|S_2\|_1,$$where it is implicit that $x_i$ is reshaped into $S_i$. Notice that we have grouped the $\log\det$ term with the matrix trace term. This is because $\text{tr}(S_1Q)$ is an affine function, so we can apply the affine addition rule to evaluate $\mathbf{prox}_{tf_1}$ using $\mathbf{prox}_{t\log\det(\cdot)}$. See Sections 2.2 and 6.7.5 of [N. Parikh and S. Boyd (2013)](https://web.stanford.edu/~boyd/papers/prox_algs.html). Generate DataWe generate $S$ randomly from the set of symmetric positive definite matrices with $q = 20$ and about 10% nonzero entries. Then, we compute $Q$ using $p = 1000$ IID samples drawn from $N(0,S^{-1})$.
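One more note on the reformulation above: adding an affine term only shifts the argument of a proximal operator. If $g(S) = f(S) + \text{tr}(SQ)$, then $$\mathbf{prox}_{tg}(V) = \mathbf{prox}_{tf}(V - tQ),$$ which is why the trace term can be handed to the $\log\det$ proximal operator as a linear term in the code below.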
###Code
import numpy as np
import scipy as sp
from scipy import sparse
from sklearn.datasets import make_sparse_spd_matrix
from a2dr import a2dr
from a2dr.proximal import *
np.random.seed(1)
# Problem data.
q = 20
p = 1000
nnz_ratio = 0.1 # Fraction of nonzeros in S.
# Create sparse symmetric PSD matrix S.
S_true = sparse.csc_matrix(make_sparse_spd_matrix(q,1-nnz_ratio))
# Create covariance matrix associated with S.
Sigma = sparse.linalg.inv(S_true).todense()
# Draw samples from the Gaussian distribution with covariance Sigma.
z_sample = sp.linalg.sqrtm(Sigma).dot(np.random.randn(q,p))
Q = np.cov(z_sample)
###Output
_____no_output_____
###Markdown
Solve Problem for Several $\alpha$ Values
###Code
# Calculate smallest alpha for which solution is trivially
# the diagonal matrix (diag(Q) + alpha*I)^{-1}.
# Reference: O. Bannerjee, L. E. Ghaoui, and A. d'Aspremont (2008).
mask = np.ones(Q.shape, dtype=bool)
np.fill_diagonal(mask, 0)
alpha_max = np.max(np.abs(Q)[mask])
# The alpha values for each attempt at generating S.
alpha_ratios = np.array([1, 0.1, 0.01])
alphas = alpha_ratios*alpha_max
# Empty list of result matrices S.
Ss = []
# Solve for the problem for each value of alpha.
for alpha in alphas:
# Convert problem to standard form.
prox_list = [lambda v, t: prox_neg_log_det(v.reshape((q,q), order='C'), t, lin_term=t*Q).ravel(order='C'),
lambda v, t: prox_norm1(v, t*alpha)]
A_list = [sparse.eye(q*q), -sparse.eye(q*q)]
b = np.zeros(q*q)
# Solve with A2DR.
a2dr_result = a2dr(prox_list, A_list, b)
a2dr_S = a2dr_result["x_vals"][-1].reshape((q,q), order='C')
# Threshold S element values to enforce exact zeroes.
S_thres = a2dr_S
S_thres[np.abs(S_thres) <= 1e-4] = 0
# Store thresholded S for later visualization.
Ss += [S_thres]
print("Solved optimization problem with alpha =", alpha)
###Output
----------------------------------------------------------------------
a2dr v0.2.3.post3 - Prox-Affine Distributed Convex Optimization Solver
(c) Anqi Fu, Junzi Zhang
Stanford University 2019
----------------------------------------------------------------------
### Preconditioning starts ... ###
### Preconditioning finished. ###
max_iter = 1000, t_init (after preconditioning) = 2.00
eps_abs = 1.00e-06, eps_rel = 1.00e-08, precond = True
ada_reg = True, anderson = True, m_accel = 10
lam_accel = 1.00e-08, aa_method = lstsq, D_safe = 1.00e+06
eps_safe = 1.00e-06, M_safe = 10
variables n = 800, constraints m = 400
nnz(A) = 800
Setup time: 1.80e-02
----------------------------------------------------
iter | total res | primal res | dual res | time (s)
----------------------------------------------------
0| 1.90e+00 2.66e-01 1.88e+00 5.47e-02
27| 7.64e-07 4.89e-08 7.63e-07 1.59e-01
----------------------------------------------------
Status: Solved
Solve time: 1.59e-01
Total number of iterations: 28
Best total residual: 7.64e-07; reached at iteration 27
======================================================================
Solved optimization problem with alpha = 1.1847682358887426
----------------------------------------------------------------------
a2dr v0.2.3.post3 - Prox-Affine Distributed Convex Optimization Solver
(c) Anqi Fu, Junzi Zhang
Stanford University 2019
----------------------------------------------------------------------
### Preconditioning starts ... ###
### Preconditioning finished. ###
max_iter = 1000, t_init (after preconditioning) = 2.00
eps_abs = 1.00e-06, eps_rel = 1.00e-08, precond = True
ada_reg = True, anderson = True, m_accel = 10
lam_accel = 1.00e-08, aa_method = lstsq, D_safe = 1.00e+06
eps_safe = 1.00e-06, M_safe = 10
variables n = 800, constraints m = 400
nnz(A) = 800
Setup time: 2.48e-02
----------------------------------------------------
iter | total res | primal res | dual res | time (s)
----------------------------------------------------
0| 1.90e+00 2.66e-01 1.88e+00 8.45e-02
84| 9.81e-07 2.45e-08 9.81e-07 4.57e-01
----------------------------------------------------
Status: Solved
Solve time: 4.57e-01
Total number of iterations: 85
Best total residual: 9.81e-07; reached at iteration 84
======================================================================
Solved optimization problem with alpha = 0.11847682358887426
----------------------------------------------------------------------
a2dr v0.2.3.post3 - Prox-Affine Distributed Convex Optimization Solver
(c) Anqi Fu, Junzi Zhang
Stanford University 2019
----------------------------------------------------------------------
### Preconditioning starts ... ###
### Preconditioning finished. ###
max_iter = 1000, t_init (after preconditioning) = 2.00
eps_abs = 1.00e-06, eps_rel = 1.00e-08, precond = True
ada_reg = True, anderson = True, m_accel = 10
lam_accel = 1.00e-08, aa_method = lstsq, D_safe = 1.00e+06
eps_safe = 1.00e-06, M_safe = 10
variables n = 800, constraints m = 400
nnz(A) = 800
Setup time: 1.45e-02
----------------------------------------------------
iter | total res | primal res | dual res | time (s)
----------------------------------------------------
0| 1.90e+00 2.66e-01 1.88e+00 5.05e-02
100| 1.23e-05 3.07e-07 1.23e-05 3.56e-01
138| 9.90e-07 1.89e-08 9.90e-07 4.73e-01
----------------------------------------------------
Status: Solved
Solve time: 4.73e-01
Total number of iterations: 139
Best total residual: 9.90e-07; reached at iteration 138
======================================================================
Solved optimization problem with alpha = 0.011847682358887427
###Markdown
Plot Resulting Sparsity Patterns
###Code
import matplotlib.pyplot as plt
# Show plot inline in ipython.
%matplotlib inline
# Plot properties.
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
# Create figure.
plt.figure()
plt.figure(figsize=(12, 12))
# Plot sparsity pattern for the true covariance matrix.
plt.subplot(2, 2, 1)
plt.spy(S_true)
plt.title('Inverse of true covariance matrix', fontsize=16)
# Plot sparsity pattern for each result, corresponding to a specific alpha.
for i in range(len(alphas)):
plt.subplot(2, 2, 2+i)
plt.spy(Ss[i])
plt.title('Estimated inv. cov. matrix, $\\alpha$={0:.8f}'.format(alphas[i]), fontsize=16)
###Output
_____no_output_____ |
courses/udacity_intro_to_tensorflow_for_deep_learning/l07c01_saving_and_loading_models.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment in different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Download models to Local DiskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
!pip install -q -U "tensorflow-gpu==2.0.0rc0"
!pip install -q -U tensorflow_hub
!pip install -q -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
splits = tfds.Split.ALL.subsplit(weighted=(80, 20))
splits, info = tfds.load('cats_vs_dogs', with_info=True, as_supervised=True, split = splits)
(train_examples, validation_examples) = splits
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
  # `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2, activation='softmax')
])
model.summary()
###Output
_____no_output_____
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` modelNow that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep TrainingBesides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` model as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for Tensorflow objects, supported by TensorFlow serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying Tensorflow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This functions takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
_____no_output_____
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full keras model from the Tensorflow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
# Clone the model to undo the `.compile`
# This is a Workaround for a bug when keras loads a hub saved_model
model = tf.keras.models.clone_model(model)
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
_____no_output_____
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
_____no_output_____
###Markdown
The zip file is saved in the current working directory. You can see what the current working directory is by running:
###Code
!ls
###Output
_____no_output_____
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment in different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Download models to Local DiskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
!pip install -q -U "tensorflow-gpu==2.0.0rc0"
!pip install -q -U tensorflow_hub
!pip install -q -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
splits = tfds.Split.ALL.subsplit(weighted=(80, 20))
splits, info = tfds.load('cats_vs_dogs', with_info=True, as_supervised=True, split = splits)
(train_examples, validation_examples) = splits
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
  # `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2, activation='softmax')
])
model.summary()
###Output
_____no_output_____
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` modelNow that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep TrainingBesides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` model as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
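###Markdown
To get a feel for how the continued training went, we can optionally evaluate the reloaded model on the validation batches. This extra check is not part of the original workflow; since the `.h5` file preserved the compile configuration, `evaluate` returns the loss and the accuracy metric we asked for.
###Code
# Optional sanity check (assumes the compile config was restored from the .h5 file):
# evaluate the reloaded model on the validation set.
loss, accuracy = reloaded.evaluate(validation_batches)
print("Validation loss: {:.4f}, accuracy: {:.4f}".format(loss, accuracy))
###Output
_____no_output_____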
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for TensorFlow objects, supported by TensorFlow Serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying TensorFlow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This function takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
_____no_output_____
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
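###Markdown
Before making predictions, we can optionally peek at the serving signatures bundled with the SavedModel. For a Keras model exported with `tf.saved_model.save` we would typically expect a single `serving_default` entry; this is just an informal sanity check.
###Code
# Optional: list the serving signatures stored in the SavedModel.
# A Keras model exported this way usually exposes a 'serving_default' signature.
list(reloaded_sm.signatures.keys())
###Output
_____no_output_____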
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full Keras model from the TensorFlow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
# Clone the model to undo the `.compile`
# This is a Workaround for a bug when keras loads a hub saved_model
model = tf.keras.models.clone_model(model)
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
_____no_output_____
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
_____no_output_____
###Markdown
The zip file is saved in the current working directory. You can confirm this by listing the files in the current working directory:
###Code
!ls
###Output
_____no_output_____
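###Markdown
If we also want the full path of that directory (for example, to locate `model.zip` later), a quick way is to print it directly:
###Code
import os

# Print the absolute path of the current working directory.
# (In Colab, `!pwd` would show the same thing.)
print(os.getcwd())
###Output
_____no_output_____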
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment on different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Downloading models to local diskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
!pip install -U "tensorflow-gpu==2.0.0rc0"
!pip install -U tensorflow_hub
!pip install -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
(train_examples, validation_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[:80%]', 'train[80%:]'],
with_info=True,
as_supervised=True,
)
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
  # `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2, activation='softmax')
])
model.summary()
###Output
_____no_output_____
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` modelNow that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
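###Markdown
Out of curiosity, we can also check how large the exported HDF5 file is. This step is optional; the exact size depends mostly on the MobileNet weights, but it gives a feel for what was written to disk.
###Code
# Optional: show the size of the exported .h5 file.
!ls -lh {export_path_keras}
###Output
_____no_output_____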
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep TrainingBesides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` model as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for TensorFlow objects, supported by TensorFlow Serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying TensorFlow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This function takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
_____no_output_____
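###Markdown
If the `saved_model_cli` tool that ships with TensorFlow is available on the PATH (it normally is in Colab), we can also inspect the exported model's serving signature from the command line. This is optional and only meant as a quick sketch of what the SavedModel exposes.
###Code
# Optional: inspect the SavedModel's serving signature with the saved_model_cli tool.
!saved_model_cli show --dir {export_path_sm} --tag_set serve --signature_def serving_default
###Output
_____no_output_____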
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full Keras model from the TensorFlow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
_____no_output_____
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
_____no_output_____
###Markdown
The zip file is saved in the current working directory. You can confirm this by listing the files in the current working directory:
###Code
!ls
###Output
_____no_output_____
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment on different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Downloading models to local diskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
!pip install -U tensorflow_hub
!pip install -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
(train_examples, validation_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[:80%]', 'train[80%:]'],
with_info=True,
as_supervised=True,
)
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
  # `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2)
])
model.summary()
###Output
_____no_output_____
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` modelNow that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
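###Markdown
Another way to confirm that the two models agree, without relying on exact floating-point equality, is to compare the predicted class indices. This check is not part of the original workflow; it simply verifies that both models pick the same class for every image in the batch.
###Code
# Optional: compare predicted class indices instead of raw outputs.
np.array_equal(np.argmax(result_batch, axis=-1),
               np.argmax(reloaded_result_batch, axis=-1))
###Output
_____no_output_____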
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep TrainingBesides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` model as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for TensorFlow objects, supported by TensorFlow Serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying TensorFlow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This function takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
_____no_output_____
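###Markdown
As a small optional sanity check, and assuming the `tf.saved_model.contains_saved_model` helper is available in this TensorFlow version, we can verify that the directory we just wrote really holds a SavedModel:
###Code
# Optional: confirm the export directory contains a valid SavedModel (should print True).
tf.saved_model.contains_saved_model(export_path_sm)
###Output
_____no_output_____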
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full Keras model from the TensorFlow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
_____no_output_____
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
_____no_output_____
###Markdown
The zip file is saved in the current working directory. You can confirm this by listing the files in the current working directory:
###Code
!ls
###Output
_____no_output_____
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment on different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Downloading models to local diskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
!pip install -U tensorflow_hub
!pip install -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
(train_examples, validation_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[:80%]', 'train[80%:]'],
with_info=True,
as_supervised=True,
)
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
  # `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2)
])
model.summary()
###Output
_____no_output_____
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
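###Markdown
To put a single number on this batch, we can optionally compute the fraction of images that were classified correctly. Keep in mind this is only the accuracy on one training batch, not a proper evaluation of the model.
###Code
# Optional: fraction of correctly classified images in this single batch.
batch_accuracy = np.mean(predicted_ids == label_batch)
print("Accuracy on this batch: {:.2%}".format(batch_accuracy))
###Output
_____no_output_____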
###Markdown
Part 3: Save as Keras `.h5` modelNow that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment on different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Downloading models to local diskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
!pip install -U "tensorflow-gpu==2.0.0rc0"
!pip install -U tensorflow_hub
!pip install -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
(train_examples, validation_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[:80%]', 'train[80%:]'],
with_info=True,
as_supervised=True,
)
###Output
_____no_output_____
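###Markdown
Before preprocessing, we can optionally take a quick look at the metadata that `tfds.load` returned, such as the number of training examples and the class names, to confirm we got what we expect.
###Code
# Optional: peek at the dataset metadata.
print("Number of training examples:", info.splits['train'].num_examples)
print("Number of classes:", info.features['label'].num_classes)
print("Class names:", info.features['label'].names)
###Output
_____no_output_____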
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
  # `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2)
])
model.summary()
###Output
_____no_output_____
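###Markdown
Because the feature extractor is frozen, only the weights of the new `Dense` layer should be trainable. We can optionally confirm this by listing the model's trainable variables; we expect just the kernel and bias of the classification layer.
###Code
# Optional: only the Dense classification layer should contribute trainable variables.
[v.name for v in model.trainable_variables]
###Output
_____no_output_____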
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` modelNow that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep TrainingBesides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` model as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for TensorFlow objects, supported by TensorFlow Serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying TensorFlow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This function takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
_____no_output_____
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full Keras model from the TensorFlow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
_____no_output_____
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
_____no_output_____
###Markdown
The zip file is saved in the current working directory. You can confirm this by listing the files in the current working directory:
###Code
!ls
###Output
_____no_output_____
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment on different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Downloading models to local diskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
!pip install -U tensorflow_hub
!pip install -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
(train_examples, validation_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[:80%]', 'train[80%:]'],
    # We can get metadata from the dataset
with_info=True,
# We get the corresponding labels of the images
as_supervised=True,
)
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
  # `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
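###Markdown
As a quick optional check that the input pipeline produces what MobileNet expects, we can pull a single batch and look at its shapes; the images should come out as `(32, 224, 224, 3)` and the labels as `(32,)`.
###Code
# Optional: inspect the shape of one batch from the training pipeline.
for sample_images, sample_labels in train_batches.take(1):
  print("Image batch shape:", sample_images.shape)
  print("Label batch shape:", sample_labels.shape)
###Output
_____no_output_____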
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2)
])
model.summary()
###Output
_____no_output_____
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` modelNow that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
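###Markdown
(Optional) A minimal programmatic check, added here as a sketch rather than part of the original lesson: `np.testing.assert_allclose` raises an error if the two prediction batches differ by more than the given tolerance.
###Code
# Hedged sketch: assert that both models agree on this batch.
# A small atol allows for floating-point noise; the call raises if it is exceeded.
np.testing.assert_allclose(result_batch, reloaded_result_batch, atol=1e-5)
###Output
_____no_output_____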
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep TrainingBesides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for TensorFlow objects, supported by TensorFlow Serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying TensorFlow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This function takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
_____no_output_____
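###Markdown
(Optional) Assuming TensorFlow's `saved_model_cli` tool is available on the machine (it ships with the TensorFlow pip package), we can also inspect the exported serving signature from the shell. This cell is a sketch added for illustration, not part of the original lesson.
###Code
# Hedged sketch: show the inputs and outputs of the default serving signature.
!saved_model_cli show --dir {export_path_sm} --tag_set serve --signature_def serving_default
###Output
_____no_output_____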
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
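###Markdown
(Optional) The object returned by `tf.saved_model.load` also exposes the serving signatures that were exported; listing them is a quick way to see how the SavedModel can be called. This cell is a sketch added for illustration, not part of the original lesson.
###Code
# Hedged sketch: list the serving signatures attached to the loaded SavedModel.
# A Keras model saved with tf.saved_model.save typically exposes 'serving_default'.
print(list(reloaded_sm.signatures.keys()))
###Output
_____no_output_____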
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full keras model from the Tensorflow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
_____no_output_____
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
_____no_output_____
###Markdown
The zip file is saved in the current working directory. You can list the contents of the current working directory by running:
###Code
!ls
###Output
_____no_output_____
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment in different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Download models to Local DiskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
!pip install -U tensorflow_hub
!pip install -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
(train_examples, validation_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[:80%]', 'train[80%:]'],
with_info=True,
as_supervised=True,
)
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
  # `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2)
])
model.summary()
###Output
_____no_output_____
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` modelNow that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep TrainingBesides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for TensorFlow objects, supported by TensorFlow Serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying TensorFlow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This function takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
_____no_output_____
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full keras model from the Tensorflow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
_____no_output_____
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
_____no_output_____
###Markdown
The zip file is saved in the current working directory. You can list the contents of the current working directory by running:
###Code
!ls
###Output
_____no_output_____
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment in different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Download models to Local DiskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
!pip install -U "tensorflow-gpu==2.0.0rc0"
!pip install -U tensorflow_hub
!pip install -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
splits = tfds.Split.ALL.subsplit(weighted=(80, 20))
splits, info = tfds.load('cats_vs_dogs', with_info=True, as_supervised=True, split = splits)
(train_examples, validation_examples) = splits
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
  # `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2, activation='softmax')
])
model.summary()
###Output
_____no_output_____
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` modelNow that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep TrainingBesides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for TensorFlow objects, supported by TensorFlow Serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying TensorFlow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This function takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
_____no_output_____
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full keras model from the Tensorflow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
_____no_output_____
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
_____no_output_____
###Markdown
The zip file is saved in the current working directory. You can list the contents of the current working directory by running:
###Code
!ls
###Output
_____no_output_____
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment in different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Download models to Local DiskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
!pip install -q -U "tensorflow-gpu==2.0.0rc0"
!pip install -q -U tensorflow_hub
!pip install -q -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
splits = tfds.Split.ALL.subsplit(weighted=(80, 20))
splits, info = tfds.load('cats_vs_dogs', with_info=True, as_supervised=True, split = splits)
(train_examples, validation_examples) = splits
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
  # `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2, activation='softmax')
])
model.summary()
###Output
_____no_output_____
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` modelNow that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep TrainingBesides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for TensorFlow objects, supported by TensorFlow Serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying TensorFlow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This function takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
_____no_output_____
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full keras model from the Tensorflow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
_____no_output_____
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
_____no_output_____
###Markdown
The zip file is saved in the current working directory. You can list the contents of the current working directory by running:
###Code
!ls
###Output
_____no_output_____
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment in different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Download models to Local DiskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
!pip install -U tensorflow_hub
!pip install -U tensorflow_datasets
###Output
Requirement already up-to-date: tensorflow_hub in /usr/local/lib/python3.6/dist-packages (0.10.0)
Requirement already satisfied, skipping upgrade: numpy>=1.12.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow_hub) (1.19.4)
Requirement already satisfied, skipping upgrade: protobuf>=3.8.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow_hub) (3.12.4)
Requirement already satisfied, skipping upgrade: six>=1.9 in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.8.0->tensorflow_hub) (1.15.0)
Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.8.0->tensorflow_hub) (50.3.2)
Requirement already up-to-date: tensorflow_datasets in /usr/local/lib/python3.6/dist-packages (4.1.0)
Requirement already satisfied, skipping upgrade: attrs>=18.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (20.3.0)
Requirement already satisfied, skipping upgrade: tensorflow-metadata in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (0.26.0)
Requirement already satisfied, skipping upgrade: termcolor in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (1.1.0)
Requirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (1.19.4)
Requirement already satisfied, skipping upgrade: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (0.8)
Requirement already satisfied, skipping upgrade: promise in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (2.3)
Requirement already satisfied, skipping upgrade: future in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (0.16.0)
Requirement already satisfied, skipping upgrade: absl-py in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (0.10.0)
Requirement already satisfied, skipping upgrade: requests>=2.19.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (2.23.0)
Requirement already satisfied, skipping upgrade: typing-extensions; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (3.7.4.3)
Requirement already satisfied, skipping upgrade: importlib-resources; python_version < "3.9" in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (3.3.0)
Requirement already satisfied, skipping upgrade: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (3.12.4)
Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (1.15.0)
Requirement already satisfied, skipping upgrade: dill in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (0.3.3)
Requirement already satisfied, skipping upgrade: tqdm in /usr/local/lib/python3.6/dist-packages (from tensorflow_datasets) (4.41.1)
Requirement already satisfied, skipping upgrade: googleapis-common-protos<2,>=1.52.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-metadata->tensorflow_datasets) (1.52.0)
Requirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow_datasets) (1.24.3)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow_datasets) (2020.12.5)
Requirement already satisfied, skipping upgrade: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow_datasets) (2.10)
Requirement already satisfied, skipping upgrade: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow_datasets) (3.0.4)
Requirement already satisfied, skipping upgrade: zipp>=0.4; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from importlib-resources; python_version < "3.9"->tensorflow_datasets) (3.4.0)
Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.6.1->tensorflow_datasets) (50.3.2)
###Markdown
Some normal imports we've seen before.
###Code
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
(train_examples, validation_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[:80%]', 'train[80%:]'],
with_info=True,
as_supervised=True,
)
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
  # `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
#train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
#validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
train_batches = train_examples.shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2)
])
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
keras_layer (KerasLayer) (None, 1280) 2257984
_________________________________________________________________
dense (Dense) (None, 2) 2562
=================================================================
Total params: 2,260,546
Trainable params: 2,562
Non-trainable params: 2,257,984
_________________________________________________________________
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
Epoch 1/3
582/582 [==============================] - 58s 66ms/step - loss: 0.1006 - accuracy: 0.9635 - val_loss: 0.0326 - val_accuracy: 0.9893
Epoch 2/3
582/582 [==============================] - 37s 57ms/step - loss: 0.0325 - accuracy: 0.9888 - val_loss: 0.0318 - val_accuracy: 0.9890
Epoch 3/3
582/582 [==============================] - 37s 56ms/step - loss: 0.0295 - accuracy: 0.9903 - val_loss: 0.0346 - val_accuracy: 0.9886
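###Markdown
(Optional) The `history` object returned by `fit` records the metrics for each epoch. The cell below is a sketch added for illustration, not part of the original notebook; it plots the training and validation accuracy curves.
###Code
# Hedged sketch: plot the accuracy curves stored in history.history.
# The keys 'accuracy' and 'val_accuracy' exist because metrics=['accuracy'] was used.
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
epochs_range = range(len(acc))
plt.figure(figsize=(6, 4))
plt.plot(epochs_range, acc, label='Training accuracy')
plt.plot(epochs_range, val_acc, label='Validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____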
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` model Now that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
1608539046.h5 sample_data
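###Markdown
As an optional check, we can peek inside the saved HDF5 file with `h5py`. This is only an illustrative sketch; the exact top-level groups and attribute names depend on the Keras version that wrote the file.
###Code
import h5py

# List the top-level groups (typically 'model_weights' and 'optimizer_weights')
# and a couple of metadata attributes stored by Keras.
with h5py.File(export_path_keras, 'r') as f:
    print(list(f.keys()))
    print(f.attrs.get('keras_version'), f.attrs.get('backend'))
###Output
_____no_output_____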
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
keras_layer (KerasLayer) (None, 1280) 2257984
_________________________________________________________________
dense (Dense) (None, 2) 2562
=================================================================
Total params: 2,260,546
Trainable params: 2,562
Non-trainable params: 2,257,984
_________________________________________________________________
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep Training Besides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` model as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
Epoch 1/3
582/582 [==============================] - 40s 57ms/step - loss: 0.0224 - accuracy: 0.9921 - val_loss: 0.0331 - val_accuracy: 0.9899
Epoch 2/3
582/582 [==============================] - 37s 56ms/step - loss: 0.0189 - accuracy: 0.9932 - val_loss: 0.0335 - val_accuracy: 0.9899
Epoch 3/3
582/582 [==============================] - 37s 56ms/step - loss: 0.0171 - accuracy: 0.9939 - val_loss: 0.0320 - val_accuracy: 0.9897
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for Tensorflow objects, supported by TensorFlow serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying Tensorflow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This functions takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
assets saved_model.pb variables
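###Markdown
Optionally, the `saved_model_cli` tool that ships with TensorFlow can inspect the exported model. The command below is a quick sketch; the signature name shown may differ depending on how the model was exported.
###Code
!saved_model_cli show --dir {export_path_sm} --tag_set serve --signature_def serving_default
###Output
_____no_output_____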
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full keras model from the Tensorflow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
keras_layer (KerasLayer) (None, 1280) 2257984
_________________________________________________________________
dense (Dense) (None, 2) 2562
=================================================================
Total params: 2,260,546
Trainable params: 2,226,434
Non-trainable params: 34,112
_________________________________________________________________
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
adding: 1608539266/ (stored 0%)
adding: 1608539266/saved_model.pb (deflated 92%)
adding: 1608539266/assets/ (stored 0%)
adding: 1608539266/variables/ (stored 0%)
adding: 1608539266/variables/variables.data-00000-of-00001 (deflated 8%)
adding: 1608539266/variables/variables.index (deflated 78%)
###Markdown
The zip file is saved in the current working directory. You can see what the current working directory is by running:
###Code
!ls
###Output
1608539046.h5 1608539258 1608539266 model.zip sample_data
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment in different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Download models to Local DiskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
!pip install -q -U "tensorflow-gpu==2.0.0rc0"
!pip install -q -U tensorflow_hub
!pip install -q -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
splits = tfds.Split.ALL.subsplit(weighted=(80, 20))
splits, info = tfds.load('cats_vs_dogs', with_info=True, as_supervised=True, split = splits)
(train_examples, validation_examples) = splits
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
# `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2, activation='softmax')
])
model.summary()
###Output
_____no_output_____
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` model Now that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep Training Besides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` model as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for Tensorflow objects, supported by TensorFlow serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying Tensorflow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This functions takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
_____no_output_____
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full keras model from the Tensorflow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
# Clone the model to undo the `.compile`
# This is a Workaround for a bug when keras loads a hub saved_model
model = tf.keras.models.clone_model(model)
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
_____no_output_____
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
_____no_output_____
###Markdown
The zip file is saved in the current working directory. You can see what the current working directory is by running:
###Code
!ls
###Output
_____no_output_____
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub Saving and Loading ModelsIn this tutorial we will learn how we can take a trained model, save it, and then load it back to keep training it or use it to perform inference. In particular, we will use transfer learning to train a classifier to classify images of cats and dogs, just like we did in the previous lesson. We will then take our trained model and save it as an HDF5 file, which is the format used by Keras. We will then load this model, use it to perform predictions, and then continue to train the model. Finally, we will save our trained model as a TensorFlow SavedModel and then we will download it to a local disk, so that it can later be used for deployment in different platforms. Concepts that will be covered in this Colab1. Saving models in HDF5 format for Keras2. Saving models in the TensorFlow SavedModel format3. Loading models4. Download models to Local DiskBefore starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. ImportsIn this Colab we will use the TensorFlow 2.0 Beta version.
###Code
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
!pip install -U "tensorflow-gpu==2.0.0rc0"
!pip install -U tensorflow_hub
!pip install -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Part 1: Load the Cats vs. Dogs Dataset We will use TensorFlow Datasets to load the Dogs vs Cats dataset.
###Code
(train_examples, validation_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[:80%]', 'train[80%:]'],
with_info=True,
as_supervised=True,
)
###Output
_____no_output_____
###Markdown
The images in the Dogs vs. Cats dataset are not all the same size. So, we need to reformat all images to the resolution expected by MobileNet (224, 224)
###Code
def format_image(image, label):
# `hub` image modules expect their data normalized to the [0,1] range.
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
num_examples = info.splits['train'].num_examples
BATCH_SIZE = 32
IMAGE_RES = 224
train_batches = train_examples.cache().shuffle(num_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.cache().map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Part 2: Transfer Learning with TensorFlow HubWe will now use TensorFlow Hub to do Transfer Learning.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES,3))
###Output
_____no_output_____
###Markdown
Freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Attach a classification headNow wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(2)
])
model.summary()
###Output
_____no_output_____
###Markdown
Train the modelWe now train this model like any other, by first calling `compile` followed by `fit`.
###Code
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 3
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Check the predictionsGet the ordered list of class names.
###Code
class_names = np.array(info.features['label'].names)
class_names
###Output
_____no_output_____
###Markdown
Run an image batch through the model and convert the indices to class names.
###Code
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
predicted_class_names
###Output
_____no_output_____
###Markdown
Let's look at the true labels and predicted ones.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Part 3: Save as Keras `.h5` model Now that we've trained the model, we can save it as an HDF5 file, which is the format used by Keras. Our HDF5 file will have the extension '.h5', and its name will correspond to the current time stamp.
###Code
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
!ls
###Output
_____no_output_____
###Markdown
You can later recreate the same model from this file, even if you no longer have access to the code that created the model.This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this enables you to restart training where you left off) Part 4: Load the Keras `.h5` ModelWe will now load the model we just saved into a new model called `reloaded`. We will need to provide the file path and the `custom_objects` parameter. This parameter tells keras how to load the `hub.KerasLayer` from the `feature_extractor` we used for transfer learning.
###Code
reloaded = tf.keras.models.load_model(
export_path_keras,
# `custom_objects` tells keras how to load a `hub.KerasLayer`
custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
###Output
_____no_output_____
###Markdown
We can check that the reloaded model and the previous model give the same result
###Code
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
###Output
_____no_output_____
###Markdown
The difference in output should be zero:
###Code
(abs(result_batch - reloaded_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Keep Training Besides making predictions, we can also take our `reloaded` model and keep training it. To do this, you can just train the `reloaded` model as usual, using the `.fit` method.
###Code
EPOCHS = 3
history = reloaded.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Part 5: Export as SavedModel You can also export a whole model to the TensorFlow SavedModel format. SavedModel is a standalone serialization format for Tensorflow objects, supported by TensorFlow serving as well as TensorFlow implementations other than Python. A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TFHub).The SavedModel files that were created contain:* A TensorFlow checkpoint containing the model weights.* A SavedModel proto containing the underlying Tensorflow graph. Separate graphs are saved for prediction (serving), train, and evaluation. If the model wasn't compiled before, then only the inference graph gets exported.* The model's architecture config, if available.Let's save our original `model` as a TensorFlow SavedModel. To do this we will use the `tf.saved_model.save()` function. This functions takes in the model we want to save and the path to the folder where we want to save our model. This function will create a folder where you will find an `assets` folder, a `variables` folder, and the `saved_model.pb` file.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
!ls {export_path_sm}
###Output
_____no_output_____
###Markdown
Part 6: Load SavedModel Now, let's load our SavedModel and use it to make predictions. We use the `tf.saved_model.load()` function to load our SavedModels. The object returned by `tf.saved_model.load` is 100% independent of the code that created it.
###Code
reloaded_sm = tf.saved_model.load(export_path_sm)
###Output
_____no_output_____
###Markdown
Now, let's use the `reloaded_sm` (reloaded SavedModel) to make predictions on a batch of images.
###Code
reload_sm_result_batch = reloaded_sm(image_batch, training=False).numpy()
###Output
_____no_output_____
###Markdown
We can check that the reloaded SavedModel and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_result_batch)).max()
###Output
_____no_output_____
###Markdown
As we can see, the result is 0.0, which indicates that both models made the same predictions on the same batch of images. Part 7: Loading the SavedModel as a Keras ModelThe object returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have `.fit`, `.predict`, `.summary`, etc. methods). Therefore, you can't simply take your `reloaded_sm` model and keep training it by running `.fit`. To be able to get back a full keras model from the Tensorflow SavedModel format we must use the `tf.keras.models.load_model` function. This function will work the same as before, except now we pass the path to the folder containing our SavedModel.
###Code
t = time.time()
export_path_sm = "./{}".format(int(t))
print(export_path_sm)
tf.saved_model.save(model, export_path_sm)
reload_sm_keras = tf.keras.models.load_model(
export_path_sm,
custom_objects={'KerasLayer': hub.KerasLayer})
reload_sm_keras.summary()
###Output
_____no_output_____
###Markdown
Now, let's use the `reload_sm_keras` (reloaded Keras model from our SavedModel) to make predictions on a batch of images.
###Code
result_batch = model.predict(image_batch)
reload_sm_keras_result_batch = reload_sm_keras.predict(image_batch)
###Output
_____no_output_____
###Markdown
We can check that the reloaded Keras model and the previous model give the same result.
###Code
(abs(result_batch - reload_sm_keras_result_batch)).max()
###Output
_____no_output_____
###Markdown
Part 8: Download your model You can download the SavedModel to your local disk by creating a zip file. We will use the `-r` (recursive) option to zip all subfolders.
###Code
!zip -r model.zip {export_path_sm}
###Output
_____no_output_____
###Markdown
The zip file is saved in the current working directory. You can see what the current working directory is by running:
###Code
!ls
###Output
_____no_output_____
###Markdown
Once the file is zipped, you can download it to your local disk.
###Code
try:
from google.colab import files
files.download('./model.zip')
except ImportError:
pass
###Output
_____no_output_____ |
PyTorch/Simple Neural Network/PyTorch Simple Neural Network.ipynb | ###Markdown
Steps to create a simple NN ---* Import required libraries* Create Neural Network model* Set device* Set hyperparameters* Load data* Initialize network* Set loss and optimizer* Train network* Evaluation on test set Step 1---Import required libraries
###Code
import os
import torch
import torch.nn as nn  # has layers like Linear and Conv, which have parameters, and also contains loss functions and activation functions that don't have parameters
import torch.optim as optim  # optimization algorithms like gradient descent, Adam, stochastic gradient descent, etc.
import torch.nn.functional as F  # activation functions that don't have parameters, like relu, sigmoid, etc.
from torch.utils.data import DataLoader  # helps to create minibatches and prepare datasets easily
import torchvision.datasets as datasets  # built-in datasets that are already available
import torchvision.transforms as transforms  # transform data
###Output
_____no_output_____
###Markdown
Step 2---Create NN model
###Code
class NN(nn.Module):
def __init__(self, num_features_size, num_classes):
super(NN, self).__init__()
self.fc1 = nn.Linear(in_features=num_features_size, out_features=30)
self.fc2 = nn.Linear(in_features=30, out_features=num_classes)
def forward(self, x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return x
###Output
_____no_output_____
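###Markdown
A quick optional sanity check (not required for training): pass a random batch through an untrained instance of the model and confirm the output shape is (batch_size, num_classes).
###Code
# Illustrative shape check on random data
check_model = NN(num_features_size=28*28, num_classes=10)
x = torch.randn(64, 28*28)
print(check_model(x).shape)  # expected: torch.Size([64, 10])
###Output
_____no_output_____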
###Markdown
Step 3---Initialize device
###Code
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
###Output
_____no_output_____
###Markdown
Step 4---Set Hyperparameters
###Code
num_features_size = 28*28
num_classes = 10
learning_rate = .001
batch_size = 64
num_epoch = 100
###Output
_____no_output_____
###Markdown
Step 5---Load data
###Code
# !pwd
os.chdir('/content/drive/MyDrive/Colab Notebooks/PyTorch Notebooks/Simple Neural Network')
# !pwd
train_dataset = datasets.MNIST(root='dataset/', train=True, transform=transforms.ToTensor(), download=True)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_dataset = datasets.MNIST(root='dataset/', train=False, transform=transforms.ToTensor(), download=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
Step 6---Initialize network
###Code
model = NN(num_features_size=num_features_size, num_classes=num_classes).to(device)
###Output
_____no_output_____
###Markdown
Step 7---Set Loss and Optimizer
###Code
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), learning_rate)
###Output
_____no_output_____
###Markdown
Step 8---Train network
###Code
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = X.reshape(X.shape[0],-1).to(device), y.to(device)
# Compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
def test(dataloader, model, loss_fn):
size = len(dataloader.dataset)
num_batches = len(dataloader)
model.eval()
test_loss, correct = 0, 0
with torch.no_grad():
for X, y in dataloader:
X, y = X.reshape(X.shape[0],-1).to(device), y.to(device)
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= num_batches
correct /= size
print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
epochs = 50
for t in range(epochs):
print(f"Epoch {t+1}\n-------------------------------")
train(train_loader, model, criterion, optimizer)
test(test_loader, model, criterion)
print("Done!")
###Output
_____no_output_____ |
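###Markdown
Once training is done, the network can be used for inference. A minimal sketch on a single test batch (assuming the loaders and device defined above):
###Code
# Predict digit classes for one batch from the test loader
model.eval()
with torch.no_grad():
    images, labels = next(iter(test_loader))
    images = images.reshape(images.shape[0], -1).to(device)
    preds = model(images).argmax(dim=1)
print("Predicted:", preds[:10].cpu().tolist())
print("Actual:   ", labels[:10].tolist())
###Output
_____no_output_____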
03 scripts/.ipynb_checkpoints/nlp_code_snippets-checkpoint.ipynb | ###Markdown
Load Data
###Code
import pandas as pd

train_df = pd.read_csv('../train.csv', sep="\t")
test_df = pd.read_csv("../test.csv", sep="\t")
###Output
_____no_output_____
###Markdown
Sentiment Analysis
###Code
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
nltk.download('vader_lexicon')  # the VADER lexicon is required once before first use
sentiments = SentimentIntensityAnalyzer()
# The following pandas series contains a dictionary. Faster to do
# it this way and later split it based on columns
# (Slower, as pandas apply operation uses only single core)
train_df["answer_polarity_scores"] = train_df['answer_text'].apply(
lambda x: sentiments.polarity_scores(x))
test_df["answer_polarity_scores"] = test_df['answer_text'].apply(
lambda x: sentiments.polarity_scores(x))
train_df["question_polarity_scores"] = train_df['question_text'].apply(
lambda x: sentiments.polarity_scores(x))
test_df["question_polarity_scores"] = test_df['question_text'].apply(
lambda x: sentiments.polarity_scores(x))
## This could easily be done by creating a function or for loop
## I just did it during the hackathon. But if you are combining
## everything, just write a function to make it look more beautiful
##############
# Train data #
##############
# Sentiment for question_text (q_***)
train_df['q_compound'] = train_df["question_polarity_scores"].apply(lambda x: x['compound'])
train_df['q_pos']= train_df["question_polarity_scores"].apply(lambda x: x['pos'])
train_df['q_neg']= train_df["question_polarity_scores"].apply(lambda x: x['neg'])
train_df['q_neu']= train_df["question_polarity_scores"].apply(lambda x: x['neu'])
# Sentiment for answer_text (a_***)
train_df['a_compound'] = train_df["answer_polarity_scores"].apply(lambda x: x['compound'])
train_df['a_pos']= train_df["answer_polarity_scores"].apply(lambda x: x['pos'])
train_df['a_neg']= train_df["answer_polarity_scores"].apply(lambda x: x['neg'])
train_df['a_neu']= train_df["answer_polarity_scores"].apply(lambda x: x['neu'])
#############
# Test data #
#############
# Sentiment for question_text (q_***)
test_df['q_compound'] = test_df["question_polarity_scores"].apply(lambda x: x['compound'])
test_df['q_pos']= test_df["question_polarity_scores"].apply(lambda x: x['pos'])
test_df['q_neg']= test_df["question_polarity_scores"].apply(lambda x: x['neg'])
test_df['q_neu']= test_df["question_polarity_scores"].apply(lambda x: x['neu'])
# Sentiment for answer_text (a_***)
test_df['a_compound'] = test_df["answer_polarity_scores"].apply(lambda x: x['compound'])
test_df['a_pos']= test_df["answer_polarity_scores"].apply(lambda x: x['pos'])
test_df['a_neg']= test_df["answer_polarity_scores"].apply(lambda x: x['neg'])
test_df['a_neu']= test_df["answer_polarity_scores"].apply(lambda x: x['neu'])
# Now drop the columns that contains the dictionary of sentiments
# It is not needed anymore
train_df.drop(["question_polarity_scores", "answer_polarity_scores"],
axis=1, inplace=True)
test_df.drop(["question_polarity_scores", "answer_polarity_scores"],
axis=1, inplace=True)
###Output
_____no_output_____
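###Markdown
As the comments above suggest, the column-splitting can be wrapped in a small helper. A minimal sketch (the function name and prefix argument are illustrative, not part of the original code):
###Code
def add_sentiment_columns(df, text_col, prefix):
    """Compute VADER polarity scores for text_col and join them as prefixed columns."""
    scores = df[text_col].apply(lambda x: sentiments.polarity_scores(x))
    score_df = pd.DataFrame(scores.tolist(), index=df.index).add_prefix(prefix)
    return df.join(score_df)

# Example usage (would reproduce the q_*/a_* style columns created above):
# train_df = add_sentiment_columns(train_df, 'question_text', 'q_')
# train_df = add_sentiment_columns(train_df, 'answer_text', 'a_')
###Output
_____no_output_____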
###Markdown
TF-IDF features
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline, make_union
from sklearn.base import BaseEstimator, TransformerMixin
vectorizer = TfidfVectorizer(stop_words='english',
max_features=10,
use_idf=True,
norm='l1'
)
# TF-IDF vectorizer for train and test sets on question_text
# Note: DataFrame.join returns a new DataFrame, so the result must be assigned back;
# add_prefix keeps the question/answer TF-IDF columns from colliding with each other.
train_q = vectorizer.fit_transform(train_df["question_text"])
train_q_df = pd.DataFrame(train_q.toarray(), columns=vectorizer.get_feature_names()).add_prefix('q_tfidf_')
train_df = train_df.join(train_q_df)
test_q = vectorizer.transform(test_df['question_text'])
test_q_df = pd.DataFrame(test_q.toarray(), columns=vectorizer.get_feature_names()).add_prefix('q_tfidf_')
test_df = test_df.join(test_q_df)

# TF-IDF vectorizer for train and test sets on answer_text
train_a = vectorizer.fit_transform(train_df["answer_text"])
train_a_df = pd.DataFrame(train_a.toarray(), columns=vectorizer.get_feature_names()).add_prefix('a_tfidf_')
train_df = train_df.join(train_a_df)
test_a = vectorizer.transform(test_df['answer_text'])
test_a_df = pd.DataFrame(test_a.toarray(), columns=vectorizer.get_feature_names()).add_prefix('a_tfidf_')
test_df = test_df.join(test_a_df)
set(train_df.columns) - set(test_df.columns)
###Output
_____no_output_____ |
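###Markdown
The pipeline imports above (`make_pipeline`, `make_union`, `BaseEstimator`, `TransformerMixin`) are not used yet; a minimal sketch of how they could combine both text columns into a single feature matrix is shown below. The column selectors and parameters here are illustrative assumptions, not part of the original notebook.
###Code
from sklearn.preprocessing import FunctionTransformer

# Select a single text column from the DataFrame for the downstream vectorizer
select_question = FunctionTransformer(lambda df: df['question_text'], validate=False)
select_answer = FunctionTransformer(lambda df: df['answer_text'], validate=False)

text_features = make_union(
    make_pipeline(select_question, TfidfVectorizer(stop_words='english', max_features=10)),
    make_pipeline(select_answer, TfidfVectorizer(stop_words='english', max_features=10)),
)

X_train_text = text_features.fit_transform(train_df)
X_test_text = text_features.transform(test_df)
###Output
_____no_output_____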
Tensorflow_Keras/Mask_RCNN/Mask_RCNN_Kangaroo_Dataset.ipynb | ###Markdown
Install Libraries Import the Mask_RCNN library from [Matterport Github](https://github.com/matterport/Mask_RCNN)
###Code
!git clone https://github.com/matterport/Mask_RCNN.git
###Output
Cloning into 'Mask_RCNN'...
remote: Enumerating objects: 956, done.[K
remote: Total 956 (delta 0), reused 0 (delta 0), pack-reused 956
Receiving objects: 100% (956/956), 111.90 MiB | 27.24 MiB/s, done.
Resolving deltas: 100% (571/571), done.
###Markdown
Install mask_rcnn library Edit the setup.py file for Colab to include the Mask_RCNN folder name before the required files, then install
###Code
!python3 Mask_RCNN/setup.py install
###Output
_____no_output_____
###Markdown
Confirm library was installed
###Code
!pip show mask-rcnn
###Output
Name: mask-rcnn
Version: 2.1
Summary: Mask R-CNN for object detection and instance segmentation
Home-page: https://github.com/matterport/Mask_RCNN
Author: Matterport
Author-email: [email protected]
License: MIT
Location: /usr/local/lib/python3.6/dist-packages/mask_rcnn-2.1-py3.6.egg
Requires:
Required-by:
###Markdown
Kangaroo Dataset Preparation The dataset is comprised of 183 photographs that contain kangaroos, and XML annotation files that provide bounding boxes for the kangaroos in each photograph. The Mask R-CNN is designed to learn to predict both bounding boxes for objects as well as masks for those detected objects, and the kangaroo dataset does not provide masks. As such, we will use the dataset to learn a kangaroo object detection task, ignoring the masks and not focusing on the image segmentation capabilities of the model. Clone the dataset from [Experiencor Github](https://github.com/experiencor/kangaroo)
###Code
!git clone https://github.com/experiencor/kangaroo.git
###Output
Cloning into 'kangaroo'...
remote: Enumerating objects: 334, done.[K
remote: Total 334 (delta 0), reused 0 (delta 0), pack-reused 334[K
Receiving objects: 100% (334/334), 18.39 MiB | 19.08 MiB/s, done.
Resolving deltas: 100% (158/158), done.
###Markdown
We can also see that the numbering system is not contiguous, that there are some photos missing, e.g. there is no ‘00007‘ JPG or XML. This means that we should focus on loading the list of actual files in the directory rather than using a numbering system. Develop KangarooDataset Object The size and the bounding boxes are the minimum information that we require from each annotation file. The mask-rcnn library requires that train, validation, and test datasets be managed by a mrcnn.utils.Dataset object. This means that a new class must be defined that extends the mrcnn.utils.Dataset class, defines a function to load the dataset, with any name you like such as load_dataset(), and overrides two functions, one for loading a mask called load_mask() and one for loading an image reference (path or URL) called image_reference(). The custom load function, e.g. load_dataset(), is responsible for both defining the classes and for defining the images in the dataset. Classes are defined by calling the built-in add_class() function and specifying the ‘source‘ (the name of the dataset), the ‘class_id‘ or integer for the class (e.g. 1 for the first class, as 0 is reserved for the background class), and the ‘class_name‘. Objects are defined by a call to the built-in add_image() function and specifying the ‘source‘ (the name of the dataset), a unique ‘image_id‘ (e.g. the filename without the file extension like ‘00001‘), and the path for where the image can be loaded (e.g. ‘kangaroo/images/00001.jpg‘). This will define an “image info” dictionary for the image that can be retrieved later via the index or order in which the image was added to the dataset. You can also specify other arguments that will be added to the image info dictionary, such as an ‘annotation‘ to define the annotation path. We will implement a load_dataset() function that takes the path to the dataset directory and loads all images in the dataset. Note, testing revealed that there is an issue with image number ‘00090‘, so we will exclude it from the dataset. We can go one step further and add one more argument to the function to define whether the Dataset instance is for training or test/validation. We have about 160 photos, so we can use about 20%, or the last 32 photos, as a test or validation dataset and the first 131, or 80%, as the training dataset. Next, we need to define the load_mask() function for loading the mask for a given ‘image_id‘. In this case, the ‘image_id‘ is the integer index for an image in the dataset, assigned based on the order that the image was added via a call to add_image() when loading the dataset. The function must return an array of one or more masks for the photo associated with the image_id, and the classes for each mask. We don’t have masks, but we do have bounding boxes. We can load the bounding boxes for a given photo and return them as masks. The library will then infer bounding boxes from our “masks”, which will be the same size.
A mask is a two-dimensional array with the same dimensions as the photograph with all zero values where the object isn’t and all one values where the object is in the photograph. We can achieve this by creating a NumPy array with all zero values for the known size of the image and one channel for each bounding box.
###Code
# split into train and test set
from os import listdir
from xml.etree import ElementTree
from numpy import zeros
from numpy import asarray
from Mask_RCNN.mrcnn.utils import Dataset
# class that defines and loads the kangaroo dataset
class KangarooDataset(Dataset):
# load the dataset definitions
def load_dataset(self, dataset_dir, is_train=True):
# define one class
self.add_class("dataset", 1, "kangaroo")
# define data locations
images_dir = dataset_dir + '/images/'
annotations_dir = dataset_dir + '/annots/'
# find all images
for filename in listdir(images_dir):
# extract image id
image_id = filename[:-4]
# skip bad images
if image_id in ['00090']:
continue
# skip all images after 150 if we are building the train set
if is_train and int(image_id) >= 150:
continue
# skip all images before 150 if we are building the test/val set
if not is_train and int(image_id) < 150:
continue
img_path = images_dir + filename
ann_path = annotations_dir + image_id + '.xml'
# add to dataset
self.add_image('dataset', image_id=image_id, path=img_path, annotation=ann_path)
# extract bounding boxes from an annotation file
def extract_boxes(self, filename):
# load and parse the file
tree = ElementTree.parse(filename)
# get the root of the document
root = tree.getroot()
# extract each bounding box
boxes = list()
for box in root.findall('.//bndbox'):
xmin = int(box.find('xmin').text)
ymin = int(box.find('ymin').text)
xmax = int(box.find('xmax').text)
ymax = int(box.find('ymax').text)
coors = [xmin, ymin, xmax, ymax]
boxes.append(coors)
# extract image dimensions
width = int(root.find('.//size/width').text)
height = int(root.find('.//size/height').text)
return boxes, width, height
# load the masks for an image
def load_mask(self, image_id):
# get details of image
info = self.image_info[image_id]
# define box file location
path = info['annotation']
# load XML
boxes, w, h = self.extract_boxes(path)
# create one array for all masks, each on a different channel
masks = zeros([h, w, len(boxes)], dtype='uint8')
# create masks
class_ids = list()
for i in range(len(boxes)):
box = boxes[i]
row_s, row_e = box[1], box[3]
col_s, col_e = box[0], box[2]
masks[row_s:row_e, col_s:col_e, i] = 1
class_ids.append(self.class_names.index('kangaroo'))
return masks, asarray(class_ids, dtype='int32')
# load an image reference
def image_reference(self, image_id):
info = self.image_info[image_id]
return info['path']
# train set
train_set = KangarooDataset()
train_set.load_dataset('kangaroo', is_train=True)
train_set.prepare()
print('Train: %d' % len(train_set.image_ids))
# test/val set
test_set = KangarooDataset()
test_set.load_dataset('kangaroo', is_train=False)
test_set.prepare()
print('Test: %d' % len(test_set.image_ids))
###Output
Train: 131
Test: 32
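###Markdown
A quick look at what the prepared dataset objects contain (illustrative only): the registered class names and the "image info" dictionary for the first image.
###Code
print(train_set.class_names)         # ['BG', 'kangaroo']
print(train_set.image_reference(0))  # path of the first image
print(train_set.image_info[0])       # the full image info dictionary
###Output
_____no_output_____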
###Markdown
**Test KangarooDataset Object**
###Code
# plot first few images
import matplotlib.pyplot as plt
for i in range(9):
# define subplot
plt.subplot(330 + 1 + i)
# plot raw pixel data
image = train_set.load_image(i)
plt.imshow(image)
# plot all masks
mask, _ = train_set.load_mask(i)
for j in range(mask.shape[2]):
plt.imshow(mask[:, :, j], cmap='gray', alpha=0.3)
# show the figure
plt.show()
###Output
_____no_output_____
###Markdown
Mask-rcnn library provides utilities for displaying images and masks. We can use some of these built-in functions to confirm that the Dataset is operating correctly.
###Code
from Mask_RCNN.mrcnn.utils import extract_bboxes
from Mask_RCNN.mrcnn.visualize import display_instances
# define image id
image_id = 1
# load the image
image = train_set.load_image(image_id)
# load the masks and the class ids
mask, class_ids = train_set.load_mask(image_id)
# extract bounding boxes from the masks
bbox = extract_bboxes(mask)
# display image with masks and bounding boxes
display_instances(image, bbox, mask, class_ids, train_set.class_names)
###Output
_____no_output_____
###Markdown
Train Mask R-CNN Model for Kangaroo Detection Download weights trained on the MSCOCO dataset
###Code
!wget https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5
###Output
--2019-08-03 14:09:59-- https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5
Resolving github.com (github.com)... 140.82.118.3
Connecting to github.com (github.com)|140.82.118.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/107595270/872d3234-d21f-11e7-9a51-7b4bc8075835?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190803%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190803T140959Z&X-Amz-Expires=300&X-Amz-Signature=5cf0936693542b6fdd74441cb8d6b0ba9f53c39667834824b5e4f43bde0bc63f&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dmask_rcnn_coco.h5&response-content-type=application%2Foctet-stream [following]
--2019-08-03 14:09:59-- https://github-production-release-asset-2e65be.s3.amazonaws.com/107595270/872d3234-d21f-11e7-9a51-7b4bc8075835?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190803%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190803T140959Z&X-Amz-Expires=300&X-Amz-Signature=5cf0936693542b6fdd74441cb8d6b0ba9f53c39667834824b5e4f43bde0bc63f&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dmask_rcnn_coco.h5&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.93.195
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.93.195|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 257557808 (246M) [application/octet-stream]
Saving to: ‘mask_rcnn_coco.h5’
mask_rcnn_coco.h5 100%[===================>] 245.63M 36.0MB/s in 7.4s
2019-08-03 14:10:07 (33.2 MB/s) - ‘mask_rcnn_coco.h5’ saved [257557808/257557808]
###Markdown
This class requires a configuration object as a parameter. The configuration object defines how the model might be used during training or inference.
###Code
from Mask_RCNN.mrcnn.config import Config
# define a configuration for the model
class KangarooConfig(Config):
# Give the configuration a recognizable name
NAME = "kangaroo_cfg"
# Number of classes (background + kangaroo)
NUM_CLASSES = 1 + 1
# Number of training steps per epoch
STEPS_PER_EPOCH = 131
# prepare config
config = KangarooConfig()
config.display()
# define the model
from Mask_RCNN.mrcnn.model import MaskRCNN
model = MaskRCNN(mode='training', model_dir='./', config=config)
# load weights (mscoco)
model.load_weights('mask_rcnn_coco.h5', by_name=True, exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"])
###Output
_____no_output_____
###Markdown
Since we are interested in object detection rather than object segmentation, pay attention to the loss for the classification output on the train and validation datasets (e.g. mrcnn_class_loss and val_mrcnn_class_loss), as well as the loss for the bounding box output on the train and validation datasets (mrcnn_bbox_loss and val_mrcnn_bbox_loss).
###Code
# train weights (output layers or 'heads')
model.train(train_set, test_set, learning_rate=config.LEARNING_RATE, epochs=5, layers='heads')
###Output
Starting at epoch 0. LR=0.001
Checkpoint Path: ./kangaroo_cfg20190803T1421/mask_rcnn_kangaroo_cfg_{epoch:04d}.h5
Selecting layers to train
fpn_c5p5 (Conv2D)
fpn_c4p4 (Conv2D)
fpn_c3p3 (Conv2D)
fpn_c2p2 (Conv2D)
fpn_p5 (Conv2D)
fpn_p2 (Conv2D)
fpn_p3 (Conv2D)
fpn_p4 (Conv2D)
In model: rpn_model
rpn_conv_shared (Conv2D)
rpn_class_raw (Conv2D)
rpn_bbox_pred (Conv2D)
mrcnn_mask_conv1 (TimeDistributed)
mrcnn_mask_bn1 (TimeDistributed)
mrcnn_mask_conv2 (TimeDistributed)
mrcnn_mask_bn2 (TimeDistributed)
mrcnn_class_conv1 (TimeDistributed)
mrcnn_class_bn1 (TimeDistributed)
mrcnn_mask_conv3 (TimeDistributed)
mrcnn_mask_bn3 (TimeDistributed)
mrcnn_class_conv2 (TimeDistributed)
mrcnn_class_bn2 (TimeDistributed)
mrcnn_mask_conv4 (TimeDistributed)
mrcnn_mask_bn4 (TimeDistributed)
mrcnn_bbox_fc (TimeDistributed)
mrcnn_mask_deconv (TimeDistributed)
mrcnn_class_logits (TimeDistributed)
mrcnn_mask (TimeDistributed)
###Markdown
A model file is created and saved at the end of each epoch in a subdirectory that starts with ‘kangaroo_cfg‘ followed by random characters. A model must be selected for use; in this case, the loss continues to decrease for the bounding boxes on each epoch, so we will use the final model at the end of the run (‘mask_rcnn_kangaroo_cfg_0005.h5‘). Copy the model file from the config directory into your current working directory. We will use it in the following sections to evaluate the model and make predictions. Evaluate Mask R-CNN Model The performance of a model for an object recognition task is often evaluated using the mean average precision, or mAP. Precision refers to the percentage of the correctly predicted bounding boxes (IoU > 0.5) out of all bounding boxes predicted. Recall is the percentage of the correctly predicted bounding boxes (IoU > 0.5) out of all objects in the photo. As we make more predictions, the recall percentage will increase, but precision will drop or become erratic as we start making false positive predictions. The recall (x) can be plotted against the precision (y) for each number of predictions to create a curve or line. We can maximize the value of each point on this line and calculate the average value of the precision, or AP, for each value of recall. The average or mean of the average precision (AP) across all of the images in a dataset is called the mean average precision, or mAP. The mask-rcnn library provides a mrcnn.utils.compute_ap to calculate the AP and other metrics for a given image. These AP scores can be collected across a dataset and the mean calculated to give an idea of how good the model is at detecting objects in a dataset. Config object for making predictions
###Code
# define the prediction configuration
class PredictionConfig(Config):
# define the name of the configuration
NAME = "kangaroo_cfg"
# number of classes (background + kangaroo)
NUM_CLASSES = 1 + 1
# simplify GPU config
GPU_COUNT = 1
IMAGES_PER_GPU = 1
# create config
cfg = PredictionConfig()
# define the model
model = MaskRCNN(mode='inference', model_dir='./', config=cfg)
# load model weights
model.load_weights('kangaroo_cfg20190803T1421/mask_rcnn_kangaroo_cfg_0005.h5', by_name=True)
###Output
_____no_output_____
###Markdown
First, the image and ground truth mask can be loaded from the dataset for a given image_id. This can be achieved using the load_image_gt() convenience function. Next, the pixel values of the loaded image must be scaled in the same way as was performed on the training data, e.g. centered. This can be achieved using the mold_image() convenience function. The dimensions of the image then need to be expanded so that it becomes one sample in a dataset (via expand_dims()) and can be used as input to make a prediction with the model. Next, the prediction can be compared to the ground truth and metrics calculated using the compute_ap() function. The AP values can be added to a list, then the mean value calculated.
###Code
# calculate the mAP for a model on a given dataset
from Mask_RCNN.mrcnn.model import mold_image, load_image_gt
from Mask_RCNN.mrcnn.utils import compute_ap
from numpy import expand_dims, mean
def evaluate_model(dataset, model, cfg):
APs = list()
for image_id in dataset.image_ids:
# load image, bounding boxes and masks for the image id
image, image_meta, gt_class_id, gt_bbox, gt_mask = load_image_gt(dataset, cfg, image_id, use_mini_mask=False)
# convert pixel values (e.g. center)
scaled_image = mold_image(image, cfg)
# convert image into one sample
sample = expand_dims(scaled_image, 0)
# make prediction
yhat = model.detect(sample, verbose=0)
# extract results for first sample
r = yhat[0]
# calculate statistics, including AP
AP, _, _, _ = compute_ap(gt_bbox, gt_class_id, gt_mask, r["rois"], r["class_ids"], r["scores"], r['masks'])
# store
APs.append(AP)
# calculate the mean AP across all images
mAP = mean(APs)
return mAP
# evaluate model on training dataset
train_mAP = evaluate_model(train_set, model, cfg)
print("Train mAP: %.3f" % train_mAP)
# evaluate model on test dataset
test_mAP = evaluate_model(test_set, model, cfg)
print("Test mAP: %.3f" % test_mAP)
###Output
Train mAP: 0.920
Test mAP: 0.974
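###Markdown
To make the average precision (AP) idea described above concrete, here is a simplified sketch of computing AP from a hypothetical precision-recall curve (an added illustration only, not the exact mrcnn compute_ap() implementation):
###Code
import numpy as np
# hypothetical recall/precision values as more predictions are made
recall = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
precision = np.array([1.0, 1.0, 0.8, 0.6, 0.55, 0.5])
# take the maximum precision to the right of each point (monotone envelope)
precision = np.maximum.accumulate(precision[::-1])[::-1]
# AP is the area under the precision-recall curve
ap = np.sum((recall[1:] - recall[:-1]) * precision[1:])
print("AP: %.3f" % ap)
###Output
_____no_output_____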
###Markdown
Prediction on New dataset
###Code
# plot a number of photos with ground truth and predictions
def plot_actual_vs_predicted(dataset, model, cfg, n_images=5):
# load image and mask
for i in range(n_images):
# load the image and mask
image = dataset.load_image(i)
mask, _ = dataset.load_mask(i)
# convert pixel values (e.g. center)
scaled_image = mold_image(image, cfg)
# convert image into one sample
sample = expand_dims(scaled_image, 0)
# make prediction
yhat = model.detect(sample, verbose=0)[0]
# define subplot
plt.subplot(n_images, 2, i*2+1)
# plot raw pixel data
plt.imshow(image)
plt.title('Actual')
# plot masks
for j in range(mask.shape[2]):
plt.imshow(mask[:, :, j], cmap='gray', alpha=0.3)
# get the context for drawing boxes
plt.subplot(n_images, 2, i*2+2)
# plot raw pixel data
plt.imshow(image)
plt.title('Predicted')
ax = plt.gca()
# plot each box
for box in yhat['rois']:
# get coordinates
y1, x1, y2, x2 = box
# calculate width and height of the box
width, height = x2 - x1, y2 - y1
# create the shape
rect = plt.Rectangle((x1, y1), width, height, fill=False, color='red')
# draw the box
ax.add_patch(rect)
# show the figure
plt.show()
# plot predictions for train dataset
plot_actual_vs_predicted(train_set, model, cfg)
# plot predictions for test dataset
plot_actual_vs_predicted(test_set, model, cfg)
###Output
_____no_output_____ |
Unsupervised ML/Clustering (Iris Dataset).ipynb | ###Markdown
Finding the optimum number of Clusters using Unsupervised ML (Iris Dataset) by Krishna Kumar Importing the libraries and the dataset
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.cluster import KMeans
os.getcwd()
os.chdir("C:\\Users\\HP\\Desktop\\MY PORTFOLIO\\Internships\\The Sparks Foundation\\Task 2")
df=pd.read_csv("Iris.csv")
df
df.describe()
# Dropping the Id column
df.drop('Id', axis='columns', inplace=True)
df.corr()
###Output
_____no_output_____
###Markdown
A High positive correlation can be seen among the SepalLengthCm, PetalLengthCm and PetalWidthCm variables
###Code
sns.pairplot(df)
num = df.iloc[:,[0,1,2,3]].values
num
###Output
_____no_output_____
###Markdown
Finding the optimum number of Clusters through Elbow Point
###Code
# Holds the Sum of Squared Error values for each k
sse = []
for i in range(1, 10):
kmeans = KMeans(n_clusters = i, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
kmeans.fit(num)
sse.append(kmeans.inertia_)
a=1
for x in range(0,9):
print("The SSE is", sse[x],"when number of clusters is", a)
a=a+1
###Output
The SSE is 680.8243999999996 when number of clusters is 1
The SSE is 152.36870647733915 when number of clusters is 2
The SSE is 78.94084142614601 when number of clusters is 3
The SSE is 57.34540931571815 when number of clusters is 4
The SSE is 46.535582051282034 when number of clusters is 5
The SSE is 38.93873974358975 when number of clusters is 6
The SSE is 34.190687924796634 when number of clusters is 7
The SSE is 29.90537429982511 when number of clusters is 8
The SSE is 27.927882157034986 when number of clusters is 9
###Markdown
We can observe that the SSE was least when there were 9 clusters. But we need to find the minimum optimum number of clusters. We can determine that by looking at the SSE values. The point where the decrease in SSE values stops being drastic (when there are 3 clusters in the above case), can be considered as the optimum number of clusters. This can also be shown through the Elbow plot.
###Code
#Using the 'Elbow Method' to know the optimum number of clusters
plt.plot(range(1, 10), sse)
plt.title('Elbow Plot')
plt.xlabel('Number of Clusters')
plt.ylabel('sse') # sum of squared errors
plt.show()
###Output
_____no_output_____
###Markdown
The Elbow plot also confirms that three clusters is the point where the SSE stops diminishing considerably. KMeans on the Dataset
###Code
kmeans = KMeans(n_clusters = 3, init = 'k-means++',max_iter = 300, n_init = 10, random_state = 0)
y_kmeans = kmeans.fit_predict(num)
y_kmeans
y_kmeans == 0,0
###Output
_____no_output_____
###Markdown
Plotting the Scatter Graph
###Code
plt.scatter(num[y_kmeans == 0, 0], num[y_kmeans == 0, 1], s = 50, c = 'orange', label = 'Iris-setosa')
plt.scatter(num[y_kmeans == 1, 0], num[y_kmeans == 1, 1], s = 50, c = 'blue', label = 'Iris-versicolour')
plt.scatter(num[y_kmeans == 2, 0], num[y_kmeans == 2, 1], s = 50, c = 'black', label = 'Iris-virginica')
# Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1], s = 100, c = 'yellow', label = 'Centroids')
plt.legend()
###Output
_____no_output_____ |
assignments/PythonAdvanceTheory/Python_Advance_12.ipynb | ###Markdown
Assignment_12 Q1. Does assigning a value to a string's indexed character violate Python's string immutability? Yes. Because strings are immutable, we cannot change the characters in a string by assigning new values to its indexes; attempting to do so raises a TypeError. Q2. Does using the += operator to concatenate strings violate Python's string immutability? Why or why not? No. When we concatenate strings with +=, we assign a new string object to the same variable. We are not altering the string in place based on its indexes, so it does not violate Python's string immutability (a short demonstration appears after the next cell). Q3. In Python, how many different ways are there to index a character? There are two methods for this, find() and index(); both return the lowest index at which the character is found. Q4. What is the relationship between indexing and slicing? Indexing is a way to access a single element of a string by its position or index. Slicing accesses only part of a string rather than the full string, and it is done by using indexes.
###Code
s = 'ineuron'
print(s[1:4])
print(s[-1:2:-1])
###Output
neu
noru
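###Markdown
A quick demonstration of Q1 and Q2 (an added sketch): item assignment on a string raises a TypeError, while += simply rebinds the variable to a brand-new string object.
###Code
s = 'ineuron'
try:
    s[0] = 'I'   # item assignment is not allowed on str
except TypeError as err:
    print(err)
print(id(s))
s += '!'         # creates a new string object and rebinds s
print(id(s), s)
###Output
_____no_output_____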
###Markdown
Q5. What is an indexed character's exact data type? What is the data form of a slicing-generated substring? In both cases, it is the 'str' type. Q6. What is the relationship between string and character "types" in Python? In Python, there is no char data type; even a single character enclosed in quotes is considered a str. Q7. Identify at least two operators and one method that allow you to combine one or more smaller strings to create a larger string. "+" will concatenate two strings and "*" will repeat the string the specified number of times. We can also use the join() method to build a larger string. Q8. What is the benefit of first checking the target string with in or not in before using the index method to find a substring? Normally, when we use the index() method, it returns the starting index of a substring if the substring is present in the string. The major concern is that if the substring is not present, index() will raise a ValueError. To avoid this, we can first check the target string with in or not in before using the index method to find a substring.
###Code
print("Hello!" + 'How are you?')
print("Rain" * 4)
l = ['How', 'are', 'you?']
s = ' '.join(l)
print(s)
###Output
Hello!How are you?
RainRainRainRain
How are you?
###Markdown
Q8. What is the benefit of first checking the target string with in or not in before using the index method to find a substring?Normally, when we use index() method it will return the starting index of a substring, if substring is present in a string. But the major concern is if a substring is not present in a string, index() will return a ValueError. To avoid this, we can first check the target string with in or not in before using the index method to find a substring.
###Code
s = 'abcdefghijk'
s1 = 'xyz'
if s1 in s:
print(s.index(s1))
s.index(s1)
###Output
_____no_output_____ |
Introduction_to_Pandas.ipynb | ###Markdown
 Data Analysis Numpy Differences between numpy arrays and Python lists. 1. List can contains multiple datatype, while numpy array contains only one
###Code
lst = [1, 2, 'Cat']
lst
import numpy as np
np.array([1,2,'Cat'])
###Output
_____no_output_____
###Markdown
2. Numpy array can be broadcasted (elementwise add,subtract,multiplication..) while list can't
###Code
SIZE = 10
# Python List
lst = list(range(SIZE))
print(lst + [1])
# Numpy Array
import numpy as np
a = np.arange(SIZE)
print(a + [1])
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1]
[ 1 2 3 4 5 6 7 8 9 10]
###Markdown
Pandas
###Code
# Import library
import pandas as pd
###Output
_____no_output_____
###Markdown
Series A Series can be understood as a one-dimensional array with flexible indices. Flexible indices means that the indices do not need to be **numbers**.
###Code
# Create a Pandas Series
name = ['cat', 'dog', 'pig']
age = [1, 1, 4]
se = pd.Series(data = age, index = name)
print(se)
###Output
cat 1
dog 1
pig 4
dtype: int64
###Markdown
Data Selection
###Code
se.index
# Get the index of Series / DataFrame
se.index[0]
se
# flexible indexing
implicit_index = [0,1,2]
explicit_index = ['cat', 'dog', 'pig']
actual_data = [1,1,4]
se
# Indexing
# Explicit
# print(se['pig'])
# Implicit
# print(se[2])
# # Slicing
# print(se['dog':])
# # Fancy
# print(se[['cat', 'pig']])
# # Masking / Filtering
# se == 4
print(se[se == 4])
se
# loc vs iloc
# loc -> explicit indexing
# print(se.loc['pig'])
# iloc -> implicit indexing
print(se.iloc[2])
###Output
4
###Markdown
**TL;DR:**

| Attributes | List | np.array | pd.Series |
|---|---|---|---|
| flexible index? | no | no | **yes** |
| can contain different datatypes? | **yes** | no | no |

DataFrame A DataFrame can be understood as a two-dimensional array with both flexible row indices and flexible column names.
###Code
dct = {'Dog': 1, 'Cat':10, 'Dog': 9}
dct['Dog']
# Create a Pandas DataFrame
# From dictionary of lists
name = ['Uku', 'Lele', 'Chuon_Chuon', 'Tut', 'Tit']
age = [1, 1, 4, 5, 2]
breed = ['Flerken', 'Catto', 'Poddle', 'Pomeranian', 'Pomeranian']
servant = ['Minh Anh', 'Minh Anh', 'Charles Lee', 'Natalie', 'Natalie']
cuteness = ['Very cute', 'Also very cute', 'Cute but not as cute', 'Unbelievably cute', 'Annoyingly cute']
data = {'Age': age,
'Breed': breed,
'Servant': servant,
'Cuteness': cuteness}
df = pd.DataFrame(data = data, index = name)
df
# Data Selection: 2 ways to get a value out of a DF
# 1) Selecting a Series (column)
df['Breed']
# by implicit indexing
# df['Breed'][2]
# # by explicit indexing
# df['Breed']['Chuon_Chuon']
df
# 2) loc & iloc. Always remember, row first, col second
# # loc
# df.loc['Chuon_Chuon','Breed']
# iloc
# df.iloc[2, 1]
# # Masking / Filtering
df['Servant'] == 'Natalie' # return a SERIES of booleans
df[df['Servant'] == 'Natalie'] # select the df according to the booleans series
df['Servant'] == 'Natalie'
###Output
_____no_output_____
###Markdown
What you should take away from this notebook:
1. Understand the main differences between np.array, List and pd.Series
2. Understand flexible indexing
3. Understand how to select data from List, np.array, pd.Series
4. Understand how to create dataframe, series
5. Distinguish between **loc** (explicit index select) and **iloc** (implicit index select)
###Code
###Output
_____no_output_____
###Markdown
Introduction to Pandas **pandas** is a Python package providing fast, flexible, and expressive data structures designed to work with panel data. pandas is well suited for tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet. **Key Features**:
- Easy handling of **missing data**
- Automatic and explicit **data alignment**
- Intelligent label-based **slicing, indexing and subsetting** of large data sets
- Powerful, flexible **group by functionality** to perform split-apply-combine operations on data sets
- Robust **IO Tools** for loading data from flat files, Excel files, databases etc.
###Code
from IPython.core.display import HTML
HTML("<iframe src=http://pandas.pydata.org width=800 height=350></iframe>")
###Output
_____no_output_____
###Markdown
- Before we explore the package pandas, let's import the pandas package. By convention, we use pd to refer to pandas.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
SeriesA Series is a single vector of data (like a Numpy array) with an *index* that labels each element in the vector.
###Code
counts = pd.Series([223, 43, 53, 24, 43])
counts
type(counts)
###Output
_____no_output_____
###Markdown
- If an *index* is not specified, a default sequence of integers is assigned as index. - We can access the values like an array
###Code
counts[0]
counts[1:4]
###Output
_____no_output_____
###Markdown
- You can get the array representation and index object of the *Series* via its values and index attributes, respectively.
###Code
counts.values
counts.index
###Output
_____no_output_____
###Markdown
- We can assign meaningful labels to the index, if they are available:
###Code
fruit = pd.Series([223, 43, 53, 24, 43],
index=['apple', 'orange', 'banana', 'pears', 'lemon'])
fruit
fruit.index
###Output
_____no_output_____
###Markdown
- These labels can be used to refer to the values in the Series.
###Code
fruit['apple']
fruit[['apple', 'lemon']]
###Output
_____no_output_____
###Markdown
- We can give both the array of values and the index meaningful labels themselves:
###Code
fruit.name = 'counts'
fruit.index.name = 'fruit'
fruit
###Output
_____no_output_____
###Markdown
- Operations can be applied to Series without losing the data structure.- Use bool array to filter Series
###Code
fruit > 50
fruit[fruit > 50]
###Output
_____no_output_____
###Markdown
- Critically, the labels are used to align data when used in operations with other Series objects.
###Code
fruit2 = pd.Series([11, 12, 13, 14, 15],
index=fruit.index)
fruit2
fruit2 = fruit2.drop('apple')
fruit2
fruit2['grape'] = 18
fruit2
fruit3 = fruit + fruit2
fruit3
###Output
_____no_output_____
###Markdown
- Contrast this with arrays, where arrays of the same length combine values element-wise; adding Series combines values with the same label in the resulting series. - Notice that the missing values were propagated by addition.
###Code
fruit3.dropna()
fruit3
fruit3.isnull()
###Output
_____no_output_____
###Markdown
DataFrame A DataFrame is a tabular data structure, encapsulating multiple series like columns in a spreadsheet. Each column can be a different value type (numeric, string, boolean etc).
###Code
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
'year':[2000, 2001, 2002, 2001, 2003],
'pop':[1.5, 1.7, 3.6, 2.4, 2.9]}
df = pd.DataFrame(data)
df
len(df) # Get the number of rows in the dataframe
df.shape # Get the (rows, cols) of the dataframe
df.T
df.columns # get the index of columns
df.index # get the index of the row
df.dtypes
df.describe()
###Output
_____no_output_____
###Markdown
- There are three basic ways to access the data in the dataframe:
  1. use DataFrame[] to access data quickly
  2. use DataFrame.iloc[row, col] — integer position based selection
  3. use DataFrame.loc[row, col] — label based selection
###Code
df
df['state'] # indexing by label
df[['state', 'year']] # indexing by a list of label
df[:2] # numpy-style indexing
df.iloc[0, 0]
df.iloc[0, :]
df.iloc[:, 1]
df.iloc[:2, 1:3]
df.loc[:, 'state']
df.loc[:, ['state', 'year']]
###Output
_____no_output_____
###Markdown
- Add new column and delete column
###Code
df['debt'] = np.random.randn(len(df))
df['rain'] = np.abs(np.random.randn(len(df)))
df
df = df.drop('debt', axis=1)
df
row1 = pd.Series([4.5, 'Nevada', 2005, 2.56], index=df.columns)
df.append(row1,ignore_index=True)
df.drop([0, 1])
###Output
_____no_output_____
###Markdown
- data filtering
###Code
df['pop'] < 2
df
df.loc[df['pop'] < 2, 'pop'] = 2
df
df['year'] == 2001
(df['pop'] > 3) | (df['year'] == 2001)
df.loc[(df['pop'] > 3) | (df['year'] == 2001), 'pop'] = 3
df
###Output
_____no_output_____
###Markdown
- Sorting index
###Code
df.sort_index(ascending=False)
df.sort_index(axis=1, ascending=False)
###Output
_____no_output_____
###Markdown
Summarizing and Computing Descriptive Statistics Built-in functions calculate values over rows or columns.
###Code
df
df.loc[:, ['pop', 'rain']].sum()
df.loc[:,['pop', 'rain']].mean()
df.loc[:, ['pop', 'rain']].var()
df.loc[:, ['pop', 'rain']].cumsum()
###Output
_____no_output_____
###Markdown
Apply functions to each column or row of a DataFrame
###Code
df
df.loc[:, ['pop', 'rain']].apply(lambda x: x.max() - x.min()) # apply new functions to each row
###Output
_____no_output_____
###Markdown
Grouped and apply
###Code
df
df.groupby(df['state']).mean()
df.groupby(df['state'])[['pop', 'rain']].apply(lambda x: x.max() - x.min())
grouped = df.groupby(df['state'])
group_list = []
for name, group in grouped:
print(name)
print(group)
print('\n')
###Output
_____no_output_____
###Markdown
Set Hierarchical indexing
###Code
df
df_h = df.set_index(['state', 'year'])
df_h
df_h.index.is_unique
df_h.loc['Ohio', :].max() - df_h.loc['Ohio', :].min()
###Output
_____no_output_____
###Markdown
Import and Store Data - Read and write *csv* file.
###Code
df
df.to_csv('test_csv_file.csv',index=False)
%more test_csv_file.csv
df_csv = pd.read_csv('test_csv_file.csv')
df_csv
###Output
_____no_output_____
###Markdown
- Read and write *excel* file.
###Code
writer = pd.ExcelWriter('test_excel_file.xlsx')
df.to_excel(writer, 'sheet1', index=False)
writer.save()
df_excel = pd.read_excel('test_excel_file.xlsx', sheetname='sheet1')
df_excel
pd.read_table??
###Output
_____no_output_____
###Markdown
Filtering out Missing Data You have a number of options for filtering out missing data.
###Code
df = pd.DataFrame([[1, 6.5, 3.], [1., np.nan, np.nan],
[np.nan, np.nan, np.nan], [np.nan, 6.5, 3.]])
df
cleaned = df.dropna() # delete rows with Nan value
cleaned
df.dropna(how='all') # delete rows with all Nan value
df.dropna(thresh=2) # keep the rows with at least thresh non-Nan value
df.fillna(0) # fill Nan with a constant
###Output
_____no_output_____
###Markdown
Plotting in DataFrame
###Code
variables = pd.DataFrame({'normal': np.random.normal(size=100),
'gamma': np.random.gamma(1, size=100),
'poisson': np.random.poisson(size=100)})
variables.head()
variables.shape
variables.cumsum().plot()
variables.cumsum().plot(subplots=True)
###Output
_____no_output_____
###Markdown
**Introduction to Pandas and advanced data science** Contents: Import Pandas Series · Working with *apply* function in *pandas Series* · Working with dictionaries Pandas Series
###Code
import pandas as pd
s = pd.Series([1,2,3,4,5])
s
print(type(s))
print(s.index)
print(s.values)
fruits = ['apples','bananas','cherries','pears']
s1 = pd.Series([20,10,30,40] , index = fruits)
s2 = pd.Series([8,8,8,8], index = fruits)
print(s1+s2)
print(s1['apples'])
###Output
20
###Markdown
working with Apply function in Pandas Series
###Code
import numpy as np
s
s.apply(np.sqrt)
s.apply(lambda x:x+25 if(x>2) else x+10)
###Output
_____no_output_____
###Markdown
working with series object using dictionaries
###Code
cities = {"London": 8615246,
"Berlin": 3562166,
"Madrid": 3165235,
"Rome": 2874038,
"Paris": 2273305,
"Vienna": 1805681,
"Bucharest": 1803425,
"Hamburg": 1760433,
"Budapest": 1754000,
"Warsaw": 1740119,
"Barcelona": 1602386,
"Munich": 1493900,
"Milan": 1350680}
city_series = pd.Series(cities)
print(city_series)
###Output
London 8615246
Berlin 3562166
Madrid 3165235
Rome 2874038
Paris 2273305
Vienna 1805681
Bucharest 1803425
Hamburg 1760433
Budapest 1754000
Warsaw 1740119
Barcelona 1602386
Munich 1493900
Milan 1350680
dtype: int64
###Markdown
Create a DataFrame using a Series
###Code
city_df = pd.DataFrame(city_series, columns = ['population'])
city_df.index
city_df.isnull()
city_index_new = ['London', 'Berlin', 'Madrid','Delhi', 'Rome', 'Paris', 'Vienna', 'Bucharest',
'Hamburg', 'Budapest', 'Warsaw','Tokyo', 'Barcelona', 'Munich', 'Milan']
city_df = pd.DataFrame(city_series, columns = ['population'], index = city_index_new)
city_df
city_df['population'].isnull()
city_df['population'].dropna()
city_df['population'].fillna(method='ffill')
cities_new = {"name": ["London", "Berlin", "Madrid", "Rome",
"Paris", "Vienna", "Bucharest", "Hamburg",
"Budapest", "Warsaw", "Barcelona",
"Munich", "Milan"],
"population": [8615246, 3562166, 3165235, 2874038,
2273305, 1805681, 1803425, 1760433,
1754000, 1740119, 1602386, 1493900,
1350680],
"country": ["England", "Germany", "Spain", "Italy",
"France", "Austria", "Romania",
"Germany", "Hungary", "Poland", "Spain",
"Germany", "Italy"]}
city_frame = pd.DataFrame(cities_new, columns=['name','population'], index=cities_new['country'])
city_frame
###Output
_____no_output_____
###Markdown
Row Operations and Accessing Rows using Index
###Code
print(city_frame.loc['Italy'])
print(city_frame[city_frame['population']>2000000])
print(city_frame.iloc[[3,2,4,5,-1]])
###Output
name population
Italy Rome 2874038
Spain Madrid 3165235
France Paris 2273305
Austria Vienna 1805681
Italy Milan 1350680
###Markdown
sum and cumulative sum using pandas
###Code
years = range(2014,2019)
cities = ["Zürich", "Freiburg", "München", "Konstanz", "Saarbrücken"]
shops = pd.DataFrame(index=years)
for city in cities:
shops.insert(loc=len(shops.columns),
column = city,
value=(np.random.uniform(0.7,1,(5,))*1000).round(2)
)
print(shops)
shops.sum()
shops.sum(axis=1)
shops.sum(axis=0)
shops.cumsum()
shops.cumsum(axis=0)
shops.cumsum(axis=1)
# area in square km:
area = [1572, 891.85, 605.77, 1285,
105.4, 414.6, 228, 755,
525.2, 517, 101.9, 310.4,
181.8]
# area could have been designed as a list, a Series, an array or a scalar
city_frame["area"] = area
city_frame.head()
city_frame.sort_values(by="area")
city_frame.T
###Output
_____no_output_____
###Markdown
to_replace
###Code
s= pd.Series([10,20,30,40,50])
s
f = lambda x: x if x>20 else 999
s.apply(f)
###Output
_____no_output_____
###Markdown
Appending a new column
###Code
f1 = lambda x: 'Metro' if x> 2500000 else 'Town'
city_type = city_frame['population'].apply(f1)
city_type
city_frame['city_type'] = city_type
city_frame
city_frame.replace("Metro", "Metroplitan", inplace=True)
city_frame
import pandas as pd
import numpy as np
import random
###Output
_____no_output_____
###Markdown
Pandas Groupby
###Code
nvalues = 30
values = np.random.randint(1,20,(nvalues))
values
fruits = ["bananas", "oranges", "apples", "clementines", "cherries", "pears"]
fruits_index = np.random.choice(fruits, (nvalues,))
s = pd.Series(values, index=fruits_index)
s
grouped = s.groupby(s.index)
grouped
for f , s in grouped:
print(s)
beverages = pd.DataFrame({'Name': ['Robert', 'Melinda', 'Brenda',
'Samantha', 'Melinda', 'Robert',
'Melinda', 'Brenda', 'Samantha'],
'Coffee': [3, 0, 2, 2, 0, 2, 0, 1, 3],
'Tea': [0, 4, 2, 0, 3, 0, 3, 2, 0]})
beverages
beverages['Coffee'].sum()
res=beverages.groupby(['Name']).sum()
print(type(res))
res
names = ('Ortwin', 'Mara', 'Siegrun', 'Sylvester', 'Metin', 'Adeline', 'Utz', 'Susan', 'Gisbert', 'Senol')
data = {'Monday': np.array([0, 9, 2, 3, 7, 3, 9, 2, 4, 9]),
'Tuesday': np.array([2, 6, 3, 3, 5, 5, 7, 7, 1, 0]),
'Wednesday': np.array([6, 1, 1, 9, 4, 0, 8, 6, 8, 8]),
'Thursday': np.array([1, 8, 6, 9, 9, 4, 1, 7, 3, 2]),
'Friday': np.array([3, 5, 6, 6, 5, 2, 2, 4, 6, 5]),
'Saturday': np.array([8, 4, 8, 2, 3, 9, 3, 4, 9, 7]),
'Sunday': np.array([0, 8, 7, 8, 9, 7, 2, 0, 5, 2])}
data_df = pd.DataFrame(data, index=names)
data_df
list(data_df)
data_df['class'] = ['a','b','c','d','a','b','c','d','a','b']
data_df
res = data_df.groupby('class').sum()
res
list(res)
res.index
###Output
_____no_output_____
###Markdown
dealing with NaN
###Code
import pandas as pd
temp_df = pd.read_csv("temperatures.csv",sep=";",decimal=',')
temp_df.set_index('time',inplace=True)
temp_df.shape
temp_df.head(5)
average_temp = temp_df.mean(axis=1)
average_temp[0:5]
temp_df=temp_df.assign(temparature=average_temp)
temp_df.head()
###Output
_____no_output_____
###Markdown
create a NaN array
###Code
import numpy as np
a = np.random.randint(1,30,(4,2))
a
rand_df = pd.DataFrame(a, columns=['x','y'])
rand_df
f = lambda z: 0 if z > 20 else z
for id in ['x','y']:
rand_df[id]= rand_df[id].map(f)
rand_df
#create a similar shaped NaN data frame
random_df = pd.DataFrame(np.random.random(size=temp_df.shape),
columns = temp_df.columns.values,
index=temp_df.index)
random_df.head()
nan_df = pd.DataFrame(np.nan,
columns = temp_df.columns.values,
index=temp_df.index)
nan_df.head()
df_bool = random_df<0.9
df_bool.head()
disturbed_data = temp_df.where(df_bool, nan_df)
print(disturbed_data.shape)
disturbed_data.head()
final_temp_df= disturbed_data.dropna()
final_temp_df.shape
final_temp_df.head()
###Output
_____no_output_____
###Markdown
multi dimensional indexing with pandas
###Code
cities = ["Vienna", "Vienna", "Vienna",
"Hamburg", "Hamburg", "Hamburg",
"Berlin", "Berlin", "Berlin",
"Zürich", "Zürich", "Zürich"]
index = [cities, ["country", "area", "population",
"country", "area", "population",
"country", "area", "population",
"country", "area", "population"]]
data = ["Austria", 414.60, 1805681,
"Germany", 755.00, 1760433,
"Germany", 891.85, 3562166,
"Switzerland", 87.88, 378884]
city_series = pd.Series(data, index=index)
print(city_series)
city_series[:,'area']
city_series.swaplevel()
###Output
_____no_output_____
###Markdown
DESU IA4 HEALTH Info-PROF, Introduction to Python programming for health data Session 2: Introduction to PANDAS Learning objectives: 1. Learning the different data types in pandas: DataFrame and Series 2. Importing and exporting data into a dataframe 3. Subsetting dataframes 4. Doing transformations with dataframes What is Pandas? Pandas is a Python library used for working with data sets. It has functions for analyzing, cleaning, exploring, and manipulating data. Pandas on-line documentation: https://pandas.pydata.org/docs/reference/index.html
###Code
#Importing Pandas and verifying the version
import pandas as pd # as allows to create an alias
import numpy as np
print(pd.__version__) #allow to verify the pandas function
###Output
1.1.5
###Markdown
Data types in Pandas: 1. **Series:** a one-dimensional array holding data of any type. 2. **DataFrames:** multidimensional data tables holding data of any type. We can think of the Series as the columns of a dataframe, whereas the whole table is the dataframe.
###Code
# Example series with labels
a = [1, 7, 2]
myvar = pd.Series(a, index = ["x", "y", "z"])
print(myvar)
###Output
x 1
y 7
z 2
dtype: int64
###Markdown
Dataframes Dataframes are multidimensional matrices that can store data of different types.
###Code
data = {
"calories": [420, 380, 390],
"duration": [50, 40, 45],
"category" : ['a','b','c']
}
df = pd.DataFrame(data, index = ["day1", "day2", "day3"])
print(df)
students = [ ('jack', 34, 'Sydeny') ,
('Riti', 30, 'Delhi' ) ,
('Aadi', 16, 'New York') ]
# Create a DataFrame object
dfObj = pd.DataFrame(students, columns = ['Name' , 'Age', 'City'], index=['a', 'b', 'c'])
###Output
_____no_output_____
###Markdown
**Exercise:** Create a dataframe that stores in one row the person ID, height, weight, sex and birthdate. Add at least three examples (a possible solution sketch is shown after the next cell). [DataFrame attributes](https://pandas.pydata.org/docs/reference/frame.html) Exercise: For the dataframe previously created, go to the dataframe attributes and show the following information: 1. Number of elements 2. Name of the columns 3. Name of the rows 4. Number of rows and columns 5. Show the first rows of the dataframe. Access the elements of a dataframe: Access by columns:
###Code
df['calories']
###Output
_____no_output_____
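###Markdown
A possible solution sketch for the exercise above (the column names and values are illustrative assumptions, not part of the original notebook):
###Code
people = pd.DataFrame({
    'ID': [1, 2, 3],
    'height_cm': [170, 165, 180],
    'weight_kg': [68, 55, 82],
    'sex': ['F', 'M', 'F'],
    'birthdate': pd.to_datetime(['1990-01-15', '1985-06-30', '2000-12-01'])
})
print(people.size)     # number of elements
print(people.columns)  # name of the columns
print(people.index)    # name of the rows
print(people.shape)    # number of rows and columns
people.head()          # first rows of the dataframe
###Output
_____no_output_____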
###Markdown
DataFrame.loc | Select Columns & Rows by Name DataFrame provides the label-based indexer loc for selecting columns and rows by name: dataFrame.loc[ROWS RANGE, COLUMNS RANGE]
###Code
df.loc['day1',:]
df.loc[:,'calories']
###Output
_____no_output_____
###Markdown
DataFrame.iloc | Select Column & Row Index Positions DataFrame provides the indexer iloc for accessing columns and rows by integer index positions, i.e. *dataFrame.iloc[ROWS INDEX RANGE, COLUMNS INDEX RANGE]*. It selects the columns and rows from the DataFrame by the index positions specified in the range. If ':' is given in the rows or columns index range then all entries will be included for the corresponding rows or columns.
###Code
df.iloc[:,[0,2]]
###Output
_____no_output_____
###Markdown
Variable conversion :
###Code
df_petit = pd.DataFrame({ 'Country': ['France','Spain','Germany', 'Spain','Germany', 'France', 'Italy'], 'Age': [50,60,40,20,40,30, 20] })
df_petit
###Output
_____no_output_____
###Markdown
Label encoding : Label Encoding refers to converting the labels into a numeric form so as to convert them into the machine-readable form. Machine learning algorithms can then decide in a better way how those labels must be operated. It is an important pre-processing step for the structured dataset in supervised learning.
###Code
df_petit['Country_cat'] = df_petit['Country'].astype('category').cat.codes
df_petit
###Output
_____no_output_____
###Markdown
One hot encoding
###Code
help(pd.get_dummies)
df_petit = pd.get_dummies(df_petit,prefix=['Country'], columns = ['Country'], drop_first=True)
df_petit.head()
###Output
_____no_output_____
###Markdown
**Exercise :** Create a dataframe with 3 columns with the characteristics : ID, sex (M or F), frailty degree (FB, M, F). Convert the categorical variables using label encoding and one-hot-encoding. Dealing with dates https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html
###Code
#Library to deeal with dates
import datetime
dti = pd.to_datetime(
["1/1/2018", np.datetime64("2018-01-01"), datetime.datetime(2018, 1, 1)]
)
dti
df = pd.DataFrame({'date': ['3/10/2000', '3/11/2000', '3/12/2000'],
'value': [2, 3, 4]})
df['date'] = pd.to_datetime(df['date'])
df
###Output
_____no_output_____
###Markdown
Customize the date format
###Code
df = pd.DataFrame({'date': ['2016-6-10 20:30:0',
'2016-7-1 19:45:30',
'2013-10-12 4:5:1'],
'value': [2, 3, 4]})
df['date'] = pd.to_datetime(df['date'], format="%Y-%d-%m %H:%M:%S")
df
###Output
_____no_output_____
###Markdown
**Exercise:** Check the Pandas documentation and create a dataframe with a column of dates, trying different date formats. Access date elements — the dt. accessor: The dt. accessor is an object that allows access to the different date and time elements in a datetime object. https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.html
###Code
df['date_only'] = df['date'].dt.date
df['time_only'] = df['date'].dt.time
df['hour_only'] = df['date'].dt.hour
df
###Output
_____no_output_____
###Markdown
Importing datasetshttps://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
###Code
df = pd.read_csv("https://raw.githubusercontent.com/rakelup/EPICLIN2021/master/diabetes.csv", sep=",",error_bad_lines=False)
df.head()
###Output
_____no_output_____
###Markdown
Data overview
###Code
# Data overview
print ('Rows : ', df.shape[0])
print ('Coloumns : ', df.shape[1])
print ('\nFeatures : \n', df.columns.tolist())
print ('\nNumber of Missing values: ', df.isnull().sum().values.sum())
print ('\nNumber of unique values : \n', df.nunique())
df.describe()
df.columns
###Output
_____no_output_____
###Markdown
Cleaning data in a dataframe: 1. Dealing with missing values 2. Data in wrong format 3. Wrong data 4. Duplicates. Dealing with missing values: Handling missing values is an essential part of the data cleaning and preparation process, since almost all data in real life comes with some missing values. Check for missing values
###Code
df.info()
df.isnull().sum()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 768 entries, 0 to 767
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Pregnancies 768 non-null int64
1 Glucose 768 non-null int64
2 BloodPressure 768 non-null int64
3 SkinThickness 768 non-null int64
4 Insulin 768 non-null int64
5 BMI 768 non-null float64
6 DiabetesPedigreeFunction 768 non-null float64
7 Age 768 non-null int64
8 Outcome 768 non-null int64
dtypes: float64(2), int64(7)
memory usage: 54.1 KB
###Markdown
Let's create a dataframe with missing values.
###Code
df2 = df
df2.Glucose.replace(99, np.nan, inplace=True)
df2.BloodPressure.replace(74, np.nan, inplace=True)
print ('\nNumber of Missing values: ', df2.isnull().sum())
print ('\nTotal number of missing values : ', df2.isnull().sum().values.sum())
###Output
Valeurs manquantes: Pregnancies 0
Glucose 17
BloodPressure 52
SkinThickness 0
Insulin 0
BMI 0
DiabetesPedigreeFunction 0
Age 0
Outcome 0
dtype: int64
Valeurs manquantes total: 69
###Markdown
First strategy : Removing the whole row that contains a missing value
###Code
# Removing the whole row
df3 = df2.dropna()
print ('\nValeurs manquantes: ', df3.isnull().sum())
print ('\nValeurs manquantes total: ', df3.isnull().sum().values.sum())
##Replace the missing values
df2.Glucose.replace(np.nan, df['Glucose'].median(), inplace=True)
df2.BloodPressure.replace(np.nan, df['BloodPressure'].median(), inplace=True)
###Output
_____no_output_____
###Markdown
Sorting the datahttps://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_values.html
###Code
#Trier les données
b = df.sort_values('Pregnancies')
b.head()
###Output
_____no_output_____
###Markdown
**Exercise:** Sort the data in descending order according to the insulin level and store the data in a new dataframe. How would you store the result in the same dataframe? (A possible solution sketch follows the next cell.) Subsetting the data
###Code
df[df['BloodPressure'] >70].count() # Filtrage par valeur
df_court = df[['Insulin','Glucose']]
df_court.drop('Insulin', inplace= True, axis = 1)
df_court.head()
###Output
/usr/local/lib/python3.7/dist-packages/pandas/core/frame.py:4174: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
errors=errors,
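###Markdown
A possible solution sketch for the sorting exercise above (added for illustration):
###Code
# store the sorted result in a new dataframe
df_sorted = df.sort_values('Insulin', ascending=False)
df_sorted.head()
# to store the result in the same dataframe, sort in place
df.sort_values('Insulin', ascending=False, inplace=True)
###Output
_____no_output_____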
###Markdown
Statistics applied to dataframes DataFrame.aggregate(func=None, axis=0, *args, **kwargs) aggregates using one or more operations over the specified axis (an illustrative example is shown in the next cell). https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.aggregate.html
###Code
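# Illustrative example (added, not in the original notebook): aggregate several
# statistics over a few columns of the diabetes dataframe `df` loaded above.
df[['Glucose', 'BloodPressure', 'BMI']].aggregate(['min', 'max', 'mean'])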
###Output
_____no_output_____ |
Basics/00-Python Object and Data Structure Basics/09-Objects and Data Structures Assessment Test.ipynb | ###Markdown
Objects and Data Structures Assessment Test Test your knowledge. **Answer the following questions** Write a brief description of all the following Object Types and Data Structures we've learned about: Numbers: numeric literals (int, float, etc.). Strings: ordered sequences of characters in a user-defined order. Lists: the most general version of a sequence in Python; mutable. Tuples: like lists, but immutable. Sets: contain only unique (non-repeated) elements. Dictionaries: hash tables that map keys to values. Numbers Write an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25. Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
###Code
(60 + (10 ** 2) / 4 * 7) - 134.75
###Output
_____no_output_____
###Markdown
Answer these 3 questions without typing code. Then type code to check your answer. What is the value of the expression 4 * (6 + 5) What is the value of the expression 4 * 6 + 5 What is the value of the expression 4 + 6 * 5
###Code
#44
#29
#34
print(4*(6+5))
print(4*6+5)
print(4+6*5)
###Output
44
29
34
###Markdown
What is the *type* of the result of the expression 3 + 1.5 + 4?Answer: float What would you use to find a number’s square root, as well as its square?
###Code
# Square root:
100 ** 0.5
# Square:
10 ** 2
#we can also accomplish such tasks by importing the math library and using .pow() fucntion
###Output
_____no_output_____
###Markdown
Strings Given the string 'hello' give an index command that returns 'e'. Enter your code in the cell below:
###Code
s = 'hello'
# Print out 'e' using indexing
print(s[1])
###Output
e
###Markdown
Reverse the string 'hello' using slicing:
###Code
s ='hello'
# Reverse the string using slicing
print(s[::-1])
###Output
olleh
###Markdown
Given the string hello, give two methods of producing the letter 'o' using indexing.
###Code
s ='hello'
# Print out the 'o'
# Method 1:
print(s[4])
# Method 2:
print(s[-1])
###Output
o
###Markdown
Lists Build this list [0,0,0] two separate ways.
###Code
# Method 1:
[0]*3
# Method 2:
list2 = [0,0,0]
list2
###Output
_____no_output_____
###Markdown
Reassign 'hello' in this nested list to say 'goodbye' instead:
###Code
list3 = [1,2,[3,4,'hello']]
list3[2][2] = 'goodbye'
###Output
_____no_output_____
###Markdown
Sort the list below:
###Code
list4 = [5,3,4,6,1]
sorted(list4)
###Output
_____no_output_____
###Markdown
Dictionaries Using keys and indexing, grab the 'hello' from the following dictionaries:
###Code
d = {'simple_key':'hello'}
# Grab 'hello'
d['simple_key']
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
d['k1']['k2']
# Getting a little tricker
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
#Grab hello
d['k1'][0]['nest_key'][1][0]
# This will be hard and annoying!
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}
#grabbing hello
d['k1'][2]['k2'][1]['tough'][2][0]
###Output
_____no_output_____
###Markdown
Can you sort a dictionary? Why or why not? **Answer: Nopes! Dictionaries are *mappings*, not sequences.** Tuples What is the major difference between tuples and lists? Tuples are immutable. How do you create a tuple? t = (1,2,3) Sets What is unique about a set? Only unique elements can be added. Use a set to find the unique values of the list below:
###Code
list5 = [1,2,2,33,4,4,11,22,3,3,2]
print(set(list5))
###Output
{1, 2, 33, 4, 3, 11, 22}
###Markdown
Booleans For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.

| Operator | Description | Example |
|---|---|---|
| == | If the values of two operands are equal, then the condition becomes true. | (a == b) is not true. |
| != | If the values of two operands are not equal, then the condition becomes true. | (a != b) is true. |
| > | If the value of the left operand is greater than the value of the right operand, then the condition becomes true. | (a > b) is not true. |
| < | If the value of the left operand is less than the value of the right operand, then the condition becomes true. | (a < b) is true. |
| >= | If the value of the left operand is greater than or equal to the value of the right operand, then the condition becomes true. | (a >= b) is not true. |
| <= | If the value of the left operand is less than or equal to the value of the right operand, then the condition becomes true. | (a <= b) is true. |

What will be the resulting Boolean of the following pieces of code (answer first, then check by typing it in!)
###Code
# Answer before running cell
2 > 3
# Answer before running cell
3 <= 2
# Answer before running cell
3 == 2.0
# Answer before running cell
3.0 == 3
# Answer before running cell
4**0.5 != 2
###Output
_____no_output_____
###Markdown
Final Question: What is the boolean output of the cell block below?
###Code
# two nested lists
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
# True or False?
l_one[2][0] >= l_two[2]['k1']
###Output
_____no_output_____ |
numpy-tutorial/basics.ipynb | ###Markdown
https://docs.scipy.org/doc/numpy/user/quickstart.html Quick start tutorial
###Code
import numpy as np
np.array
###Output
_____no_output_____
###Markdown
**Simple Syntax Example**
###Code
a = np.arange(20).reshape(4, 5)
###Output
_____no_output_____
###Markdown
- .arange(n) gives an array of n elements - .reshape(r, c) gives dimensions r x c
###Code
a
a.shape
###Output
_____no_output_____
###Markdown
- .shape returns the dimensions of the array
###Code
a.ndim
###Output
_____no_output_____
###Markdown
- .ndim number of axes
###Code
a.dtype.name
a.itemsize
###Output
_____no_output_____
###Markdown
- .itemsize returns the size of each element in bytes (number of bits / 8)
###Code
a.size
type(a)
b = np.array([1, 2, 3])
b
type(b)
###Output
_____no_output_____
###Markdown
**Array Creation Example**
###Code
floatArray = np.array([7.4, 6.4, 9.7])
type(floatArray)
floatArray.dtype
###Output
_____no_output_____
###Markdown
dtype = datatype
###Code
md = np.array([(1,2,3,4),(2,3,7,8)])
md
c = np.array( [ [1,2], [3,4] ], dtype=float )
c
c = c.astype(complex)  # cast to complex (astype returns a new array; reassigning .dtype would only reinterpret the raw buffer)
c
###Output
_____no_output_____
###Markdown
Important functions
- .zeros creates an array of zeros
- .ones creates an array full of ones
- .empty creates an array with initial content that is random and depends on the state of the memory.
###Code
zeros = np.zeros((10, 50))
ones = np.ones((2,6,7))
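# illustrative addition: np.empty allocates without initializing (contents depend on memory state)
empty = np.empty((2, 3))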
ones
ones.size
ones.itemsize
###Output
_____no_output_____
###Markdown
**Sequence**
###Code
np.arange(0, 10, .7)
np.linspace(0, 10, 20)
###Output
_____no_output_____
###Markdown
**Printing**
###Code
print(a)
a.reshape(10, 2)
###Output
_____no_output_____
###Markdown
**Basic Operations**
###Code
a1 = np.array( [2, 3, 4, 5] )
a2 = np.arange(4)
a1
a2
a1 - a2
a1*a2
20*np.sin(a1)
a<35
a1 * a2
a1 @ a2
a1.dot(a2)
a1+=a2
a1
a1*=a2
a1
a1.sum()
###Output
_____no_output_____
###Markdown
Arrays are upcast to the most precise type
###Code
cast1 = np.array([.2, .4, .6])
cast2 = np.arange(3)
cast1 + cast2
cast3 = cast1 + cast2
cast3.dtype
cast1.dtype
cast2.dtype.name
test = np.random.random((2,3))
test.sum()
test.min()
test
test.max()
test.sum(axis=0)
###Output
_____no_output_____
###Markdown
**sum each col** above "axis=0"
###Code
test.sum(axis=1)
###Output
_____no_output_____
###Markdown
**Sum each row above** axis=1 **Cumulative sum along each row**
###Code
test.cumsum(axis=1)
###Output
_____no_output_____ |
experiment/data_preprocess/Remove_Missing_Values.ipynb | ###Markdown
###Code
import pandas as pd
# get data
sample_data = pd.read_csv("https://raw.githubusercontent.com/jinchen1036/Product-Price-Prediction/main/data/sample_data.csv",sep=",")
# information about the data
sample_data.info()
# How many unique category_name
number_of_category_names = len(sample_data.category_name.unique())
print(f"The number of unique category name is: {number_of_category_names}")
# How many rows are missing category_name
number_of_missing_values = sample_data.category_name.isna().sum()
print(f"The number of missing value of category name column is: {number_of_missing_values}")
# Replace the missing values with the most frequent values present in each column
sample_data["category_name"] = sample_data["category_name"].fillna(sample_data["category_name"].mode().iloc[0])
sample_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000 entries, 0 to 999
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 train_id 1000 non-null int64
1 name 1000 non-null object
2 item_condition_id 1000 non-null int64
3 category_name 1000 non-null object
4 brand_name 561 non-null object
5 price 1000 non-null float64
6 shipping 1000 non-null int64
7 item_description 1000 non-null object
dtypes: float64(1), int64(3), object(4)
memory usage: 62.6+ KB
|
dissertation/.ipynb_checkpoints/tests_for_colloquium-checkpoint.ipynb | ###Markdown
This entire theory is built on the idea that everything is normalized as input into the brain, i.e. all values are between 0 and 1. This is necessary because the learning rule has an adaptive learning rate that is $\sigma^4$. If everything is normalized, the probability of $\sigma^2$ being greater than 1 is very low.
###Code
p = GMM([1.0], np.array([[0.5,0.05]]))
num_samples = 1000
beg = 0.0
end = 1.0
t = np.linspace(beg,end,num_samples)
num_neurons = len(p.pis)
colors = [np.random.rand(num_neurons,) for i in range(num_neurons)]
p_y = p(t)
p_max = p_y.max()
np.random.seed(110)
num_neurons = 1
network = Net(1,1,num_neurons, bias=0.0006, decay=[0.05], kernels=[[1,1]], locs=[[0,0]], sleep_cycle=2000)
samples, labels = p.sample(10000)
ys = []
lbls = []
colors = [np.random.rand(3,) for i in range(num_neurons)]
def f(i=0):
x = np.array(samples[i])
l = labels[i]
y = network(x.reshape(1,1,1))
ys.append(y)
c = 'b' if l else 'g'
lbls.append(c)
fig, ax = plt.subplots(figsize=(15,5))
ax.plot(t, p_y/p_max, c='r', lw=3, label='$p(x)$')
ax.plot([x,x],[0,p_max],label="$x\sim p(x)$", lw=4)
y = network(t.reshape(num_samples,1,1),update=0)
for j,yi in enumerate(y):
yj_max = y[j].max()
ax.plot(t, y[j]/yj_max, c=colors[j], lw=3, label="$q(x)$")
ax.set_ylim(0.,1.5)
ax.set_xlim(beg,end)
plt.savefig('for_colloquium/fig%03i.png'%(i))
plt.show()
interactive_plot = interactive(f, i=(0, 9999))
output = interactive_plot.children[-1]
output.layout.height = '450px'
interactive_plot
[n.weights for n in list(network.neurons.items())[0][1]]
[np.sqrt(n.bias) for n in list(network.neurons.items())[0][1]]
[n.pi for n in list(network.neurons.items())[0][1]]
###Output
_____no_output_____
###Markdown
I can assume $q(x)$ has two forms$$q(x) = \frac{1}{\sqrt{2 \pi \sigma^2}}exp\{-\frac{(x-\mu)^2}{2\sigma^2}\}$$or $$q(x) = exp\{-\frac{(x-\mu)^2}{\sigma^2}\}$$When I assume the second form and remove the extra $\sigma$ term from the learning equations it no longer converges smoothly. However, if I add an 'astrocyte' to normalize all of them periodically by averaging over the output it works again. Perhaps astrocytes 'normalizing' the neurons is the biological mechanism for keeping the output roughly normal.
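(A small illustrative sketch of the two forms, added here for clarity; mu and sigma are stand-in parameters:)
###Code
def q_normalized(x, mu, sigma):
    # first form: a proper Gaussian density that integrates to 1
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
def q_unnormalized(x, mu, sigma):
    # second form: no normalizing constant, so the peak value is always 1
    return np.exp(-(x - mu)**2 / sigma**2)
###Output
_____no_output_____
###Markdown
Only the first form integrates to one, which is one way to see why the second form needs the periodic 'astrocyte' normalization described above.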
###Code
def s(x):
return (1/(1+np.exp(-10*(x-0.25))))
x = np.linspace(0,1,100)
plt.plot(x,s(x))
plt.show()
###Output
_____no_output_____ |
experiments/tl_1v2/wisig-oracle.run1.limited/trials/14/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed Parameters These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
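For example, a parameterized run might look something like `papermill trial.ipynb output.ipynb -p lr 0.0001 -p n_epoch 50` (an illustrative command, not necessarily the exact invocation used for these experiments).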
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1v2:wisig-oracle.run1.limited",
"device": "cuda",
"lr": 0.0001,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10",
"1-12",
"1-14",
"1-16",
"1-18",
"1-19",
"1-8",
"10-11",
"10-17",
"10-4",
"10-7",
"11-1",
"11-10",
"11-19",
"11-20",
"11-4",
"11-7",
"12-19",
"12-20",
"12-7",
"13-14",
"13-18",
"13-19",
"13-20",
"13-3",
"13-7",
"14-10",
"14-11",
"14-12",
"14-13",
"14-14",
"14-19",
"14-20",
"14-7",
"14-8",
"14-9",
"15-1",
"15-19",
"15-6",
"16-1",
"16-16",
"16-19",
"16-20",
"17-10",
"17-11",
"18-1",
"18-10",
"18-11",
"18-12",
"18-13",
"18-14",
"18-15",
"18-16",
"18-17",
"18-19",
"18-2",
"18-20",
"18-4",
"18-5",
"18-7",
"18-8",
"18-9",
"19-1",
"19-10",
"19-11",
"19-12",
"19-13",
"19-14",
"19-15",
"19-19",
"19-2",
"19-20",
"19-3",
"19-4",
"19-6",
"19-7",
"19-8",
"19-9",
"2-1",
"2-13",
"2-15",
"2-3",
"2-4",
"2-5",
"2-6",
"2-7",
"2-8",
"20-1",
"20-12",
"20-14",
"20-15",
"20-16",
"20-18",
"20-19",
"20-20",
"20-3",
"20-4",
"20-5",
"20-7",
"20-8",
"3-1",
"3-13",
"3-18",
"3-2",
"3-8",
"4-1",
"4-10",
"4-11",
"5-1",
"5-5",
"6-1",
"6-15",
"6-6",
"7-10",
"7-11",
"7-12",
"7-13",
"7-14",
"7-7",
"7-8",
"7-9",
"8-1",
"8-13",
"8-14",
"8-18",
"8-20",
"8-3",
"8-8",
"9-1",
"9-7",
],
"domains": [1, 2, 3, 4],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/root/csc500-main/datasets/wisig.node3-19.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "Wisig_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_10kExamples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1",
},
],
"dataset_seed": 154325,
"seed": 154325,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we dont supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
    if x_transforms == []: x_transform = None
    else: x_transform = get_chained_transform(x_transforms)
    if episode_transforms == []: episode_transform = None
    else: raise Exception("episode_transforms not implemented")
    # Regardless of the (unimplemented) episode_transforms, every (domain, episode) tuple
    # gets its domain re-labelled with this dataset's domain_prefix
    episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easyfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
1-Connection.ipynb | ###Markdown
 cx_Oracle 8 Connection ArchitectureDocumentation reference link: [Introduction to cx_Oracle](https://cx-oracle.readthedocs.io/en/latest/user_guide/introduction.html)  InstallationDocumentation reference link: [cx_Oracle 8 Installation](https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html) **Install cx_Oracle**Install with a command like one of the following:```$ python -m pip install cx_Oracle --upgrade$ python -m pip install cx_Oracle --upgrade --user$ python -m pip install cx_Oracle --upgrade --user --proxy=http://proxy.example.com:80``` **Install Oracle Instant Client**Only needed if Python is run on a computer that does **not** have Oracle Database installed.Download and extract the Basic or Basic Light package from [oracle.com/database/technologies/instant-client.html](https://www.oracle.com/database/technologies/instant-client.html).Make sure to download the correct architecture for your operating system. If your Python is 32-bit, then you will need a 32-bit Instant Client.Installation can be automated:On Windows:```wget https://download.oracle.com/otn_software/[...]/instantclient-basic-windows.x64-19.12.0.0.0dbru.zipunzip instantclient-basic-windows.x64-19.12.0.0.0dbru.zip```On macOS:```cd $HOME/Downloadscurl -O https://download.oracle.com/otn_software/mac/instantclient/198000/instantclient-basic-macos.x64-19.8.0.0.0dbru.dmghdiutil mount instantclient-basic-macos.x64-19.8.0.0.0dbru.dmg/Volumes/instantclient-basic-macos.x64-19.8.0.0.0dbru/install_ic.shhdiutil unmount /Volumes/instantclient-basic-macos.x64-19.8.0.0.0dbru``` **Other Install Choices**On Linux you can alternatively install cx_Oracle and Instant Client RPM packages from yum.oracle.com, see [yum.oracle.com/oracle-linux-python.html](https://yum.oracle.com/oracle-linux-python.html) InitializationDocumentation reference link: [cx_Oracle 8 Initialization](https://cx-oracle.readthedocs.io/en/latest/user_guide/initialaization.html)When you run cx_Oracle it needs to be able to load the Oracle Client libraries. There are several ways this can be done.
###Code
import cx_Oracle
import os
import sys
import platform
try:
if platform.system() == "Darwin":
cx_Oracle.init_oracle_client(lib_dir=os.environ.get("HOME")+"/instantclient_19_8")
elif platform.system() == "Windows":
cx_Oracle.init_oracle_client(lib_dir=r"C:\oracle\instantclient_19_14")
# else assume system library search path includes Oracle Client libraries
# On Linux, must use ldconfig or set LD_LIBRARY_PATH, as described in installation documentation.
except Exception as err:
print("Whoops!")
print(err);
sys.exit(1);
###Output
_____no_output_____
###Markdown
Connecting to a Database**Connections are used for executing SQL, PL/SQL and SODA calls in an Oracle Database**Documentation reference link: [Connecting to Oracle Database](https://cx-oracle.readthedocs.io/en/latest/user_guide/connection_handling.html)
###Code
# Credentials
un = "pythondemo"
pw = "welcome"
###Output
_____no_output_____
###Markdown
Instead of hard coding the password, you could prompt for a value, pass it as an environment variable, or use Oracle "external authentication". Easy Connect Syntax: "hostname/servicename"
###Code
cs = "localhost/orclpdb1"
connection = cx_Oracle.connect(user=un, password=pw, dsn=cs)
print(connection)
###Output
<cx_Oracle.Connection to pythondemo@localhost/orclpdb1>
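###Markdown
 As mentioned above, the password does not have to be hard coded: it can be read from the environment or prompted for interactively. A minimal sketch (the environment variable name `PYTHON_PASSWORD` is just an illustrative choice, not part of the original notebook):
```
import os
import getpass

pw = os.environ.get("PYTHON_PASSWORD")        # e.g. exported in the shell before starting Jupyter
if pw is None:
    pw = getpass.getpass("Enter password: ")  # otherwise fall back to an interactive prompt
```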
###Markdown
Oracle Client 19c has improved [Easy Connect Plus](https://www.oracle.com/pls/topic/lookup?ctx=dblatest&id=GUID-8C85D289-6AF3-41BC-848B-BF39D32648BA) syntax:```cs = "tcps://my.cloud.com:1522/orclpdb1?connect_timeout=4&expire_time=10"``` Oracle Network and Oracle Client Configuration Files**Optional configuration files can be used to alter connection behaviors, such as network encryption.**Documentation reference link: [Optional configuration files](https://cx-oracle.readthedocs.io/en/latest/user_guide/initialization.html#optional-oracle-net-configuration-files)
###Code
# tnsnames.ora in /opt/oracle/configdir
highperfdb = (description=
(retry_count=5)(retry_delay=3)
(address=(protocol=tcps)(port=1522)(host=xxxxxx.oraclecloud.com))
(connect_data=(service_name=yyyyyyyyyy.oraclecloud.com))
(security=(ssl_server_cert_dn=
"CN=zzzzzzzz.oraclecloud.com,OU=Oracle ADB,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))
# sqlnet.ora in /opt/oracle/configdir
sqlnet.outbound_connect_timeout=5
sqlnet.expire_time=2
sqlnet.encryption_client = required
sqlnet.encryption_types_client = (AES256)
<?xml version="1.0"?>
<!--
oraaccess.xml in /opt/oracle/configdir
-->
<oraaccess xmlns="http://xmlns.oracle.com/oci/oraaccess"
xmlns:oci="http://xmlns.oracle.com/oci/oraaccess"
schemaLocation="http://xmlns.oracle.com/oci/oraaccess
http://xmlns.oracle.com/oci/oraaccess.xsd">
<default_parameters>
<statement_cache>
<size>100</size>
</statement_cache>
<result_cache>
<max_rset_rows>100</max_rset_rows>
<max_rset_size>10K</max_rset_size>
<max_size>64M</max_size>
</result_cache>
</default_parameters>
</oraaccess>
###Output
_____no_output_____
###Markdown
With the files above in `/opt/oracle/configdir`, your python application can look like:``` myapp.pycx_Oracle.init_oracle_client( lib_dir=os.environ.get("HOME")+"/instantclient_19_3", config_dir="/opt/oracle/configdir")connection = cx_Oracle.connect(user=un, password=pw, dsn="highperfdb")``` Connection Types Standalone ConnectionsStandalone connections are simple to create. 
###Code
# Stand-alone Connections
connection = cx_Oracle.connect(user=un, password=pw, dsn=cs)
print(connection)
###Output
<cx_Oracle.Connection to pythondemo@localhost/orclpdb1>
###Markdown
Pooled Connections Pools are highly recommended if you have:- a lot of connections that will be used for short periods of time- or a small number of connections that are idle for long periods of time Pool advantages- Reduced cost of setting up and tearing down connections- Dead connection detection- Connection- and runtime- load balancing (CLB and RLB)- Support for Application Continuity- Support for DRCP 
###Code
# Pooled Connections
# Call once during application initialization
pool = cx_Oracle.SessionPool(user=un, password=pw, dsn=cs, threaded=True,
min=1, max=20, increment=1)
# Get a connection when needed in the application body
with pool.acquire() as connection:
# do_something_useful(connection)
print("Got a connection")
###Output
Got a connection
###Markdown
**Tip** Use a fixed size pool `min` = `max` and `increment = 0`. See [Guideline for Preventing Connection Storms: Use Static Pools](https://www.oracle.com/pls/topic/lookup?ctx=dblatest&id=GUID-7DFBA826-7CC0-4D16-B19C-31D168069B54). Setting Connection "Session" StateDocumentation reference link: [Session CallBacks for Setting Pooled Connection State](https://cx-oracle.readthedocs.io/en/latest/user_guide/connection_handling.htmlsession-callbacks-for-setting-pooled-connection-state)Use a 'session callback' to efficiently set state such as NLS settings.Session state is stored in each session in the pool and will be available to the next user of the session. (Note this is different to transaction state which is rolled back when connections are released to the pool)
###Code
# Set some NLS state for a connection: Only invoked for new sessions
def initSession(connection, requestedTag):
cursor = connection.cursor()
cursor.execute("""ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI'
NLS_LANGUAGE = GERMAN""")
# Create the pool with session callback defined
pool = cx_Oracle.SessionPool(user=un, password=pw, dsn=cs,
sessionCallback=initSession, min=1, max=4, increment=1, threaded=True)
# Acquire a connection from the pool (will always have the new NLS setting)
with pool.acquire() as connection:
with connection.cursor() as cursor:
cursor.execute("""SELECT * FROM DOES_NOT_EXIST""") # Error message is in French
###Output
_____no_output_____
###Markdown
The callback has an optional 'tagging' capability (not shown) that allows different connections to have different state for different application requirements. Callback benefit comparisonFor a simple web service that is invoked 1000 times, and does 1000 queries. Closing Connections Close connections when not needed. This is important for pooled connections.```connection.close()```To avoid resource closing order issues, you may want to use `with` or let resources be closed at end of scope:```with pool.acquire() as connection: do_something(connection)``` Database Resident Connection Pooling**Connection pooling on the database tier**Documentation reference link: [Database Resident Connection Pooling (DRCP)](https://cx-oracle.readthedocs.io/en/latest/user_guide/connection_handling.htmldatabase-resident-connection-pooling-drcp) Dedicated server processes are the default in the database, but DRCP is an alternative when the database server is short of memory.  Use DRCP if and only if:- The database computer doesn't have enough memory to keep all application connections open concurrently- When you have thousands of users which need access to a database server session for a short period of time- Applications mostly use same database credentials, and have identical session settingsUsing DRCP in conjunction with Python Connection Pooling is recommended. Memory example with 5000 application users and a DRCP pool of size 100 In Python, the connect string must request a pooled server. For best reuse, set a connection class and use the 'SELF' purity when getting a connection from the pool.```pool = cx_Oracle.SessionPool(user=un, password=pw, dsn="dbhost.example.com/orcl:pooled")connection = pool.acquire(cclass="MYCLASS", purity=cx_Oracle.ATTR_PURITY_SELF)```Don't forget to start the pool first!:```SQL> execute dbms_connection_pool.start_pool()``` Connecting to Autonomous Database in Oracle Cloud If you haven't seen it, try our "Always Free" service that gives free access to Oracle DB and other cloud resources ADB connections use "wallets" for mutual TLS to provide strong security.Click the "DB Connection" button:  And download the wallet zip:  Unzip and extract the `cwallet.sso` file, and optionally the `tnsnames.ora` and `sqlnet.ora` files.```-rw-r--r-- 1 cjones staff 6725 15 Aug 00:12 cwallet.sso-rw-r--r-- 1 cjones staff 134 15 Aug 10:13 sqlnet.ora-rw-r--r-- 1 cjones staff 1801 15 Aug 00:12 tnsnames.ora```Keep `cwallet.sso` secure.
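Before the cloud example below, here is a sketch of the session 'tagging' capability mentioned earlier (the tag string and NLS setting are illustrative assumptions, not taken from the original notebook):
```
# The callback receives the tag that pool.acquire() requested
def init_session(connection, requested_tag):
    if requested_tag == "NLS_DATE_FORMAT=YYYY-MM-DD":
        with connection.cursor() as cursor:
            cursor.execute("ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD'")
    connection.tag = requested_tag  # record the state this session now holds

pool = cx_Oracle.SessionPool(user=un, password=pw, dsn=cs, min=1, max=4, increment=1,
                             threaded=True, sessionCallback=init_session)

# Sessions already carrying a matching tag skip the callback and the ALTER SESSION round trip
with pool.acquire(tag="NLS_DATE_FORMAT=YYYY-MM-DD") as connection:
    pass  # use the connection
```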
###Code
# You still need a DB username and password.
cloud_user = "cj"
cloud_password = os.environ.get("CLOUD_PASSWORD")
# "Easy Connect" syntax can be inferred from the tnsnames.ora file entries.
# The wallet_location is the directory containing cwallet.sso.
# When using "Easy Connect", no other files from the zip are needed
# cloud_connect_string = "cjdbmelb_high"
cloud_cs = "tcps://abc.oraclecloud.com:1522/anc_cjdbmelb_high.adb.oraclecloud.com" \
"?wallet_location=/home/cjones/CJDBMELB/"
connection = cx_Oracle.connect(user=cloud_user, password=cloud_password, dsn=cloud_cs)
with connection.cursor() as cursor:
sql = "select user from dual"
for r, in cursor.execute(sql):
print("User is", r)
###Output
_____no_output_____ |
[03 - Results]/dos results ver 4/models/iter-2/fft_r13-i2.ipynb | ###Markdown
Module Imports for Data Fetching and Visualization
###Code
import time
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Module Imports for Data Processing
###Code
from sklearn import preprocessing
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
import pickle
###Output
_____no_output_____
###Markdown
Importing Dataset from GitHub Train Data
###Code
df1 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-15-m-1-r13.csv?token=AKVFSOBQE7WEVKJNPHBZ7ZK63JGKG')
df2 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-15-m-11-r13.csv?token=AKVFSOCO6G7PCJACTCL6OTK63JGKO')
df3 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-4-m-1-r13.csv?token=AKVFSOGQYRVCLSVVEOETOXC63JGKS')
df4 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-4-m-11-r13.csv?token=AKVFSOB3HO5P6GDK67P6K4S63JGKW')
df5 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-6-m-1-r13.csv?token=AKVFSODFTZSKE5ITXKTDLXS63JGK2')
df6 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-6-m-11-r13.csv?token=AKVFSOF4G7RX52XQGBQHEIS63JGK6')
df7 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-9-m-1-r13.csv?token=AKVFSOAMLOTS5B3J2FMHWFS63JGLC')
df8 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-9-m-11-r13.csv?token=AKVFSOGLVST7JJQOPNQGS6C63JGLG')
df9 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-normal-n-0-15-r13.csv?token=AKVFSOAU4VSVFXRBDCFEQ6C63JGLM')
df10 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-normal-n-0-4-r13.csv?token=AKVFSOFV7M6DTXAWFI5NP3S63JGLQ')
df11 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-normal-n-0-6-r13.csv?token=AKVFSOFC42FQFWLHL7WWRQC63JGLW')
df12 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-normal-n-0-9-r13.csv?token=AKVFSOGKXSWEWNMUSHWBJRC63JGL4')
print(df1.shape)
print(df2.shape)
print(df3.shape)
print(df4.shape)
print(df5.shape)
print(df6.shape)
print(df7.shape)
print(df8.shape)
print(df9.shape)
print(df10.shape)
print(df11.shape)
print(df12.shape)
df = df1.append(df2, ignore_index=True,sort=False)
df = df.append(df3, ignore_index=True,sort=False)
df = df.append(df4, ignore_index=True,sort=False)
df = df.append(df5, ignore_index=True,sort=False)
df = df.append(df6, ignore_index=True,sort=False)
df = df.append(df7, ignore_index=True,sort=False)
df = df.append(df8, ignore_index=True,sort=False)
df = df.append(df9, ignore_index=True,sort=False)
df = df.append(df10, ignore_index=True,sort=False)
df = df.append(df11, ignore_index=True,sort=False)
df = df.append(df12, ignore_index=True,sort=False)
df = df.sort_values('timestamp')
df.to_csv('fft-r13-train.csv',index=False)
df = pd.read_csv('fft-r13-train.csv')
df
df.shape
###Output
_____no_output_____
###Markdown
Test Data
###Code
df13 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-15-m-12-r13.csv?token=AKVFSOAXXM3UU5GITOILMOC63JHPW')
df14 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-15-m-7-r13.csv?token=AKVFSOFKNO7OWYS5BRNHQZK63JHP4')
df15 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-4-m-12-r13.csv?token=AKVFSOGYU422BL6DF7RJ6HS63JHQA')
df16 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-4-m-7-r13.csv?token=AKVFSODPPLDKWC3NVAKQGVK63JHQE')
df17 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-6-m-12-r13.csv?token=AKVFSOBEBVBBJKZXWSQLBQ263JHQI')
df18 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-6-m-7-r13.csv?token=AKVFSOEX7RZOWF53SBAXS6K63JHQO')
df19 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-9-m-12-r13.csv?token=AKVFSOE3CS3PGH25O4QDKOC63JHQS')
df20 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r13/2-fft-malicious-n-0-9-m-7-r13.csv?token=AKVFSOCLBNZLGDTYRDK7KKS63JHQW')
print(df13.shape)
print(df14.shape)
print(df15.shape)
print(df16.shape)
print(df17.shape)
print(df18.shape)
print(df19.shape)
print(df20.shape)
df5
###Output
_____no_output_____
###Markdown
Processing
###Code
df.isnull().sum()
df = df.drop(columns=['timestamp','src_ni','src_router','dst_ni','dst_router'])
df.corr()
plt.figure(figsize=(25,25))
sns.heatmap(df.corr(), annot = True)
plt.show()
def find_correlation(data, threshold=0.9):
corr_mat = data.corr()
corr_mat.loc[:, :] = np.tril(corr_mat, k=-1)
already_in = set()
result = []
for col in corr_mat:
perfect_corr = corr_mat[col][abs(corr_mat[col])> threshold].index.tolist()
if perfect_corr and col not in already_in:
already_in.update(set(perfect_corr))
perfect_corr.append(col)
result.append(perfect_corr)
select_nested = [f[1:] for f in result]
select_flat = [i for j in select_nested for i in j]
return select_flat
columns_to_drop = find_correlation(df.drop(columns=['target']))
columns_to_drop
#df = df.drop(columns=[''])
plt.figure(figsize=(21,21))
sns.heatmap(df.corr(), annot = True)
plt.show()
plt.figure(figsize=(25,25))
sns.heatmap(df.corr())
plt.show()
###Output
_____no_output_____
###Markdown
Processing Dataset for Training
###Code
train_X = df.drop(columns=['target'])
train_Y = df['target']
#standardization
x = train_X.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = train_X.columns
x_scaled = min_max_scaler.fit_transform(x)
train_X = pd.DataFrame(x_scaled)
train_X.columns = columns
train_X
train_X[train_X.duplicated()].shape
test_X = df13.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router'])
test_Y = df13['target']
x = test_X.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X = pd.DataFrame(x_scaled)
test_X.columns = columns
print(test_X[test_X.duplicated()].shape)
test_X
test_X1 = df14.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router'])
test_Y1 = df14['target']
x = test_X1.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X1.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X1 = pd.DataFrame(x_scaled)
test_X1.columns = columns
print(test_X1[test_X1.duplicated()].shape)
test_X2 = df15.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router'])
test_Y2 = df15['target']
x = test_X2.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X2.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X2 = pd.DataFrame(x_scaled)
test_X2.columns = columns
print(test_X2[test_X2.duplicated()].shape)
test_X3 = df16.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router'])
test_Y3 = df16['target']
x = test_X3.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X3.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X3 = pd.DataFrame(x_scaled)
test_X3.columns = columns
print(test_X3[test_X3.duplicated()].shape)
test_X4 = df17.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router'])
test_Y4 = df17['target']
x = test_X4.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X4.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X4 = pd.DataFrame(x_scaled)
test_X4.columns = columns
print(test_X4[test_X4.duplicated()].shape)
test_X5 = df18.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router'])
test_Y5 = df18['target']
x = test_X5.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X5.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X5 = pd.DataFrame(x_scaled)
test_X5.columns = columns
print(test_X5[test_X5.duplicated()].shape)
test_X6 = df19.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router'])
test_Y6 = df19['target']
x = test_X6.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X6.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X6 = pd.DataFrame(x_scaled)
test_X6.columns = columns
print(test_X6[test_X6.duplicated()].shape)
test_X7 = df20.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router'])
test_Y7 = df20['target']
x = test_X7.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X7.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X7 = pd.DataFrame(x_scaled)
test_X7.columns = columns
print(test_X7[test_X7.duplicated()].shape)
###Output
(0, 20)
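###Markdown
 The eight nearly identical scaling blocks above could be collapsed into one helper; a behavior-preserving sketch (not part of the original notebook), fitting a fresh `MinMaxScaler` per test frame exactly as the code above does:
```
def scale_test(frame):
    X = frame.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router'])
    x_scaled = preprocessing.MinMaxScaler().fit_transform(X.values)
    X = pd.DataFrame(x_scaled, columns=X.columns)
    return X, frame['target']

# e.g. test_X, test_Y = scale_test(df13)
```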
###Markdown
Machine Learning Models Module Imports for Data Processing and Report Generation in Machine Learning Models
###Code
from sklearn.model_selection import train_test_split
import statsmodels.api as sm
from sklearn import metrics
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import mean_squared_error
###Output
_____no_output_____
###Markdown
Labels: 0 - malicious, 1 - good
###Code
train_Y = df['target']
train_Y.value_counts()
###Output
_____no_output_____
###Markdown
Training and Validation Splitting of the Dataset
###Code
seed = 5
np.random.seed(seed)
X_train, X_test, y_train, y_test = train_test_split(train_X, train_Y, test_size=0.2, random_state=seed, shuffle=True)
###Output
_____no_output_____
###Markdown
Feature Selection
###Code
#SelectKBest for feature selection
bf = SelectKBest(score_func=chi2, k=17)
fit = bf.fit(X_train,y_train)
dfscores = pd.DataFrame(fit.scores_)
dfcolumns = pd.DataFrame(X_train.columns)
featureScores = pd.concat([dfcolumns,dfscores],axis=1)
featureScores.columns = ['Specs','Score']
print(featureScores.nlargest(17,'Score'))
featureScores.plot(kind='barh')
###Output
Specs Score
7 traversal_id 2358.444848
17 traversal_index 837.658820
14 max_packet_count 302.238327
15 packet_count_index 289.860370
12 packet_count_decr 151.658579
13 packet_count_incr 150.580663
0 outport 92.424032
6 vc 48.166788
16 port_index 40.117244
9 current_hop 15.726618
8 hop_count 13.831085
19 vnet_vc_cc_index 8.366634
1 inport 6.218698
2 cache_coherence_type 4.882277
18 cache_coherence_vnet_index 4.882277
4 flit_type 3.911649
11 enqueue_time 3.832814
###Markdown
Decision Tree Classifier
###Code
#decisiontreee
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
dt = DecisionTreeClassifier(max_depth=20,max_features=20,random_state = 42)
dt.fit(X_train,y_train)
pickle.dump(dt, open("dt-r12.pickle.dat", 'wb'))
y_pred_dt= dt.predict(X_test)
dt_score_train = dt.score(X_train,y_train)
print("Train Prediction Score",dt_score_train*100)
dt_score_test = accuracy_score(y_test,y_pred_dt)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X)
dt_score_test = accuracy_score(test_Y,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X1)
dt_score_test = accuracy_score(test_Y1,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X2)
dt_score_test = accuracy_score(test_Y2,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X3)
dt_score_test = accuracy_score(test_Y3,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X4)
dt_score_test = accuracy_score(test_Y4,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X5)
dt_score_test = accuracy_score(test_Y5,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X6)
dt_score_test = accuracy_score(test_Y6,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X7)
dt_score_test = accuracy_score(test_Y7,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
feat_importances = pd.Series(dt.feature_importances_, index=columns)
feat_importances.plot(kind='barh')
cm = confusion_matrix(y_test, y_pred_dt)
class_label = ["Anomalous", "Normal"]
df_cm = pd.DataFrame(cm, index=class_label,columns=class_label)
sns.heatmap(df_cm, annot=True, fmt='d')
plt.title("Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
print(classification_report(y_test,y_pred_dt))
dt_roc_auc = roc_auc_score(y_test, y_pred_dt)
fpr, tpr, thresholds = roc_curve(y_test, dt.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='DTree (area = %0.2f)' % dt_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.savefig('DT_ROC')
plt.show()
###Output
_____no_output_____
###Markdown
XGB Classifier
###Code
from xgboost import XGBClassifier
from xgboost import plot_importance
xgbc = XGBClassifier(max_depth=20,min_child_weight=1,n_estimators=500,random_state=42,learning_rate=0.2)
xgbc.fit(X_train,y_train)
pickle.dump(xgbc, open("xgbc-r13.pickle.dat", 'wb'))
y_pred_xgbc= xgbc.predict(X_test)
xgbc_score_train = xgbc.score(X_train,y_train)
print("Train Prediction Score",xgbc_score_train*100)
xgbc_score_test = accuracy_score(y_test,y_pred_xgbc)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X)
xgbc_score_test = accuracy_score(test_Y,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X1)
xgbc_score_test = accuracy_score(test_Y1,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X2)
xgbc_score_test = accuracy_score(test_Y2,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X3)
xgbc_score_test = accuracy_score(test_Y3,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X4)
xgbc_score_test = accuracy_score(test_Y4,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X5)
xgbc_score_test = accuracy_score(test_Y5,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X6)
xgbc_score_test = accuracy_score(test_Y6,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X7)
xgbc_score_test = accuracy_score(test_Y7,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
plot_importance(xgbc)
plt.show()
cm = confusion_matrix(y_test, y_pred_xgbc)
class_label = ["Anomalous", "Normal"]
df_cm = pd.DataFrame(cm, index=class_label,columns=class_label)
sns.heatmap(df_cm, annot=True, fmt='d')
plt.title("Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
print(classification_report(y_test,y_pred_xgbc))
xgb_roc_auc = roc_auc_score(y_test, y_pred_xgbc)
fpr, tpr, thresholds = roc_curve(y_test, xgbc.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='XGBoost (area = %0.2f)' % xgb_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.savefig('XGB_ROC')
plt.show()
###Output
_____no_output_____ |
Tensorflow/Simple_models/Linear_Regression.ipynb | ###Markdown
Generate Data
###Code
x_min = 0
x_max = 7
n = 500
seed = 45
noise_std = 2
np.random.seed(seed)
train_X = np.random.uniform(x_min,x_max,n)
theta = np.random.randn(2,1)*2
train_Y = theta[0] + theta[1]*train_X + np.random.normal(0,noise_std,n)
plt.scatter(train_X,train_Y);
###Output
_____no_output_____
###Markdown
Set model inputs and weights
###Code
# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")
# Set model weights
W = tf.Variable(5., name="weight")
b = tf.Variable(4., name="bias")
###Output
_____no_output_____
###Markdown
Construct a linear model
###Code
pred = tf.add(tf.multiply(X, W), b)
###Output
_____no_output_____
###Markdown
Hyperparameters
###Code
learning_rate = 0.1
epochs = 100
display_step = 10
###Output
_____no_output_____
###Markdown
Loss and optimizer
###Code
# Mean squared error
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n)
# Gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
###Output
WARNING: Logging before flag parsing goes to stderr.
W0712 17:04:46.321293 140190995961664 deprecation.py:323] From /root/environments/my_env/lib/python3.6/site-packages/tensorflow/python/ops/math_grad.py:1205: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
Create Init method
###Code
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
Train linear model
###Code
# Start training
with tf.Session() as sess:
sess.run(init)
# Fit all training data
for epoch in range(epochs):
for (x, y) in zip(train_X, train_Y):
sess.run(optimizer, feed_dict={X: x, Y: y})
#Display logs per epoch step
if (epoch+1) % display_step == 0:
c = sess.run(cost, feed_dict={X: train_X, Y:train_Y})
print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c),"\n","W=", sess.run(W), "b=", sess.run(b))
print("Optimization Finished!")
training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')
print("Real W = {0}, b = {1}".format(theta[1],theta[0]))
#Graphic display
plt.plot(train_X, train_Y, 'ro', label='Original data')
plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
plt.legend()
plt.show()
###Output
Epoch: 0010 cost= 2.502203465
W= 3.4246387 b= 3.1051524
Epoch: 0020 cost= 2.319518566
W= 3.5147514 b= 2.6881714
Epoch: 0030 cost= 2.206198454
W= 3.5857248 b= 2.3597496
Epoch: 0040 cost= 2.135903358
W= 3.6416283 b= 2.1010659
Epoch: 0050 cost= 2.092302799
W= 3.685658 b= 1.8973271
Epoch: 0060 cost= 2.065258265
W= 3.720334 b= 1.736856
Epoch: 0070 cost= 2.048485041
W= 3.747649 b= 1.6104653
Epoch: 0080 cost= 2.038081646
W= 3.769162 b= 1.5109161
Epoch: 0090 cost= 2.031630039
W= 3.7861066 b= 1.4325076
Epoch: 0100 cost= 2.027629137
W= 3.799454 b= 1.3707496
Optimization Finished!
Training cost= 2.0276291 W= 3.799454 b= 1.3707496
Real W = [3.84789033], b = [1.20685109]
|
2. Automatic Differentiation.ipynb | ###Markdown
Automatic Differentiation> The **backpropagation** algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams.> **Backpropagation** is the key algorithm that makes training deep models computationally tractable. For modern neural networks, it can make training with gradient descent as much as ten million times faster, relative to a naive implementation. That’s the difference between a model taking a week to train and taking 200,000 years. (Christopher Olah, 2016)We have seen that in order to optimize our models we need to compute the derivative of the loss function with respect to all model paramaters. The computation of derivatives in computer models is addressed by four main methods: + Manually working out derivatives and coding the result (as in the original paper describing backpropagation); + Numerical differentiation (using finite difference approximations); + Symbolic differentiation (using expression manipulation in software, such as Sympy); + and Automatic differentiation (AD).**Automatic differentiation** (AD) works by systematically applying the **chain rule** of differential calculus at the elementary operator level.Let $ y = f(g(x)) $ our target function. In its basic form, the chain rule states:$$ \frac{\partial f}{\partial x} = \frac{\partial f}{\partial g} \frac{\partial g}{\partial x} $$or, if there are more than one variable $g_i$ in-between $y$ and $x$ (f.e. if $f$ is a two dimensional function such as $f(g_1(x), g_2(x))$), then:$$ \frac{\partial f}{\partial x} = \sum_i \frac{\partial f}{\partial g_i} \frac{\partial g_i}{\partial x} $$> See https://www.math.hmc.edu/calculus/tutorials/multichainrule/Now, let's see how AD allows the accurate evaluation of derivatives at machine precision, with only a small constant factor of overhead.In its most basic description, AD relies on the fact that all numerical computationsare ultimately compositions of a finite set of elementary operations for which derivatives are known. For example, let's consider the computation of the derivative of this function, that represents a 1-layer neural network model:$$ f(x) = \frac{1}{1 + e^{- ({w}^T \cdot x + b)}} $$First, let's write how to evaluate $f(x)$ via a sequence of primitive operations:```pythonx = ?f1 = w * xf2 = f1 + bf3 = -f2f4 = 2.718281828459 ** f3f5 = 1.0 + f4f = 1.0/f5```The question mark indicates that $x$ is a value that must be provided. This *program* can compute the value of $x$ and also **populate program variables**. We can evaluate $\frac{\partial f}{\partial x}$ at some $x$ by using the chain rule. This is called *forward-mode differentiation*. In our case:
###Code
def f(x,w,b):
f1 = w * x
f2 = f1 + b
f3 = -f2
f4 = 2.718281828459 ** f3
f5 = 1.0 + f4
return 1.0/f5
def dfdx_forward(x, w, b):
f1 = w * x
p1 = w # p1 = df1/dx
f2 = f1 + b
p2 = p1 * 1.0 # p2 = p1 * df2/df1
f3 = -f2
p3 = p2 * -1.0 # p3 = p2 * df3/df2
f4 = 2.718281828459 ** f3
p4 = p3 * 2.718281828459 ** f3 # p4 = p3 * df4/df3
f5 = 1.0 + f4
p5 = p4 * 1.0 # p5 = p4 * df5/df4
f6 = 1.0/f5
dfx = p5 * -1.0 / f5 ** 2.0 # df/dx = p5 * df6/df5
return f6, dfx
der = (f(3+0.00001, 2, 1) - f(3, 2, 1))/0.00001
print("Value of the function at (3, 2, 1): ",f(3, 2, 1))
print("df/dx Derivative (fin diff) at (3, 2, 1): ",der)
print("df/dx Derivative (aut diff) at (3, 2, 1): ",dfdx_forward(3, 2, 1)[1])
###Output
Value of the function at (3, 2, 1): 0.9990889488055992
df/dx Derivative (fin diff) at (3, 2, 1): 0.0018204242002717306
df/dx Derivative (aut diff) at (3, 2, 1): 0.0018204423602438651
###Markdown
It is interesting to note that this *program* can be automatically derived if we have access to **subroutines implementing the derivatives of primitive functions** (such as $\exp{(x)}$ or $1/x$) and all intermediate variables are computed in the right order. It is also interesting to note that AD allows the accurate evaluation of derivatives at **machine precision**, with only a small constant factor of overhead. > ** Exercise: ** Write an automatic differentiation program to compute $\partial f/ \partial w$ and $\partial f/\partial b$.
###Code
def f(x,w,b):
f1 = w * x
f2 = f1 + b
f3 = -f2
f4 = 2.718281828459 ** f3
f5 = 1.0 + f4
return 1.0/f5
# solution code
def dfdx_forward_w(x, w, b):
pass
def dfdx_forward_b(x, w, b):
pass
print("df/dw Derivative (aut diff) at (3, 2, 1): ",
dfdx_forward_w(3, 2, 1))
print("df/db Derivative (aut diff) at (3, 2, 1): ",
dfdx_forward_b(3, 2, 1))
# approximate results (just for checking)
derw = (f(3, 2+0.00001, 1) - f(3, 2, 1))/0.00001
derb = (f(3, 2, 1+0.00001) - f(3, 2, 1))/0.00001
print("df/dw Derivative (fin diff) at (3, 2, 1): ",derw)
print("df/db Derivative (fin diff) at (3, 2, 1): ",derb)
###Output
df/dw Derivative (aut diff) at (3, 2, 1): None
df/db Derivative (aut diff) at (3, 2, 1): None
df/dw Derivative (fin diff) at (3, 2, 1): 0.0027306226724199685
df/db Derivative (fin diff) at (3, 2, 1): 0.0009102166464991511
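###Markdown
 One possible forward-mode solution for the exercise above, mirroring the structure of `dfdx_forward` (a sketch, not the author's solution): only the seed of the propagated partial changes, since $\partial f_1/\partial w = x$ and $\partial f_2/\partial b = 1$.
```
def dfdx_forward_w(x, w, b):
    f1 = w * x
    p1 = x                               # p1 = df1/dw
    f2 = f1 + b
    p2 = p1 * 1.0                        # p2 = p1 * df2/df1
    f3 = -f2
    p3 = p2 * -1.0                       # p3 = p2 * df3/df2
    f4 = 2.718281828459 ** f3
    p4 = p3 * 2.718281828459 ** f3       # p4 = p3 * df4/df3
    f5 = 1.0 + f4
    p5 = p4 * 1.0                        # p5 = p4 * df5/df4
    return p5 * -1.0 / f5 ** 2.0         # df/dw = p5 * df6/df5

def dfdx_forward_b(x, w, b):
    f1 = w * x
    f2 = f1 + b
    p2 = 1.0                             # p2 = df2/db
    f3 = -f2
    p3 = p2 * -1.0
    f4 = 2.718281828459 ** f3
    p4 = p3 * 2.718281828459 ** f3
    f5 = 1.0 + f4
    p5 = p4 * 1.0
    return p5 * -1.0 / f5 ** 2.0         # df/db
```
At (3, 2, 1) these return approximately 0.00273 and 0.00091, matching the finite-difference values printed above.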
###Markdown
Forward differentiation is efficient for functions $f : \mathbb{R}^n \rightarrow \mathbb{R}^m$ with $n << m$ (only $O(n)$ sweeps are necessary). This is not the case of neural networks!For cases $n >> m$ a different technique is needed. To this end, we will rewrite the chain rule as:$$\frac{\partial f}{\partial x} = \frac{\partial g}{\partial x} \frac{\partial f}{\partial g}$$to propagate derivatives backward from a given output. This is called *reverse-mode differentiation*. Reverse pass starts at the end (i.e. $\frac{\partial f}{\partial f} = 1$) and propagates backward to all dependencies.
###Code
def dfdx_backward(x, w, b):
import numpy as np
f1 = w * x
f2 = f1 + b
f3 = -f2
f4 = 2.718281828459 ** f3
f5 = 1.0 + f4
f = 1.0/f5
pf = 1.0 # pf = df/df
p5 = 1.0 * -1.0 / (f5 * f5) # p5 = pf * df/df5
p4 = p5 * 1.0 # p4 = p5 * df5/df4
p3 = p4 * np.log(2.718281828459) \
* 2.718281828459 ** f3 # p3 = p4 * df4/df3
p2 = p3 * -1.0 # p2 = p3 * df3/df2
p1 = p2 * 1.0 # p1 = p2 * df2/df1
dfx = p1 * w # df/dx = p1 * df1/dx
return f, dfx
print("df/dx Derivative (aut diff) at (3, 2, 1): ",
dfdx_backward(3, 2, 1)[1])
###Output
df/dx Derivative (aut diff) at (3, 2, 1): 0.0018204423602438348
###Markdown
> ** Exercise: ** Write an automatic differentiation program to compute $\partial f/ \partial w$ and $\partial f/\partial b$.
###Code
# solution code
def dfdx_backward(x, w, b):
pass
###Output
_____no_output_____
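###Markdown
 A possible reverse-mode solution for this exercise, reusing the backward sweep of `dfdx_backward` above (a sketch, not the author's solution): the reverse pass is identical up to `p1`, and one sweep yields both partials because $\partial f_1/\partial w = x$ and $\partial f_2/\partial b = 1$.
```
def dfdwb_backward(x, w, b):
    import numpy as np
    f1 = w * x
    f2 = f1 + b
    f3 = -f2
    f4 = 2.718281828459 ** f3
    f5 = 1.0 + f4
    f = 1.0 / f5
    p5 = 1.0 * -1.0 / (f5 * f5)                               # pf * df/df5
    p4 = p5 * 1.0                                             # p5 * df5/df4
    p3 = p4 * np.log(2.718281828459) * 2.718281828459 ** f3   # p4 * df4/df3
    p2 = p3 * -1.0                                            # p3 * df3/df2
    p1 = p2 * 1.0                                             # p2 * df2/df1
    dfw = p1 * x                                              # df/dw = p1 * df1/dw
    dfb = p2 * 1.0                                            # df/db = p2 * df2/db
    return f, dfw, dfb
```
Note that a single backward sweep produces both derivatives; this is exactly why reverse mode is preferred when there are many inputs and few outputs.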
###Markdown
In practice, reverse-mode differentiation is a two-stage process. In the first stage the original function code is run forward, populating $f_i$ variables. In the second stage, derivatives are calculated by propagating in reverse, from the outputs to the inputs. The most important property of reverse-mode differentiation is that it is **cheaper than forward-mode differentiation for functions with a high number of input variables**. In our case, $f : \mathbb{R}^n \rightarrow \mathbb{R}$, only one application of the reverse mode is sufficient to compute the full gradient of the function $\nabla f = \big( \frac{\partial y}{\partial x_1}, \dots ,\frac{\partial y}{\partial x_n} \big)$. This is the case of deep learning, where the number of input variables is very high. > As we have seen, AD relies on the fact that all numerical computations are ultimately compositions of a finite set of elementary operations for which derivatives are known. > For this reason, given a library of derivatives of all elementary functions in a deep neural network, we are able to compute the derivatives of the network with respect to all parameters at machine precision and apply stochastic gradient methods to its training. Without this automation process the design and debugging of optimization processes for complex neural networks with millions of parameters would be impossible. AutogradAutograd is a Python module (with only one function) that implements automatic differentiation.Autograd can automatically differentiate Python and Numpy code:+ It can handle most of Python’s features, including loops, if statements, recursion and closures.+ Autograd allows you to compute gradients of many types of data structures (Any nested combination of lists, tuples, arrays, or dicts).+ It can also compute higher-order derivatives.+ Uses reverse-mode differentiation (backpropagation) so it can efficiently take gradients of scalar-valued functions with respect to array-valued or vector-valued arguments.+ You can easily implement your custom gradients (good for speed, numerical stability, non-compliant code, etc).
###Code
import autograd.numpy as np
from autograd import grad
x1 = np.array([2, 5], dtype=float)
x2 = np.array([5, 2], dtype=float)
def test(x):
if x[0]>3:
return np.log(x[0]) + x[0]*x[1] - np.sin(x[1])
else:
return np.log(x[0]) + x[0]*x[1] + np.sin(x[1])
grad_test = grad(test)
print("({:.2f},{:.2f})".format(grad_test(x1)[0],grad_test(x1)[1]))
print("({:.2f},{:.2f})".format(grad_test(x2)[0],grad_test(x2)[1]))
###Output
(5.50,2.28)
(2.20,5.42)
###Markdown
The ``grad`` function:````grad(fun, argnum=0, *nary_op_args, **nary_op_kwargs)Returns a function which computes the gradient of `fun` with respect to positional argument number `argnum`. The returned function takes the same arguments as `fun`, but returns the gradient instead. The function `fun` should be scalar-valued. The gradient has the same type as the argument.``` Then, a simple (there is no bias term) logistic regression model for $n$-dimensional data like this$$ f(x) = \frac{1}{1 + \exp^{-(\mathbf{w}^T \mathbf{x})}} $$can be implemented in this way:
###Code
import autograd.numpy as np
from autograd import grad
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def logistic_predictions(weights, inputs):
return sigmoid(np.dot(inputs, weights))
def training_loss(weights, inputs, targets):
preds = logistic_predictions(weights, inputs)
loss = preds * targets + (1 - preds) * (1 - targets)
return -np.sum(np.log(loss))
def optimize(inputs, targets, training_loss):
# Optimize weights using gradient descent.
gradient_loss = grad(training_loss)
weights = np.zeros(inputs.shape[1])
print("Initial loss:", training_loss(weights, inputs, targets))
for i in range(100):
weights -= gradient_loss(weights, inputs, targets) * 0.01
print("Final loss:", training_loss(weights, inputs, targets))
return weights
# Build a toy dataset with 3d data
inputs = np.array([[0.52, 1.12, 0.77],
[0.88, -1.08, 0.15],
[0.52, 0.06, -1.30],
[0.74, -2.49, 1.39]])
targets = np.array([True, True, False, True])
weights = optimize(inputs, targets, training_loss)
print("Weights:", weights)
###Output
Initial loss: 2.772588722239781
Final loss: 1.0672706757870165
Weights: [ 0.48307366 -0.37057217 1.06937395]
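###Markdown
 The `argnum` option of `grad` (described above) selects which positional argument to differentiate with respect to. A small illustrative sketch reusing `training_loss` from the cell above (not part of the original notebook):
```
# Gradient with respect to the weights (argnum=0 is the default)
d_loss_d_weights = grad(training_loss, argnum=0)
# Gradient with respect to the inputs instead
d_loss_d_inputs = grad(training_loss, argnum=1)
print(d_loss_d_inputs(weights, inputs, targets).shape)  # same shape as `inputs`
```
Differentiating with respect to the inputs rather than the weights is handy when optimizing over samples, as in the last exercise of this notebook.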
###Markdown
Any complex function that can be decomposed into a set of elementary functions can be differentiated automatically, at machine precision, by this algorithm!**We no longer need to code complex derivatives to apply SGD! ** > ** Exercise: ** Make the necessary changes to the code below in order to compute a max-margin solution for a linear separation problem by using SGD.
###Code
%reset
import numpy as np
#Example dataset
N_samples_per_class = 100
d_dimensions = 2
x = np.vstack((np.random.randn(N_samples_per_class, d_dimensions),
np.random.randn(N_samples_per_class, d_dimensions)
+np.array([5,5])))
y = np.concatenate([-1.0*np.ones(N_samples_per_class),
1.*np.ones(N_samples_per_class)])
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
import autograd.numpy as np
from autograd import grad
def SVM_predictions(w, inputs):
return np.dot(w[0,:-1],inputs.T)+w[0,-1]
def SVM_training_loss(weights, inputs, targets):
pred = SVM_predictions(weights, inputs)
return np.sum((targets-pred)**2)/inputs.shape[0]
def optimize(inputs, targets, training_loss):
gradient_loss = grad(training_loss)
weights = np.zeros((1,inputs.shape[1]+1))
print("Initial loss:", training_loss(weights, inputs, targets))
for i in range(100000):
weights -= gradient_loss(weights, inputs, targets) * 0.001
if i%10000 == 0:
print(" Loss:", training_loss(weights, inputs, targets))
print("Final loss:", training_loss(weights, inputs, targets))
return weights
weights = optimize(x, y, SVM_training_loss)
print("Weights", weights)
delta = 0.1
xx = np.arange(-4.0, 10.0, delta)
yy = np.arange(-4.0, 10.0, delta)
XX, YY = np.meshgrid(xx, yy)
Xf = XX.flatten()
Yf = YY.flatten()
sz=XX.shape
test_data = np.concatenate([Xf[:,np.newaxis],Yf[:,np.newaxis]],axis=1)
Z = SVM_predictions(weights,test_data)
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
Z = np.reshape(Z,(xx.shape[0],xx.shape[0]))
plt.contour(XX,YY,Z,[0])
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
###Output
_____no_output_____
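###Markdown
 One way to approach the exercise above is to swap the squared error in `SVM_training_loss` for a hinge loss (the same loss used in the next exercise's solution code); a sketch of just the changed function:
```
def SVM_training_loss(weights, inputs, targets):
    pred = SVM_predictions(weights, inputs)
    return np.sum(np.maximum(0, 1 - targets * pred)) / inputs.shape[0]  # hinge (max-margin) loss
```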
###Markdown
> ** Exercise: ** Make the necessary changes to the code below in order to compute a **new sample** that is optimal for the classifier you have learned in the previous exercise.
###Code
%reset
import numpy as np
#Example dataset
N_samples_per_class = 100
d_dimensions = 2
x = np.vstack((np.random.randn(N_samples_per_class, d_dimensions),
np.random.randn(N_samples_per_class, d_dimensions)
+np.array([2,2])))
y = np.concatenate([-1.0*np.ones(N_samples_per_class),
1.*np.ones(N_samples_per_class)])
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
# solution code
import autograd.numpy as np
from autograd import grad
def SVM_predictions(w, inputs):
return np.dot(w[0,:-1],inputs.T)+w[0,-1]
def SVM_training_loss(weights, inputs, targets):
pred = SVM_predictions(weights, inputs)
return np.sum(np.maximum(0,1-targets*pred))/inputs.shape[0]
def optimize(inputs, targets, training_loss):
gradient_loss = grad(training_loss)
weights = np.array([[1.15196035, 1.06797815, -2.0131]])
print("Initial loss:", training_loss(weights, inputs, targets))
for i in range(10000):
weights -= gradient_loss(weights, inputs, targets) * 0.01
if i%1000 == 0:
print(" Loss:", training_loss(weights, inputs, targets))
print("Final loss:", training_loss(weights, inputs, targets))
return weights
weights = optimize(x, y, SVM_training_loss)
print("Weights", weights)
delta = 0.1
xx = np.arange(-4.0, 6.0, delta)
yy = np.arange(-4.0, 6.0, delta)
XX, YY = np.meshgrid(xx, yy)
Xf = XX.flatten()
Yf = YY.flatten()
sz=XX.shape
test_data = np.concatenate([Xf[:,np.newaxis],Yf[:,np.newaxis]],axis=1)
Z = SVM_predictions(weights,test_data)
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
Z = np.reshape(Z,(xx.shape[0],xx.shape[0]))
plt.contour(XX,YY,Z,[0])
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
###Output
_____no_output_____
###Markdown
Neural Network
###Code
%reset
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
import sklearn.linear_model
import matplotlib
import autograd.numpy as np
from autograd import grad
from autograd.misc.flatten import flatten
# Display plots inline and change default figure size
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (12.0, 6.0)
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.20)
plt.scatter(X[:,0], X[:,1], s=40, c=y, alpha=0.75)
###Output
_____no_output_____
###Markdown
Let's now build a 3-layer neural network with one input layer, one hidden layer, and one output layer. The number of nodes in the input layer is determined by the dimensionality of our data, 2. Similarly, the number of nodes in the output layer is determined by the number of classes we have, also 2. Our network makes predictions using forward propagation, which is just a bunch of matrix multiplications and the application of the activation function(s). If $x$ is the 2-dimensional input to our network then we calculate our prediction $\hat{y}$ (also two-dimensional) as follows:$$ z_1 = x W_1 + b_1 $$$$ a_1 = \mbox{tanh}(z_1) $$$$ z_2 = a_1 W_2 + b_2$$$$ a_2 = \mbox{softmax}({z_2})$$$W_1, b_1, W_2, b_2$ are parameters of our network, which we need to learn from our training data. You can think of them as matrices transforming data between layers of the network. Looking at the matrix multiplications above we can figure out the dimensionality of these matrices. If we use 500 nodes for our hidden layer then $W_1 \in \mathbb{R}^{2\times500}$, $b_1 \in \mathbb{R}^{500}$, $W_2 \in \mathbb{R}^{500\times2}$, $b_2 \in \mathbb{R}^{2}$. A common choice with the softmax output is the **cross-entropy loss**. If we have $N$ training examples and $C$ classes then the loss for our prediction $\hat{y}$ with respect to the true labels $y$ is given by:$$\begin{aligned}L(y,\hat{y}) = - \frac{1}{N} \sum_{n \in N} \sum_{i \in C} y_{n,i} \log\hat{y}_{n,i}\end{aligned}$$
###Code
num_examples = len(X) # training set size
nn_input_dim = 2 # input layer dimensionality
nn_output_dim = 2 # output layer dimensionality
# Gradient descent parameters
epsilon = 0.01 # learning rate for gradient descent
def calculate_loss(model):
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Forward propagation to calculate our predictions
z1 = np.dot(X,W1) + b1
a1 = np.tanh(z1)
z2 = np.dot(a1,W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Calculating the loss
corect_logprobs = -np.log(probs[range(num_examples), y])
data_loss = np.sum(corect_logprobs)
return 1./num_examples * data_loss
# output (0 or 1)
def predict(model, x):
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Forward propagation
z1 = np.dot(x,W1) + b1
a1 = np.tanh(z1)
z2 = np.dot(a1,W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
return np.argmax(probs, axis=1)
###Output
_____no_output_____
###Markdown
This program solves the optimization problem by using AD:
###Code
# This function learns parameters for the neural network and returns the model.
# - nn_hdim: Number of nodes in the hidden layer
# - num_passes: Number of passes through the training data for gradient descent
# - print_loss: If True, print the loss every 1000 iterations
def build_model(nn_hdim, num_passes=20000, print_loss=False):
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
b1 = np.zeros((1, nn_hdim))
W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
b2 = np.zeros((1, nn_output_dim))
# This is what we return at the end
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
# Gradient descent. For each batch...
for i in range(0, num_passes):
# Forward propagation
z1 = np.dot(X,model['W1']) + model['b1']
a1 = np.tanh(z1)
z2 = np.dot(a1,model['W2']) + model['b2']
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
gradient_loss = grad(calculate_loss)
model_flat, unflatten_m = flatten(model)
grad_flat, unflatten_g = flatten(gradient_loss(model))
model_flat -= grad_flat * 0.05
model = unflatten_m(model_flat)
# Optionally print the loss.
# This is expensive because it uses the whole dataset, so we don't want to do it too often.
if print_loss and i % 1000 == 0:
print("Loss after iteration %i: %f" %(i, calculate_loss(model)))
return model
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, alpha=0.45)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=0.45)
# Build a model with a 3-dimensional hidden layer
model = build_model(3, print_loss=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 3")
###Output
Loss after iteration 0: 0.578766
Loss after iteration 1000: 0.289271
Loss after iteration 2000: 0.233985
Loss after iteration 3000: 0.183354
Loss after iteration 4000: 0.148689
Loss after iteration 5000: 0.120565
Loss after iteration 6000: 0.102844
Loss after iteration 7000: 0.091903
Loss after iteration 8000: 0.085048
Loss after iteration 9000: 0.080741
Loss after iteration 10000: 0.077962
Loss after iteration 11000: 0.076086
Loss after iteration 12000: 0.074758
Loss after iteration 13000: 0.073776
Loss after iteration 14000: 0.073021
Loss after iteration 15000: 0.072423
Loss after iteration 16000: 0.071936
Loss after iteration 17000: 0.071532
Loss after iteration 18000: 0.071192
Loss after iteration 19000: 0.070900
###Markdown
Let's now get a sense of how varying the hidden layer size affects the result.
###Code
plt.figure(figsize=(16, 32))
hidden_layer_dimensions = [1, 2, 3, 4, 5, 20, 50]
for i, nn_hdim in enumerate(hidden_layer_dimensions):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer size %d' % nn_hdim)
model = build_model(nn_hdim)
plot_decision_boundary(lambda x: predict(model, x))
plt.show()
###Output
_____no_output_____
###Markdown
How to learn DL models?> The **backpropagation** algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams.> **Backpropagation** is the key algorithm that makes training deep models computationally tractable. For modern neural networks, it can make training with gradient descent as much as ten million times faster, relative to a naive implementation. That’s the difference between a model taking a week to train and taking 200,000 years. (Christopher Olah, 2016)We have seen that in order to optimize our models we need to compute the derivative of the loss function with respect to all model paramaters. For example, given:$$ L(y, f_{\mathbf w}(\mathbf{x})) = \frac{1}{n} \sum_i (y_i - f_{\mathbf w}(\mathbf{x}_i))^2 $$where ${\mathbf w} = (w_1, w_2, \dots, w_m)$, we need to compute:$$\frac{\delta L}{\delta w_i}$$The computation of derivatives in computer models is addressed by four main methods: + Manually working out derivatives and coding the result (as in the original paper describing backpropagation); + Numerical differentiation (using finite difference approximations); + Symbolic differentiation (using expression manipulation in software, such as Sympy); + and Automatic differentiation (AD). Automatic differentiation**Automatic differentiation** (AD) works by systematically applying the **chain rule** of differential calculus at the elementary operator level.Let $ y = f(g(x)) $ our target function. In its basic form, the chain rule states:$$ \frac{\partial f}{\partial x} = \frac{\partial f}{\partial g} \frac{\partial g}{\partial x} $$or, if there are more than one variable $g_i$ in-between $y$ and $x$ (f.e. if $f$ is a two dimensional function such as $f(g_1(x), g_2(x))$), then:$$ \frac{\partial f}{\partial x} = \sum_i \frac{\partial f}{\partial g_i} \frac{\partial g_i}{\partial x} $$> See https://www.math.hmc.edu/calculus/tutorials/multichainrule/Now, **let's see how AD allows the accurate evaluation of derivatives at machine precision**, with only a small constant factor of overhead.In its most basic description, AD relies on the fact that all numerical computationsare ultimately compositions of a finite set of elementary operations for which derivatives are known. For example, let's consider the computation of the derivative of this function, that represents a 1-layer neural network model:$$ f(x) = \frac{1}{1 + e^{- ({w}^T \cdot x + b)}} $$First, let's write how to evaluate $f(x)$ via a sequence of primitive operations:```pythonx = ? This is an arbitrary pointf1 = w * xf2 = f1 + bf3 = -f2f4 = 2.718281828459 ** f3f5 = 1.0 + f4f = 1.0/f5```The question mark indicates that $x$ is a value that must be provided. This *program* can compute the value of $f(x)$ and also **populate program variables**. By using this sequence, we can evaluate $\frac{\partial f}{\partial x}$ at any $x$ by using the chain rule. This is called *forward-mode differentiation*. In our case:
###Code
def f(x,w,b):
f1 = w * x
f2 = f1 + b
f3 = -f2
f4 = 2.718281828459 ** f3
f5 = 1.0 + f4
return 1.0/f5
def dfdx_forward(x, w, b):
f1 = w * x
p1 = w # p1 = df1/dx
f2 = f1 + b
p2 = p1 * 1.0 # p2 = p1 * df2/df1
f3 = -f2
p3 = p2 * -1.0 # p3 = p2 * df3/df2
f4 = 2.718281828459 ** f3
p4 = p3 * 2.718281828459 ** f3 # p4 = p3 * df4/df3
f5 = 1.0 + f4
p5 = p4 * 1.0 # p5 = p4 * df5/df4
f6 = 1.0/f5
dfx = p5 * -1.0 / f5 ** 2.0 # df/dx = p5 * df6/df5
return f6, dfx
der = (f(3+0.00001, 2, 1) - f(3, 2, 1))/0.00001
print("Value of the function at (3, 2, 1): ",f(3, 2, 1))
print("df/dx Derivative (fin diff) at (3, 2, 1): ",der)
print("df/dx Derivative (aut diff) at (3, 2, 1): ",dfdx_forward(3, 2, 1)[1])
###Output
Value of the function at (3, 2, 1): 0.9990889488055992
df/dx Derivative (fin diff) at (3, 2, 1): 0.0018204242002717306
df/dx Derivative (aut diff) at (3, 2, 1): 0.0018204423602438651
###Markdown
It is interesting to note that this *program* can be automatically derived if we have access to **subroutines implementing the derivatives of primitive functions** (such as $\exp{(x)}$ or $1/x$) and all intermediate variables are computed in the right order. It is also interesting to note that AD allows the accurate evaluation of derivatives at **machine precision**, with only a small constant factor of overhead. > **Exercise**: Write an automatic differentiation program to compute $\partial f/ \partial w$ and $\partial f/\partial b$.
###Code
def f(x,w,b):
f1 = w * x
f2 = f1 + b
f3 = -f2
f4 = 2.718281828459 ** f3
f5 = 1.0 + f4
return 1.0/f5
# solution code
def dfdx_forward_w(x, w, b):
    # forward-mode pass, seeded with df1/dw = x
    f1 = w * x
    p1 = x                            # df1/dw
    f2 = f1 + b
    p2 = p1 * 1.0                     # p2 = p1 * df2/df1
    f3 = -f2
    p3 = p2 * -1.0                    # p3 = p2 * df3/df2
    f4 = 2.718281828459 ** f3
    p4 = p3 * 2.718281828459 ** f3    # p4 = p3 * df4/df3
    f5 = 1.0 + f4
    p5 = p4 * 1.0                     # p5 = p4 * df5/df4
    return p5 * -1.0 / f5 ** 2.0      # df/dw = p5 * df6/df5
def dfdx_forward_b(x, w, b):
    # same trace; the only change is the seed: df1/db = 0, so p2 = df2/db = 1
    f1 = w * x
    f2 = f1 + b
    p2 = 1.0
    f3 = -f2
    p3 = p2 * -1.0
    f4 = 2.718281828459 ** f3
    p4 = p3 * 2.718281828459 ** f3
    f5 = 1.0 + f4
    p5 = p4 * 1.0
    return p5 * -1.0 / f5 ** 2.0      # df/db
print("df/dw Derivative (aut diff) at (3, 2, 1): ",
dfdx_forward_w(3, 2, 1))
print("df/db Derivative (aut diff) at (3, 2, 1): ",
dfdx_forward_b(3, 2, 1))
# approximate results (just for checking)
derw = (f(3, 2+0.00001, 1) - f(3, 2, 1))/0.00001
derb = (f(3, 2, 1+0.00001) - f(3, 2, 1))/0.00001
print("df/dw Derivative (fin diff) at (3, 2, 1): ",derw)
print("df/db Derivative (fin diff) at (3, 2, 1): ",derb)
###Output
df/dw Derivative (aut diff) at (3, 2, 1): None
df/db Derivative (aut diff) at (3, 2, 1): None
df/dw Derivative (fin diff) at (3, 2, 1): 0.0027306226724199685
df/db Derivative (fin diff) at (3, 2, 1): 0.0009102166464991511
###Markdown
Forward differentiation is efficient for functions $f : \mathbb{R}^n \rightarrow \mathbb{R}^m$ with $n << m$ (only $O(n)$ sweeps are necessary). This is not the case of neural networks!For cases $n >> m$ a different technique is needed. To this end, we will rewrite the chain rule as:$$\frac{\partial f}{\partial x} = \frac{\partial g}{\partial x} \frac{\partial f}{\partial g}$$to propagate derivatives backward from a given output. This is called *reverse-mode differentiation*. Reverse pass starts at the end (i.e. $\frac{\partial f}{\partial f} = 1$) and propagates backward to all dependencies.
###Code
def dfdx_backward(x, w, b):
import numpy as np
f1 = w * x
f2 = f1 + b
f3 = -f2
f4 = 2.718281828459 ** f3
f5 = 1.0 + f4
f = 1.0/f5
pf = 1.0 # pf = df/df
p5 = 1.0 * -1.0 / (f5 * f5) # p5 = pf * df/df5
p4 = p5 * 1.0 # p4 = p5 * df5/df4
p3 = p4 * np.log(2.718281828459) \
* 2.718281828459 ** f3 # p3 = p4 * df4/df3
p2 = p3 * -1.0 # p2 = p3 * df3/df2
p1 = p2 * 1.0 # p1 = p2 * df2/df1
dfx = p1 * w # df/dx = p1 * df1/dx
return f, dfx
print("df/dx Derivative (aut diff) at (3, 2, 1): ",
dfdx_backward(3, 2, 1)[1])
###Output
df/dx Derivative (aut diff) at (3, 2, 1): 0.0018204423602438348
###Markdown
> **Exercise**: Write an automatic differentiation program to compute $\partial f/ \partial w$ and $\partial f/\partial b$.
###Code
# solution code
def dfdx_backward(x, w, b):
pass
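# A possible way to complete the exercise (my own sketch, not the original
# solution): run the forward trace once, then one reverse sweep that yields
# both df/dw and df/db. The name dfdwb_backward is mine.
def dfdwb_backward(x, w, b):
    # forward pass: populate the intermediate variables
    f1 = w * x
    f2 = f1 + b
    f3 = -f2
    f4 = 2.718281828459 ** f3
    f5 = 1.0 + f4
    f = 1.0 / f5
    # reverse pass: propagate adjoints from the output towards the inputs
    p5 = -1.0 / (f5 * f5)             # df/df5
    p4 = p5 * 1.0                     # df/df4 = p5 * df5/df4
    p3 = p4 * 2.718281828459 ** f3    # df/df3 = p4 * df4/df3
    p2 = p3 * -1.0                    # df/df2 = p3 * df3/df2
    p1 = p2 * 1.0                     # df/df1 = p2 * df2/df1
    dfdw = p1 * x                     # df1/dw = x
    dfdb = p2 * 1.0                   # df2/db = 1
    return f, dfdw, dfdb
print("df/dw, df/db (aut diff, sketch) at (3, 2, 1):", dfdwb_backward(3, 2, 1)[1:])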
###Output
_____no_output_____
###Markdown
In practice, reverse-mode differentiation is a two-stage process. In the first stage the original function code is run forward, populating $f_i$ variables. In the second stage, derivatives are calculated by propagating in reverse, from the outputs to the inputs. The most important property of reverse-mode differentiation is that it is **cheaper than forward-mode differentiation for functions with a high number of input variables**. In our case, $f : \mathbb{R}^n \rightarrow \mathbb{R}$, only one application of the reverse mode is sufficient to compute the full gradient of the function $\nabla f = \big( \frac{\partial y}{\partial x_1}, \dots ,\frac{\partial y}{\partial x_n} \big)$. This is the case of deep learning, where the number of input variables is very high.

> As we have seen, AD relies on the fact that all numerical computations are ultimately compositions of a finite set of elementary operations for which derivatives are known.

> For this reason, given a library of derivatives of all elementary functions in a deep neural network, we are able to compute the derivatives of the network with respect to all parameters at machine precision and apply stochastic gradient methods to its training. Without this automation process the design and debugging of optimization processes for complex neural networks with millions of parameters would be impossible.

Autograd

Autograd is a Python module (with only one function) that implements automatic differentiation. Autograd can automatically differentiate Python and Numpy code:

+ It can handle most of Python’s features, including loops, if statements, recursion and closures.
+ Autograd allows you to compute gradients of many types of data structures (any nested combination of lists, tuples, arrays, or dicts).
+ It can also compute higher-order derivatives.
+ Uses reverse-mode differentiation (backpropagation) so it can efficiently take gradients of scalar-valued functions with respect to array-valued or vector-valued arguments.
+ You can easily implement your custom gradients (good for speed, numerical stability, non-compliant code, etc).
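For instance, higher-order derivatives come essentially for free by composing `grad`. The following small sketch (mine, not from the original text; the `tanh` definition and evaluation point are only illustrative) differentiates a function twice:

```python
import autograd.numpy as np
from autograd import grad

def tanh(x):
    return (1.0 - np.exp(-2 * x)) / (1.0 + np.exp(-2 * x))

d_tanh  = grad(tanh)       # first derivative
dd_tanh = grad(d_tanh)     # second derivative, just grad applied again

print(tanh(1.0), d_tanh(1.0), dd_tanh(1.0))
```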
###Code
import autograd.numpy as np
from autograd import grad
x1 = np.array([2, 2], dtype=float)
x2 = np.array([5, 2], dtype=float)
def test(x):
if x[0]>3:
return np.log(x[0]) + x[0]*x[1] - np.sin(x[1])
else:
return np.log(x[0]) + x[0]*x[1] + np.sin(x[1])
grad_test = grad(test)
print("({:.2f},{:.2f})".format(grad_test(x1)[0],grad_test(x1)[1]))
print("({:.2f},{:.2f})".format(grad_test(x2)[0],grad_test(x2)[1]))
###Output
(2.50,1.58)
(2.20,5.42)
###Markdown
Then, a simple (there is no bias term) logistic regression model for $n$-dimensional data like this$$ f(x) = \frac{1}{1 + \exp^{-(\mathbf{w}^T \mathbf{x})}} $$can be implemented in this way:
###Code
import autograd.numpy as np
from autograd import grad
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def logistic_predictions(weights, inputs):
return sigmoid(np.dot(inputs, weights))
def training_loss(weights, inputs, targets):
preds = logistic_predictions(weights, inputs)
loss = preds * targets + (1 - preds) * (1 - targets)
return -np.sum(np.log(loss))
def optimize(inputs, targets, training_loss):
# Optimize weights using gradient descent.
gradient_loss = grad(training_loss)
weights = np.zeros(inputs.shape[1])
print("Initial loss:", training_loss(weights, inputs, targets))
for i in range(100):
weights -= gradient_loss(weights, inputs, targets) * 0.01
print("Final loss:", training_loss(weights, inputs, targets))
return weights
# Build a toy dataset with 3d data
inputs = np.array([[0.52, 1.12, 0.77],
[0.88, -1.08, 0.15],
[0.52, 0.06, -1.30],
[0.74, -2.49, 1.39]])
targets = np.array([True, True, False, True])
weights = optimize(inputs, targets, training_loss)
print("Weights:", weights)
###Output
Initial loss: 2.772588722239781
Final loss: 1.0672706757870165
Weights: [ 0.48307366 -0.37057217 1.06937395]
###Markdown
Any complex function that can be decomposed in a set of elementary functions can be derived in an automatic way, at machine precision, by this algorithm!**We no longer need to code complex derivatives to apply SGD!** > **Exercise**: Make the necessary changes to the code below in order to compute a max-margin solution for a linear separation problem by using SGD.
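One way to approach this exercise is to replace the squared loss in the code below with the **hinge loss** $\max(0, 1 - t \cdot \mathrm{pred})$, which is zero for points classified with a margin of at least 1 and grows linearly otherwise. A tiny illustration of its behaviour (a sketch of mine, with made-up scores):

```python
import numpy as np

scores  = np.array([ 2.5,  0.3, -1.0])   # classifier scores for three points
targets = np.array([ 1.0,  1.0,  1.0])   # all three are labelled +1

# only the points with margin < 1 are penalized
print(np.maximum(0, 1 - targets * scores))   # [0.   0.7  2. ]
```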
###Code
%reset
import numpy as np
#Example dataset
N_samples_per_class = 100
d_dimensions = 2
x = np.vstack((np.random.randn(N_samples_per_class, d_dimensions),
np.random.randn(N_samples_per_class, d_dimensions)
+np.array([5,5])))
y = np.concatenate([-1.0*np.ones(N_samples_per_class),
1.*np.ones(N_samples_per_class)])
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
import autograd.numpy as np
from autograd import grad
def SVM_predictions(w, inputs):
return np.dot(w[0,:-1],inputs.T)+w[0,-1]
def SVM_training_loss(weights, inputs, targets):
pred = SVM_predictions(weights, inputs)
return np.sum((targets-pred)**2)/inputs.shape[0]
def optimize(inputs, targets, training_loss):
gradient_loss = grad(training_loss)
weights = np.zeros((1,inputs.shape[1]+1))
print("Initial loss:", training_loss(weights, inputs, targets))
for i in range(100000):
weights -= gradient_loss(weights, inputs, targets) * 0.001
if i%10000 == 0:
print(" Loss:", training_loss(weights, inputs, targets))
print("Final loss:", training_loss(weights, inputs, targets))
return weights
weights = optimize(x, y, SVM_training_loss)
print("Weights", weights)
delta = 0.1
xx = np.arange(-4.0, 10.0, delta)
yy = np.arange(-4.0, 10.0, delta)
XX, YY = np.meshgrid(xx, yy)
Xf = XX.flatten()
Yf = YY.flatten()
sz=XX.shape
test_data = np.concatenate([Xf[:,np.newaxis],Yf[:,np.newaxis]],axis=1)
Z = SVM_predictions(weights,test_data)
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
Z = np.reshape(Z,(xx.shape[0],xx.shape[0]))
plt.contour(XX,YY,Z,[0])
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
###Output
_____no_output_____
###Markdown
> **Exercise**: Make the necessary changes to the code below in order to compute a **new sample** that is optimal for the classifier you have learned in the previous exercise.
###Code
%reset
import numpy as np
#Example dataset
N_samples_per_class = 100
d_dimensions = 2
x = np.vstack((np.random.randn(N_samples_per_class, d_dimensions),
np.random.randn(N_samples_per_class, d_dimensions)
+np.array([2,2])))
y = np.concatenate([-1.0*np.ones(N_samples_per_class),
1.*np.ones(N_samples_per_class)])
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
# solution code
import autograd.numpy as np
from autograd import grad
def SVM_predictions(w, inputs):
return np.dot(w[0,:-1],inputs.T)+w[0,-1]
def SVM_training_loss(weights, inputs, targets):
pred = SVM_predictions(weights, inputs)
return np.sum(np.maximum(0,1-targets*pred))/inputs.shape[0]
def optimize(inputs, targets, training_loss):
gradient_loss = grad(training_loss)
weights = np.array([[1.15196035, 1.06797815, -2.0131]])
print("Initial loss:", training_loss(weights, inputs, targets))
for i in range(10000):
weights -= gradient_loss(weights, inputs, targets) * 0.01
if i%1000 == 0:
print(" Loss:", training_loss(weights, inputs, targets))
print("Final loss:", training_loss(weights, inputs, targets))
return weights
weights = optimize(x, y, SVM_training_loss)
print("Weights", weights)
delta = 0.1
xx = np.arange(-4.0, 6.0, delta)
yy = np.arange(-4.0, 6.0, delta)
XX, YY = np.meshgrid(xx, yy)
Xf = XX.flatten()
Yf = YY.flatten()
sz=XX.shape
test_data = np.concatenate([Xf[:,np.newaxis],Yf[:,np.newaxis]],axis=1)
Z = SVM_predictions(weights,test_data)
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
Z = np.reshape(Z,(xx.shape[0],xx.shape[0]))
plt.contour(XX,YY,Z,[0])
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
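# One possible way to finish the exercise (my own sketch, not the original
# solution): treat a candidate *sample* as the variable to optimize and run
# gradient descent on the hinge loss of that single point labelled +1, so it
# ends up comfortably on the positive side of the margin. Names are mine.
def sample_loss(sample, w):
    return SVM_training_loss(w, sample, np.array([1.0]))
grad_sample = grad(sample_loss)
new_sample = np.array([[0.0, 0.0]])          # arbitrary starting point
for i in range(1000):
    new_sample = new_sample - 0.05 * grad_sample(new_sample, weights)
print("New sample:", new_sample,
      "score:", SVM_predictions(weights, new_sample))
plt.scatter(new_sample[0, 0], new_sample[0, 1], color='red', marker='*', s=200)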
###Output
_____no_output_____
###Markdown
Neural Network
###Code
%reset
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
import sklearn.linear_model
import matplotlib
import autograd.numpy as np
from autograd import grad
from autograd.misc.flatten import flatten
# Display plots inline and change default figure size
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (12.0, 6.0)
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.20)
plt.scatter(X[:,0], X[:,1], s=40, c=y, alpha=0.75)
###Output
_____no_output_____
###Markdown
Let's now build a 3-layer neural network with one input layer, one hidden layer, and one output layer. The number of nodes in the input layer is determined by the dimensionality of our data, 2. Similarly, the number of nodes in the output layer is determined by the number of classes we have, also 2. Our network makes predictions using forward propagation, which is just a bunch of matrix multiplications and the application of the activation function(s). If $x$ is the 2-dimensional input to our network then we calculate our prediction $\hat{y}$ (also two-dimensional) as follows:$$ z_1 = x W_1 + b_1 $$$$ a_1 = \mbox{tanh}(z_1) $$$$ z_2 = a_1 W_2 + b_2$$$$ a_2 = \mbox{softmax}({z_2})$$$W_1, b_1, W_2, b_2$ are parameters of our network, which we need to learn from our training data. You can think of them as matrices transforming data between layers of the network. Looking at the matrix multiplications above we can figure out the dimensionality of these matrices. If we use 500 nodes for our hidden layer then $W_1 \in \mathbb{R}^{2\times500}$, $b_1 \in \mathbb{R}^{500}$, $W_2 \in \mathbb{R}^{500\times2}$, $b_2 \in \mathbb{R}^{2}$. A common choice with the softmax output is the **cross-entropy loss**. If we have $N$ training examples and $C$ classes then the loss for our prediction $\hat{y}$ with respect to the true labels $y$ is given by:$$\begin{aligned}L(y,\hat{y}) = - \frac{1}{N} \sum_{n \in N} \sum_{i \in C} y_{n,i} \log\hat{y}_{n,i}\end{aligned}$$
###Code
num_examples = len(X) # training set size
nn_input_dim = 2 # input layer dimensionality
nn_output_dim = 2 # output layer dimensionality
# Gradient descent parameters
epsilon = 0.01 # learning rate for gradient descent
def calculate_loss(model):
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Forward propagation to calculate our predictions
z1 = np.dot(X,W1) + b1
a1 = np.tanh(z1)
z2 = np.dot(a1,W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Calculating the loss
corect_logprobs = -np.log(probs[range(num_examples), y])
data_loss = np.sum(corect_logprobs)
return 1./num_examples * data_loss
# output (0 or 1)
def predict(model, x):
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Forward propagation
z1 = np.dot(x,W1) + b1
a1 = np.tanh(z1)
z2 = np.dot(a1,W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
return np.argmax(probs, axis=1)
###Output
_____no_output_____
###Markdown
This program solves the optimization problem by using AD:
###Code
# This function learns parameters for the neural network and returns the model.
# - nn_hdim: Number of nodes in the hidden layer
# - num_passes: Number of passes through the training data for gradient descent
# - print_loss: If True, print the loss every 1000 iterations
def build_model(nn_hdim, num_passes=20000, print_loss=False):
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
b1 = np.zeros((1, nn_hdim))
W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
b2 = np.zeros((1, nn_output_dim))
# This is what we return at the end
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
# Gradient descent. For each batch...
for i in range(0, num_passes):
# Forward propagation
z1 = np.dot(X,model['W1']) + model['b1']
a1 = np.tanh(z1)
z2 = np.dot(a1,model['W2']) + model['b2']
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
gradient_loss = grad(calculate_loss)
model_flat, unflatten_m = flatten(model)
grad_flat, unflatten_g = flatten(gradient_loss(model))
model_flat -= grad_flat * 0.05
model = unflatten_m(model_flat)
# Optionally print the loss.
# This is expensive because it uses the whole dataset, so we don't want to do it too often.
if print_loss and i % 1000 == 0:
print("Loss after iteration %i: %f" %(i, calculate_loss(model)))
return model
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, alpha=0.45)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=0.45)
# Build a model with a 3-dimensional hidden layer
model = build_model(3, print_loss=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 3")
###Output
Loss after iteration 0: 0.578567
Loss after iteration 1000: 0.288832
Loss after iteration 2000: 0.233156
Loss after iteration 3000: 0.182124
Loss after iteration 4000: 0.147160
Loss after iteration 5000: 0.118781
Loss after iteration 6000: 0.100822
Loss after iteration 7000: 0.089667
Loss after iteration 8000: 0.082638
Loss after iteration 9000: 0.078198
Loss after iteration 10000: 0.075316
Loss after iteration 11000: 0.073359
Loss after iteration 12000: 0.071963
Loss after iteration 13000: 0.070921
Loss after iteration 14000: 0.070112
Loss after iteration 15000: 0.069465
Loss after iteration 16000: 0.068932
Loss after iteration 17000: 0.068485
Loss after iteration 18000: 0.068102
Loss after iteration 19000: 0.067770
###Markdown
Let's now get a sense of how varying the hidden layer size affects the result.
###Code
plt.figure(figsize=(16, 32))
hidden_layer_dimensions = [1, 2, 3, 4, 5, 20, 50]
for i, nn_hdim in enumerate(hidden_layer_dimensions):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer size %d' % nn_hdim)
model = build_model(nn_hdim)
plot_decision_boundary(lambda x: predict(model, x))
plt.show()
###Output
_____no_output_____
###Markdown
Automatic Differentiation> The **backpropagation** algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. (Michael Nielsen in "Neural Networks and Deep Learning", http://neuralnetworksanddeeplearning.com/chap2.html).> **Backpropagation** is the key algorithm that makes training deep models computationally tractable. For modern neural networks, it can make training with gradient descent as much as ten million times faster, relative to a naive implementation. That’s the difference between a model taking a week to train and taking 200,000 years. (Christopher Olah, 2016)We have seen that in order to optimize our models we need to compute the derivative of the loss function with respect to all model paramaters. The computation of derivatives in computer models is addressed by four main methods: + manually working out derivatives and coding the result (as in the original paper describing backpropagation); + numerical differentiation (using finite difference approximations); + symbolic differentiation (using expression manipulation in software, such as Sympy); + and automatic differentiation (AD).**Automatic differentiation** (AD) works by systematically applying the **chain rule** of differential calculus at the elementary operator level.Let $ y = f(g(x)) $ our target function. In its basic form, the chain rule states:$$ \frac{\partial y}{\partial x} = \frac{\partial y}{\partial g} \frac{\partial g}{\partial x} $$or, if there are more than one variable $g_i$ in-between $y$ and $x$ (f.e. if $f$ is a two dimensional function such as $f(g_1(x), g_2(x))$), then:$$ \frac{\partial y}{\partial x} = \sum_i \frac{\partial y}{\partial g_i} \frac{\partial g_i}{\partial x} $$> See https://www.math.hmc.edu/calculus/tutorials/multichainrule/AD allows the accurate evaluation of derivatives at machine precision, with only a small constant factor of overhead.In its most basic description, AD relies on the fact that all numerical computationsare ultimately compositions of a finite set of elementary operations for which derivatives are known. For example, let's consider this function:$$y = f(x_1, x_2) = \ln(x_1) + x_1 x_2 − sin(x_2)$$The evaluation of this expression can be described by using intermediate variables $v_i$ such that:+ variables $v_{i−n} = x_i$, $i = 1,\dots, n$ are the input variables,+ variables $v_i$, $i = 1,\dots, l$ are the working variables, and+ variables $y_{m−i} = v_{l−i}$, $i = m − 1, \dots, 0$ are the output variables.Let's write the forward evaluation trace, based on elementary operations, at $(x_1, x_2) = (2,5)$:+ $v_{-1} = x_1 = 2$+ $v_0 = x_2 = 5$+ $v_1 = \ln v_{-1} = \ln 2 = 0.693$+ $v_2 = v_{-1} \times v_0 = 2 \times 5 = 10$+ $v_3 = sin(v_0) = sin(5) = -0.959$+ $v_4 = v_1+ v_2 = 0.693 + 10 = 10.693$+ $v_5 = v_4 - v_3 = 10.693 + 0.959 = 11.652$+ $y = v_5 = 11.652$It is interesting to note that this trace can be represented by a graph that can be automatically derived from $f(x_1, x_2)$. 
Forward-mode differentiation Given a function made up of several nested function calls, there are several ways to compute its derivative. For example, given $L(x) = f(g(h(x)))$, the chain rule says that its gradient is:

$$ \frac{\partial L}{\partial x} = \frac{\partial f}{\partial g} \times \frac{\partial g}{\partial h} \times \frac{\partial h}{\partial x}$$

If we evaluate this product from right-to-left: $\frac{\partial f}{\partial g} \times (\frac{\partial g}{\partial h} \times \frac{\partial h}{\partial x})$, the same order as the computations themselves were performed, this is called **forward-mode differentiation**. For computing the derivative of $f$ with respect to $x_1$ we start by associating with each intermediate variable $v_i$ a derivative: $\partial v_i = \frac{\partial v_i}{\partial x_1}$. Then we apply the chain rule to each elementary operation:

+ $\partial v_{-1} = \frac{\partial x_1}{\partial x_1} = 1$
+ $\partial v_0 = \frac{\partial x_2}{\partial x_1} = 0$
+ $\partial v_1 = \frac{\partial \ln(v_{-1})}{\partial v_{-1}} \partial v_{-1}= 1 / 2 \times 1 = 0.5$
+ $\partial v_2 = \frac{\partial (v_{-1} \times v_0)}{\partial v_{-1}} \partial v_{-1} + \frac{\partial (v_{-1} \times v_0)}{\partial v_{0}} \partial v_{0}= 5 \times 1 + 2 \times 0 = 5$
+ $\partial v_3 = \frac{\partial sin(v_0)}{\partial v_0} \partial v_0 = cos(5) \times 0$
+ $\partial v_4 = \partial v_1 + \partial v_2 = 0.5 + 5$
+ $\partial v_5 = \partial v_4 - \partial v_3 = 5.5 - 0$
+ $\partial y = \partial v_5 = 5.5$

At the end we have the derivative of $f$ with respect to $x_1$ at $(2,5)$. It is important to note that this computation can be locally performed at each node $v_i$ of the graph if we:
+ follow the right evaluation order,
+ store at each node its corresponding value from the forward evaluation trace, and
+ know how to compute its derivative with respect to its parent nodes.

For example, at node $v_2$: AD relies on the fact that all numerical computations are ultimately compositions of a finite set of elementary operations for which **derivatives are known**. We have seen **forward accumulation** AD. Forward accumulation is efficient for functions $f : \mathbb{R}^n \rightarrow \mathbb{R}^m$ with $n << m$ (few inputs, many outputs); for functions with $n >> m$, such as the loss of a neural network, a different technique is needed. Reverse-mode differentiation Luckily, we can also propagate derivatives backward from a given output. This is **reverse accumulation** AD. If we evaluate this product from left-to-right: $(\frac{\partial f}{\partial g} \times \frac{\partial g}{\partial h}) \times \frac{\partial h}{\partial x}$, this is called **reverse-mode differentiation**. Reverse pass starts at the end (i.e. $\frac{\partial y}{\partial y} = 1$) and propagates backward to all dependencies.
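A direct transcription of this forward sweep into code (a small sketch of mine, not part of the original notes; the `v_*`/`d_*` names mirror the trace above) reproduces $\partial y/\partial x_1 = 5.5$ at $(2, 5)$:

```python
import numpy as np

x1, x2 = 2.0, 5.0
# value / tangent pairs, seeded with dx1/dx1 = 1 and dx2/dx1 = 0
v_m1, d_m1 = x1, 1.0
v_0,  d_0  = x2, 0.0
v_1,  d_1  = np.log(v_m1), d_m1 / v_m1
v_2,  d_2  = v_m1 * v_0,   d_m1 * v_0 + v_m1 * d_0
v_3,  d_3  = np.sin(v_0),  np.cos(v_0) * d_0
v_4,  d_4  = v_1 + v_2,    d_1 + d_2
v_5,  d_5  = v_4 - v_3,    d_4 - d_3
print(v_5, d_5)   # ~11.652 and 5.5
```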
In our case, $y$ will correspond to the components of the loss function. Here we have:

+ $\partial y = 1 $
+ $\partial v_5 = 1 $
+ $\partial v_{4} = \partial v_{5} \frac{\partial v_5}{\partial v_{4}} = 1 \times 1 = 1$
+ $\partial v_{3} = \partial v_{5} \frac{\partial v_5}{\partial v_{3}} = \partial v_5 \times -1 = -1$
+ $\partial v_{1} = \partial v_{4} \frac{\partial v_4}{\partial v_{1}} = \partial v_4 \times 1 = 1$
+ $\partial v_{2} = \partial v_{4} \frac{\partial v_4}{\partial v_{2}} = \partial v_4 \times 1 = 1$
+ $\partial v_{0} = \partial v_{3} \frac{\partial v_3}{\partial v_{0}} = \partial v_3 \times \cos v_0 = -0.284$
+ $\partial v_{-1} = \partial v_{2} \frac{\partial v_2}{\partial v_{-1}} = \partial v_2 \times v_0 = 5 $
+ $\partial v_0 = \partial v_0 + \partial v_2 \frac{\partial v_2}{\partial v_{0}} = \partial v_0 + \partial v_2 \times v_{-1} = 1.716$
+ $\partial v_{-1} = \partial v_{-1} + \partial v_{1} \frac{\partial v_1}{\partial v_{-1}} = \partial v_{-1} + \partial v_{1}/v_{-1} = 5.5$
+ $\partial x_{2} = \partial v_{0} = 1.716$
+ $\partial x_{1} = \partial v_{-1} = 5.5$

This is a two-stage process. In the first stage the original function code is run forward, populating $v_i$ variables. In the second stage, derivatives are calculated by propagating in reverse, from the outputs to the inputs. The most important property of reverse accumulation AD is that it is cheaper than forward accumulation AD for functions with a high number of input variables. In our case, $f : \mathbb{R}^n \rightarrow \mathbb{R}$, only one application of the reverse mode is sufficient to compute the full gradient of the function $\nabla f = \big( \frac{\partial y}{\partial x_1}, \dots ,\frac{\partial y}{\partial x_n} \big)$. Autograd is a Python module (with only one function) that implements automatic differentiation.
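The reverse sweep can be transcribed in the same way (again a sketch of mine; the adjoints are named `a_*`), recovering both partial derivatives in a single backward pass:

```python
import numpy as np

x1, x2 = 2.0, 5.0
# forward sweep: populate the trace
v_m1, v_0 = x1, x2
v_1 = np.log(v_m1)
v_2 = v_m1 * v_0
v_3 = np.sin(v_0)
v_4 = v_1 + v_2
v_5 = v_4 - v_3
# reverse sweep: propagate adjoints from the output y = v_5
a_5  = 1.0
a_4  = a_5 * 1.0           # dv5/dv4 = 1
a_3  = a_5 * -1.0          # dv5/dv3 = -1
a_1  = a_4 * 1.0           # dv4/dv1 = 1
a_2  = a_4 * 1.0           # dv4/dv2 = 1
a_0  = a_3 * np.cos(v_0)   # dv3/dv0 = cos(v0)
a_m1 = a_2 * v_0           # dv2/dv_{-1} = v_0
a_0  += a_2 * v_m1         # dv2/dv0 = v_{-1}
a_m1 += a_1 / v_m1         # dv1/dv_{-1} = 1/v_{-1}
print(a_m1, a_0)           # ~5.5 and ~1.716
```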
###Code
!pip install autograd
###Output
_____no_output_____
###Markdown
Autograd can automatically differentiate Python and Numpy code:

+ It can handle most of Python’s features, including loops, if statements, recursion and closures.
+ Autograd allows you to compute gradients of many types of data structures (any nested combination of lists, tuples, arrays, or dicts).
+ It can also compute higher-order derivatives.
+ Uses reverse-mode differentiation (backpropagation) so it can efficiently take gradients of scalar-valued functions with respect to array-valued or vector-valued arguments.
+ You can easily implement your custom gradients (good for speed, numerical stability, non-compliant code, etc).
###Code
import autograd.numpy as np
from autograd import grad
x = np.array([2, 5], dtype=float)
def test(x):
return np.log(x[0]) + x[0]*x[1] - np.sin(x[1])
grad_test = grad(test)
print "({:.2f},{:.2f})".format(grad_test(x)[0],grad_test(x)[1])
###Output
_____no_output_____
###Markdown
Then, logistic regression model fitting$$ f(x) = \frac{1}{1 + \exp^{-(w_0 + w_1 x)}} $$can be implemented in this way:
###Code
import autograd.numpy as np
from autograd import grad
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def logistic_predictions(weights, inputs):
return sigmoid(np.dot(inputs, weights))
def training_loss(weights, inputs, targets):
preds = logistic_predictions(weights, inputs)
label_probabilities = preds * targets + (1 - preds) * (1 - targets)
return -np.sum(np.log(label_probabilities))
def optimize(inputs, targets, training_loss):
# Optimize weights using gradient descent.
gradient_loss = grad(training_loss)
weights = np.zeros(inputs.shape[1])
print "Initial loss:", training_loss(weights, inputs, targets)
for i in xrange(100):
weights -= gradient_loss(weights, inputs, targets) * 0.01
print "Final loss:", training_loss(weights, inputs, targets)
return weights
# Build a toy dataset.
inputs = np.array([[0.52, 1.12, 0.77],
[0.88, -1.08, 0.15],
[0.52, 0.06, -1.30],
[0.74, -2.49, 1.39]])
targets = np.array([True, True, False, True])
weights = optimize(inputs, targets, training_loss)
print "Weights:", weights
###Output
_____no_output_____
###Markdown
Any complex function that can be decomposed in a set of elementary functions can be derived in an automatic way, at machine precision, by this algorithm!We no longer need to code complex derivatives to apply SGD! Exercise + Make the necessary changes to the code below in order to compute a max-margin solution for a linear separation problem by using SGD.
###Code
%reset
import numpy as np
#Example dataset
N_samples_per_class = 100
d_dimensions = 2
x = np.vstack((np.random.randn(N_samples_per_class, d_dimensions),
np.random.randn(N_samples_per_class, d_dimensions)
+np.array([5,5])))
y = np.concatenate([-1.0*np.ones(N_samples_per_class),
1.*np.ones(N_samples_per_class)])
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
import autograd.numpy as np
from autograd import grad
def SVM_predictions(w, inputs):
return np.dot(w[0,:-1],inputs.T)+w[0,-1]
def SVM_training_loss(weights, inputs, targets):
pred = SVM_predictions(weights, inputs)
return np.sum(np.maximum(0,1-targets*pred))/inputs.shape[0]
def optimize(inputs, targets, training_loss):
gradient_loss = grad(training_loss)
weights = np.zeros((1,inputs.shape[1]+1))
print "Initial loss:", training_loss(weights, inputs, targets)
for i in xrange(10000):
weights -= gradient_loss(weights, inputs, targets) * 0.05
if i%1000 == 0:
print " Loss:", training_loss(weights, inputs, targets)
print "Final loss:", training_loss(weights, inputs, targets)
return weights
weights = optimize(x, y, SVM_training_loss)
print "Weights", weights
delta = 0.1
xx = np.arange(-4.0, 10.0, delta)
yy = np.arange(-4.0, 10.0, delta)
XX, YY = np.meshgrid(xx, yy)
Xf = XX.flatten()
Yf = YY.flatten()
sz=XX.shape
test_data = np.concatenate([Xf[:,np.newaxis],Yf[:,np.newaxis]],axis=1)
Z = SVM_predictions(weights,test_data)
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
Z = np.reshape(Z,(xx.shape[0],xx.shape[0]))
plt.contour(XX,YY,Z,[0])
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
###Output
_____no_output_____
###Markdown
Exercise + Make the necessary changes to the code below in order to compute a new sample that is optimal for the classifier you have learned in the previous exercise.
###Code
%reset
import numpy as np
#Example dataset
N_samples_per_class = 100
d_dimensions = 2
x = np.vstack((np.random.randn(N_samples_per_class, d_dimensions),
np.random.randn(N_samples_per_class, d_dimensions)
+np.array([2,2])))
y = np.concatenate([-1.0*np.ones(N_samples_per_class),
1.*np.ones(N_samples_per_class)])
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
import autograd.numpy as np
from autograd import grad
def SVM_predictions(w, inputs):
return np.dot(w[0,:-1],inputs.T)+w[0,-1]
def SVM_training_loss(weights, inputs, targets):
pred = SVM_predictions(weights, inputs)
return np.sum(np.maximum(0,1-targets*pred))/inputs.shape[0]
def optimize(inputs, targets, training_loss):
gradient_loss = grad(training_loss)
weights = np.zeros((1,inputs.shape[1]+1))
print "Initial loss:", training_loss(weights, inputs, targets)
for i in xrange(10000):
weights -= gradient_loss(weights, inputs, targets) * 0.05
if i%1000 == 0:
print " Loss:", training_loss(weights, inputs, targets)
print "Final loss:", training_loss(weights, inputs, targets)
return weights
weights = optimize(x, y, SVM_training_loss)
print "Weights", weights
delta = 0.1
xx = np.arange(-4.0, 6.0, delta)
yy = np.arange(-4.0, 6.0, delta)
XX, YY = np.meshgrid(xx, yy)
Xf = XX.flatten()
Yf = YY.flatten()
sz=XX.shape
test_data = np.concatenate([Xf[:,np.newaxis],Yf[:,np.newaxis]],axis=1)
Z = SVM_predictions(weights,test_data)
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
Z = np.reshape(Z,(xx.shape[0],xx.shape[0]))
plt.contour(XX,YY,Z,[0])
idx = y==1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25)
idx = y==-1
plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink')
# your code here
###Output
_____no_output_____
###Markdown
Neural Network
###Code
%reset
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
import sklearn.linear_model
import matplotlib
import autograd.numpy as np
from autograd import grad
from autograd.misc.flatten import flatten
# Display plots inline and change default figure size
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (6.0, 4.0)
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.20)
plt.scatter(X[:,0], X[:,1], s=40, c=y, alpha=0.45)
###Output
_____no_output_____
###Markdown
Let's now build a 3-layer neural network with one input layer, one hidden layer, and one output layer. The number of nodes in the input layer is determined by the dimensionality of our data, 2. Similarly, the number of nodes in the output layer is determined by the number of classes we have, also 2. Our network makes predictions using forward propagation, which is just a bunch of matrix multiplications and the application of the activation function(s). If $x$ is the 2-dimensional input to our network then we calculate our prediction $\hat{y}$ (also two-dimensional) as follows:$$ z_1 = x W_1 + b_1 $$$$ a_1 = \mbox{tanh}(z_1) $$$$ z_2 = a_1 W_2 + b_2$$$$ a_2 = \mbox{softmax}({z_2})$$$W_1, b_1, W_2, b_2$ are parameters of our network, which we need to learn from our training data. You can think of them as matrices transforming data between layers of the network. Looking at the matrix multiplications above we can figure out the dimensionality of these matrices. If we use 500 nodes for our hidden layer then $W_1 \in \mathbb{R}^{2\times500}$, $b_1 \in \mathbb{R}^{500}$, $W_2 \in \mathbb{R}^{500\times2}$, $b_2 \in \mathbb{R}^{2}$. A common choice with the softmax output is the cross-entropy loss. If we have $N$ training examples and $C$ classes then the loss for our prediction $\hat{y}$ with respect to the true labels $y$ is given by:$$\begin{aligned}L(y,\hat{y}) = - \frac{1}{N} \sum_{n \in N} \sum_{i \in C} y_{n,i} \log\hat{y}_{n,i}\end{aligned}$$
###Code
num_examples = len(X) # training set size
nn_input_dim = 2 # input layer dimensionality
nn_output_dim = 2 # output layer dimensionality
# Gradient descent parameters
epsilon = 0.01 # learning rate for gradient descent
reg_lambda = 0.01 # regularization strength
def calculate_loss(model):
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Forward propagation to calculate our predictions
z1 = np.dot(X,W1) + b1
a1 = np.tanh(z1)
z2 = np.dot(a1,W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Calculating the loss
corect_logprobs = -np.log(probs[range(num_examples), y])
data_loss = np.sum(corect_logprobs)
# Add regulatization term to loss (optional)
data_loss += reg_lambda/2 * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
return 1./num_examples * data_loss
# output (0 or 1)
def predict(model, x):
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Forward propagation
z1 = np.dot(x,W1) + b1
a1 = np.tanh(z1)
z2 = np.dot(a1,W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
return np.argmax(probs, axis=1)
###Output
_____no_output_____
###Markdown
This is a version that solves the optimization problem by using the backpropagation algorithm (hand-coded derivatives):
###Code
# This function learns parameters for the neural network and returns the model.
# - nn_hdim: Number of nodes in the hidden layer
# - num_passes: Number of passes through the training data for gradient descent
# - print_loss: If True, print the loss every 1000 iterations
def build_model(nn_hdim, num_passes=20000, print_loss=False):
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
b1 = np.zeros((1, nn_hdim))
W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
b2 = np.zeros((1, nn_output_dim))
# This is what we return at the end
model = {}
# Gradient descent. For each batch...
    for i in range(0, num_passes):
# Forward propagation
z1 = np.dot(X,W1) + b1
a1 = np.tanh(z1)
z2 = np.dot(a1,W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Backpropagation
delta3 = probs
delta3[range(num_examples), y] -= 1
dW2 = (a1.T).dot(delta3)
db2 = np.sum(delta3, axis=0, keepdims=True)
delta2 = delta3.dot(W2.T) * (1 - np.power(a1, 2))
dW1 = np.dot(X.T, delta2)
db1 = np.sum(delta2, axis=0)
# Add regularization terms (b1 and b2 don't have regularization terms)
dW2 += reg_lambda * W2
dW1 += reg_lambda * W1
# Gradient descent parameter update
W1 += -epsilon * dW1
b1 += -epsilon * db1
W2 += -epsilon * dW2
b2 += -epsilon * db2
# Assign new parameters to the model
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
# Optionally print the loss.
# This is expensive because it uses the whole dataset, so we don't want to do it too often.
if print_loss and i % 1000 == 0:
print "Loss after iteration %i: %f" %(i, calculate_loss(model))
return model
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, alpha=0.45)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=0.45)
# Build a model with a 3-dimensional hidden layer
model = build_model(3, print_loss=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 3")
###Output
_____no_output_____
###Markdown
The next version solves the optimization problem by using AD:
###Code
# This function learns parameters for the neural network and returns the model.
# - nn_hdim: Number of nodes in the hidden layer
# - num_passes: Number of passes through the training data for gradient descent
# - print_loss: If True, print the loss every 1000 iterations
def build_model(nn_hdim, num_passes=20000, print_loss=False):
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
b1 = np.zeros((1, nn_hdim))
W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
b2 = np.zeros((1, nn_output_dim))
# This is what we return at the end
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
# Gradient descent. For each batch...
    for i in range(0, num_passes):
# Forward propagation
z1 = np.dot(X,model['W1']) + model['b1']
a1 = np.tanh(z1)
z2 = np.dot(a1,model['W2']) + model['b2']
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
gradient_loss = grad(calculate_loss)
model_flat, unflatten_m = flatten(model)
grad_flat, unflatten_g = flatten(gradient_loss(model))
model_flat -= grad_flat * 0.05
model = unflatten_m(model_flat)
# Optionally print the loss.
# This is expensive because it uses the whole dataset, so we don't want to do it too often.
if print_loss and i % 1000 == 0:
print "Loss after iteration %i: %f" %(i, calculate_loss(model))
return model
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, alpha=0.45)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=0.45)
# Build a model with a 3-dimensional hidden layer
model = build_model(3, print_loss=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 3")
###Output
_____no_output_____
###Markdown
Let's now get a sense of how varying the hidden layer size affects the result.
###Code
plt.figure(figsize=(16, 32))
hidden_layer_dimensions = [1, 2, 3, 4, 5, 20, 50]
for i, nn_hdim in enumerate(hidden_layer_dimensions):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer size %d' % nn_hdim)
model = build_model(nn_hdim)
plot_decision_boundary(lambda x: predict(model, x))
plt.show()
###Output
_____no_output_____ |
week3/MySQL_Exercise_04_Summarizing_Your_Data.ipynb | ###Markdown
Copyright Jana Schaich Borg/Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) MySQL Exercise 4: Summarizing your DataLast week you practiced retrieving and formatting selected subsets of raw data from individual tables in a database. In this lesson we are going to learn how to use SQL to run calculations that summarize your data without having to output all the raw rows or entries. These calculations will serve as building blocks for the queries that will address our business questions about how to improve Dognition test completion rates.These are the five most common aggregate functions used to summarize information stored in tables:You will use COUNT and SUM very frequently.COUNT is the only aggregate function that can work on any type of variable. The other four aggregate functions are only appropriate for numerical data.All aggregate functions require you to enter either a column name or a "\*" in the parentheses after the function word. Let's begin by exploring the COUNT function. 1. The COUNT function**First, load the sql library and the Dognition database, and set dognition as the default database.**
###Code
%load_ext sql
%sql mysql://studentuser:studentpw@mysqlserver/dognitiondb
%sql USE dognitiondb
%config SqlMagic.displaylimit=25
###Output
0 rows affected.
###Markdown
The Jupyter interface conveniently tells us how many rows are in our query output, so we can compare the results of the COUNT function to the results of our SELECT function. If you run:```mySQLSELECT breedFROM dogs ```Jupyter tells that 35050 rows are "affected", meaning there are 35050 rows in the output of the query (although, of course, we have limited the display to only 1000 rows at a time). **Now try running:**```mySQLSELECT COUNT(breed)FROM dogs ```
###Code
%%sql
SELECT COUNT(breed)
FROM dogs
###Output
1 rows affected.
###Markdown
COUNT is reporting how many rows are in the breed column in total. COUNT should give you the same output as Jupyter's output without displaying the actual rows of data that are being aggregated. You can use DISTINCT (which you learned about in MySQL Exercise 3) with COUNT to count all the unique values in a column, but it must be placed inside the parentheses, immediately before the column that is being counted. For example, to count the number of distinct breed names contained within all the entries in the breed column you could query: ```SQLSELECT COUNT(DISTINCT breed) FROM dogs```What if you wanted to know how many indivdual dogs successfully completed at least one test?Since every row in the complete_tests table represents a completed test and we learned earlier that there are no NULL values in the created_at column of the complete_tests table, any non-null Dog_Guid in the complete_tests table will have completed at least one test. **When a column is included in the parentheses, null values are automatically ignored. Therefore, you could use:**```SQLSELECT COUNT(DISTINCT Dog_Guid) FROM complete_tests```**Question 1: Try combining this query with a WHERE clause to find how many individual dogs completed tests after March 1, 2014 (the answer should be 13,289):**
###Code
%%sql
DESCRIBE complete_tests
%%sql
SELECT COUNT(DISTINCT Dog_Guid)
FROM complete_tests
WHERE created_at >= '2014-03-01'
###Output
1 rows affected.
###Markdown
You can use the "\*" in the parentheses of a COUNT function to count how many rows are in the entire table (or subtable). There are two fundamental difference between COUNT(\*) and COUNT(column_name), though. The first difference is that you cannot use DISTINCT with COUNT(\*). **Question 2: To observe the second difference yourself first, count the number of rows in the dogs table using COUNT(\*):**
###Code
%%sql
SELECT COUNT(*)
FROM dogs
###Output
1 rows affected.
###Markdown
**Question 3: Now count the number of rows in the exclude column of the dogs table:**
###Code
%%sql
SELECT COUNT(exclude)
FROM dogs
###Output
1 rows affected.
###Markdown
The output of the second query should return a much smaller number than the output of the first query. That's because:> When a column is included in a count function, null values are ignored in the count. When an asterisk is included in a count function, nulls are included in the count.This will be both useful and important to remember in future queries where you might want to use COUNT(\*) to count items in multiple groups at once. **Question 4: How many distinct dogs have an exclude flag in the dogs table (value will be "1")? (the answer should be 853)**
###Code
%%sql
SELECT COUNT(DISTINCT dog_guid)
FROM dogs
WHERE exclude=1
###Output
1 rows affected.
###Markdown
2. The SUM FunctionThe fact that the output of:```mySQLSELECT COUNT(exclude) FROM dogs```was so much lower than:```mySQLSELECT COUNT(*)FROM dogs```suggests that there must be many NULL values in the exclude column. Conveniently, we can combine the SUM function with ISNULL to count exactly how many NULL values there are. Look up "ISNULL" at this link to MySQL functions I included in an earlier lesson: http://www.w3resource.com/mysql/mysql-functions-and-operators.phpYou will see that ISNULL is a logical function that returns a 1 for every row that has a NULL value in the specified column, and a 0 for everything else. If we sum up the number of 1s outputted by ISNULL(exclude), then, we should get the total number of NULL values in the column. Here's what that query would look like:```mySQLSELECT SUM(ISNULL(exclude))FROM dogs```It might be tempting to treat SQL like a calculator and leave out the SELECT statement, but you will quickly see that doesn't work. >*Every SQL query that extracts data from a database MUST contain a SELECT statement.* **Try counting the number of NULL values in the exclude column:**
###Code
%%sql
SELECT SUM(ISNULL(exclude))
FROM dogs
###Output
1 rows affected.
###Markdown
The output should return a value of 34,025. When you add that number to the 1025 entries that have an exclude flag, you get a total of 35,050, which is the number of rows reported by SELECT COUNT(\*) from dogs. 3. The AVG, MIN, and MAX FunctionsAVG, MIN, and MAX all work very similarly to SUM.During the Dognition test, customers were asked the question: "How surprising were [your dog’s name]’s choices?” after completing a test. Users could choose any number between 1 (not surprising) to 9 (very surprising). We could retrieve the average, minimum, and maximum rating customers gave to this question after completing the "Eye Contact Game" with the following query:```mySQLSELECT test_name, AVG(rating) AS AVG_Rating, MIN(rating) AS MIN_Rating, MAX(rating) AS MAX_RatingFROM reviewsWHERE test_name="Eye Contact Game";```This would give us an output with 4 columns. The last three columns would have titles reflecting the names inputted after the AS clauses. Recall that if you want to title a column with a string of text that contains a space, that string will need to be enclosed in quotation marks after the AS clause in your query. **Question 5: What is the average, minimum, and maximum ratings given to "Memory versus Pointing" game? (Your answer should be 3.5584, 0, and 9, respectively)**
###Code
%%sql
SELECT test_name,
AVG(rating) AS AVG_Rating,
MIN(rating) AS MIN_Rating,
MAX(rating) AS MAX_Rating
FROM reviews
WHERE test_name="Memory versus Pointing";
###Output
1 rows affected.
###Markdown
What if you wanted the average rating for each of the 40 tests in the Reviews table? One way to do that with the tools you know already is to write 40 separate queries like the ones you wrote above for each test, and then copy or transcribe the results into a separate table in another program like Excel to assemble all the results in one place. That would be a very tedious and time-consuming exercise. Fortunately, there is a very simple way to produce the results you want within one query. That's what we will learn how to do in MySQL Exercise 5. However, it is important that you feel comfortable with the syntax we have learned thus far before we start taking advantage of that functionality. Practice is the best way to become comfortable! Practice incorporating aggregate functions with everything else you've learned so far in your own queries.**Question 6: How would you query how much time it took to complete each test provided in the exam_answers table, in minutes? Title the column that represents this data "Duration."** Note that the exam_answers table has over 2 million rows, so if you don't limit your output, it will take longer than usual to run this query. (HINT: use the TIMESTAMPDIFF function described at: http://www.w3resource.com/mysql/date-and-time-functions/date-and-time-functions.php. It might seem unkind of me to keep suggesting you look up and use new functions I haven't demonstrated for you, but I really want you to become confident that you know how to look up and use new functions when you need them! It will give you a very competative edge in the business world.)
###Code
%%sql
SELECT TIMESTAMPDIFF(MINUTE, start_time, end_time) AS Duration
FROM exam_answers
LIMIT 0 , 10;
%%sql
DESCRIBE exam_answers
###Output
8 rows affected.
###Markdown
**Question 7: Include a column for Dog_Guid, start_time, and end_time in your query, and examine the output. Do you notice anything strange?**
###Code
%%sql
SELECT TIMESTAMPDIFF(MINUTE, start_time, end_time) AS Duration, dog_guid, start_time, end_time
FROM exam_answers
LIMIT 10000 , 10;
###Output
10 rows affected.
###Markdown
If you explore your output you will find that some of your calculated durations appear to be "0." In some cases, you will see many entries from the same dog_guid with the same start time and end time. That should be impossible. These types of entries probably represent tests run by the Dognition team rather than real customer data. In other cases, though, a "0" is entered in the Duration column even though the start_time and end_time are different. This is because we instructed the function to output the time difference in minutes; unless you change that, it will output "0" for any time difference of less than one whole minute. If you change your function to output the time difference in seconds, most of these rows will show a non-zero duration.

**Question 8: What is the average amount of time it took customers to complete all of the tests in the exam_answers table, if you do not exclude any data (the answer will be approximately 587 minutes)?**
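Before tackling Question 8, here is a sketch of the seconds-level check described above; it is simply the Question 7 query with MINUTE swapped for SECOND:

```mySQL
SELECT TIMESTAMPDIFF(SECOND, start_time, end_time) AS Duration_seconds,
       dog_guid, start_time, end_time
FROM exam_answers
LIMIT 10000, 10;
```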
###Code
%%sql
SELECT AVG(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS Duration_avg
FROM exam_answers
LIMIT 0 , 10;
###Output
1 rows affected.
###Markdown
**Question 9: What is the average amount of time it took customers to complete the "Treat Warm-Up" test, according to the exam_answers table (about 165 minutes, if no data is excluded)?**
###Code
%%sql
SELECT AVG(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS Duration_avg
FROM exam_answers
WHERE test_name='Treat Warm-Up'
LIMIT 0 , 10;
###Output
1 rows affected.
###Markdown
**Question 10: How many possible test names are there in the exam_answers table?**
###Code
%%sql
SELECT COUNT(DISTINCT test_name) AS test_name_num
FROM exam_answers
LIMIT 0 , 10;
%%sql
SELECT COUNT(DISTINCT test_name) AS test_name_num
FROM complete_tests
LIMIT 0 , 10;
###Output
1 rows affected.
###Markdown
You should have discovered that the exam_answers table has many more test names than the complete_tests table. It turns out that exam_answers has information about experimental tests that Dognition has not yet made available to its customers.

**Question 11: What are the minimum and maximum values in the Duration column of your query that included the data from the entire table?**
###Code
%%sql
SELECT
MIN(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS 'Duration_min',
MAX(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS 'Duration_max'
FROM exam_answers
LIMIT 0 , 10;
###Output
1 rows affected.
###Markdown
The minimum Duration value is *negative*! The end_times entered in rows with negative Duration values are earlier than the start_times. Unless Dognition has created a time machine, that's impossible and these entries must be mistakes. **Question 12: How many of these negative Duration entries are there? (the answer should be 620)**
###Code
%%sql
SELECT COUNT(start_time)
FROM exam_answers
WHERE TIMESTAMPDIFF(MINUTE, start_time, end_time) < 0
LIMIT 0 , 10;
###Output
1 rows affected.
###Markdown
**Question 13: How would you query all the columns of all the rows that have negative durations so that you could examine whether they share any features that might give you clues about what caused the entry mistake?**
###Code
%%sql
SELECT *
FROM exam_answers
WHERE TIMESTAMPDIFF(MINUTE, start_time, end_time) < 0
LIMIT 0 , 10;
###Output
10 rows affected.
###Markdown
**Question 14: What is the average amount of time it took customers to complete all of the tests in the exam_answers table when 0 and the negative durations are excluded from your calculation (you should get 11233 minutes)?**
###Code
%%sql
SELECT AVG(TIMESTAMPDIFF(MINUTE, start_time, end_time))
FROM exam_answers
WHERE TIMESTAMPDIFF(MINUTE, start_time, end_time) > 0
LIMIT 0 , 10;
###Output
1 rows affected.
|
tp2/Ejercicio 4.ipynb | ###Markdown
Ejercicio 4

To run our SimPy simulation we define our Banco scenario, to which we assign a resource of capacity 1, meaning that only one person at a time can be using our ATM. This resource corresponds to a SimPy Resource element, to which we give a capacity of 1. To generate the arrival of people we apply an exponential distribution, and for each arrival we create a new Persona that tries to use the resource (our ATM). The time spent using the resource is based on a uniform distribution, with probabilities depending on the type of person that arrives. In addition, we consider a degree of tolerance, which is also based on a uniform distribution. Finally, our scenario has two attributes: one stores the maximum number of people seen in the queue, and the other records the longest time a person had to wait to use the ATM.
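(One note on the arrival process in the code below: random.expovariate(rate) draws exponential inter-arrival times with mean 1/rate, so the rates returned by frecuenciaArribo, namely 1/260, 1/120 and 1/360, correspond to one arrival roughly every 260, 120 and 360 seconds on average, depending on the time of day.)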
###Code
import simpy
import numpy as np
import random
class Banco:
maxPersonasEnFila = 0
maxTiempoEnFila = 0
def __init__(self):
self.maxPersonasEnFila = 0
self.maxTiempoEnFila = 0
def banco(self,env):
atm = simpy.Resource(env,capacity=1)
cantMaxPersonasEnFila = 0
print("Esperando que abra el banco...")
yield env.timeout(10*3600)
print("----------------Abrimos!----------------------")
personasList = list()
i = 0;
while True:
#Calculo la llegada de las personas
t = random.expovariate(frecuenciaArribo(env))
yield env.timeout(t)
persona = Persona()
i+=1;
persona.nombre = str(i);
print(getTimeFromSeconds(env.now), "Llega al cajero ", persona.tipo + "-" + persona.nombre)
env.process(self.cajero(env, atm, persona))
def cajero(self,env,atm,persona):
with atm.request() as req:
print(getTimeFromSeconds(env.now),"--Cantidad de personas en fila: ", len(atm.queue))
if (self.maxPersonasEnFila < len(atm.queue)):
self.maxPersonasEnFila = len(atm.queue)
#Seteamos el tiempo de espera de la persona.
persona.tiempoEspera = env.now
yield req
tiempoEspera = (env.now - persona.tiempoEspera);
if (tiempoEspera > 0):
print("---Usando finalmente ATM--: ", persona.nombre)
print("Tiempo de espera en fila : ", getMinutesFromSeconds(tiempoEspera))
if (self.maxTiempoEnFila < tiempoEspera):
self.maxTiempoEnFila = tiempoEspera
tiempoIngreso = env.now
yield env.timeout(persona.tiempoCajero*60)
print(getTimeFromSeconds(env.now), "-- Sale del cajero: ", persona.tipo + "-" + persona.nombre)
print("---Tiempo de uso --: ", getMinutesFromSeconds(env.now - tiempoIngreso))
def getTimeFromSeconds(seconds):
m, s = divmod(seconds, 60)
h, m = divmod(m, 60)
return "%d:%02d:%02d" % (h, m, s)
def getMinutesFromSeconds(seconds):
m, s = divmod(seconds, 60)
return "%02d.%02d" % (m, s) + " Minutos"
def frecuenciaArribo(env):
if (env.now <= 12*3600):
return 1.0/260.0
if (env.now <= 15*3600):
return 1.0/120.0
return 1.0/360.0
def getMaxTiempo(self):
return (getMinutesFromSeconds(self.maxTiempoEnFila))
class Persona:
tipo = ""
tiempoCajero = 1
nombre = ""
tiempoEspera = 1
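    # Note: this three-argument constructor is overridden by the no-argument __init__ defined
    # below (the later definition wins in Python), so Persona() is always built with a randomly
    # drawn type and service time.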
def __init__(self, tipo,tiempoCajero, tiempoEspera):
self.tipo = tipo
self.tiempoCajero = tiempoCajero
self.tiempoEspera = tiempoEspera
def __init__(self):
val = random.uniform(0,1);
if (val <= 0.1):
#print("--Persona tipo 1--")
self.tipo = "Tipo 1" #Tolerancia +-3
if (random.uniform(0,1) >= .5):
self.tiempoCajero = 4.0 + (np.random.uniform(0,3));
else:
self.tiempoCajero = 4.0 + (np.random.uniform(0,3))*-1;
elif (val <= .8):
#print("--Persona tipo 2--")
self.tipo = "Tipo 2" #Tolerancia +-1
if (random.uniform(0,1) >= .5):
self.tiempoCajero = 2.0 + np.random.uniform(0,1);
else:
self.tiempoCajero = 2.0 + np.random.uniform(0,1)*-1;
#print("--Persona tipo 3--")
else:
self.tipo = "Tipo 3" #Tolerancia 3+-2
if (random.uniform(0,1) >= .5):
self.tiempoCajero = 3.0 + np.random.uniform(0,2);
else:
self.tiempoCajero = 3.0 + np.random.uniform(0,2)*-1;
env = simpy.Environment()
banco = Banco()
env.process(banco.banco(env))
env.run(until=19*3600)
print("--------------------Banco Cerrado :( ----------------------------")
print('\r\n')
print(" -----------------------RESULTADOS-------------------------------")
print(" CANTIDAD MAXIMA DE PERSONAS EN FILA: ",banco.maxPersonasEnFila)
print(" TIEMPO DE MAXIMA DE PERSONAS: ",banco.getMaxTiempo())
###Output
Esperando que abra el banco...
----------------Abrimos!----------------------
('10:07:55', 'Llega al cajero ', 'Tipo 2-1')
('10:07:55', '--Cantidad de personas en fila: ', 0)
('10:09:24', '-- Sale del cajero: ', 'Tipo 2-1')
('---Tiempo de uso --: ', '01.29 Minutos')
('10:14:12', 'Llega al cajero ', 'Tipo 2-2')
('10:14:12', '--Cantidad de personas en fila: ', 0)
('10:15:19', '-- Sale del cajero: ', 'Tipo 2-2')
('---Tiempo de uso --: ', '01.07 Minutos')
('10:19:22', 'Llega al cajero ', 'Tipo 2-3')
('10:19:22', '--Cantidad de personas en fila: ', 0)
('10:20:16', 'Llega al cajero ', 'Tipo 2-4')
('10:20:16', '--Cantidad de personas en fila: ', 1)
('10:20:59', 'Llega al cajero ', 'Tipo 2-5')
('10:20:59', '--Cantidad de personas en fila: ', 2)
('10:22:06', '-- Sale del cajero: ', 'Tipo 2-3')
('---Tiempo de uso --: ', '02.44 Minutos')
('---Usando finalmente ATM--: ', '4')
('Tiempo de espera en fila : ', '01.50 Minutos')
('10:23:03', 'Llega al cajero ', 'Tipo 2-6')
('10:23:03', '--Cantidad de personas en fila: ', 2)
('10:23:51', '-- Sale del cajero: ', 'Tipo 2-4')
('---Tiempo de uso --: ', '01.44 Minutos')
('---Usando finalmente ATM--: ', '5')
('Tiempo de espera en fila : ', '02.51 Minutos')
('10:25:53', 'Llega al cajero ', 'Tipo 2-7')
('10:25:53', '--Cantidad de personas en fila: ', 2)
('10:26:25', '-- Sale del cajero: ', 'Tipo 2-5')
('---Tiempo de uso --: ', '02.34 Minutos')
('---Usando finalmente ATM--: ', '6')
('Tiempo de espera en fila : ', '03.21 Minutos')
('10:28:31', 'Llega al cajero ', 'Tipo 2-8')
('10:28:31', '--Cantidad de personas en fila: ', 2)
('10:28:31', 'Llega al cajero ', 'Tipo 3-9')
('10:28:31', '--Cantidad de personas en fila: ', 3)
('10:28:33', '-- Sale del cajero: ', 'Tipo 2-6')
('---Tiempo de uso --: ', '02.07 Minutos')
('---Usando finalmente ATM--: ', '7')
('Tiempo de espera en fila : ', '02.40 Minutos')
('10:29:53', 'Llega al cajero ', 'Tipo 2-10')
('10:29:53', '--Cantidad de personas en fila: ', 3)
('10:30:13', 'Llega al cajero ', 'Tipo 2-11')
('10:30:13', '--Cantidad de personas en fila: ', 4)
('10:31:00', '-- Sale del cajero: ', 'Tipo 2-7')
('---Tiempo de uso --: ', '02.27 Minutos')
('---Usando finalmente ATM--: ', '8')
('Tiempo de espera en fila : ', '02.29 Minutos')
('10:32:46', '-- Sale del cajero: ', 'Tipo 2-8')
('---Tiempo de uso --: ', '01.46 Minutos')
('---Usando finalmente ATM--: ', '9')
('Tiempo de espera en fila : ', '04.14 Minutos')
('10:35:53', '-- Sale del cajero: ', 'Tipo 3-9')
('---Tiempo de uso --: ', '03.07 Minutos')
('---Usando finalmente ATM--: ', '10')
('Tiempo de espera en fila : ', '05.59 Minutos')
('10:36:10', 'Llega al cajero ', 'Tipo 2-12')
('10:36:10', '--Cantidad de personas en fila: ', 2)
('10:38:39', '-- Sale del cajero: ', 'Tipo 2-10')
('---Tiempo de uso --: ', '02.45 Minutos')
('---Usando finalmente ATM--: ', '11')
('Tiempo de espera en fila : ', '08.26 Minutos')
('10:40:54', 'Llega al cajero ', 'Tipo 1-13')
('10:40:54', '--Cantidad de personas en fila: ', 2)
('10:41:03', '-- Sale del cajero: ', 'Tipo 2-11')
('---Tiempo de uso --: ', '02.24 Minutos')
('---Usando finalmente ATM--: ', '12')
('Tiempo de espera en fila : ', '04.53 Minutos')
('10:42:59', 'Llega al cajero ', 'Tipo 2-14')
('10:42:59', '--Cantidad de personas en fila: ', 2)
('10:43:26', '-- Sale del cajero: ', 'Tipo 2-12')
('---Tiempo de uso --: ', '02.22 Minutos')
('---Usando finalmente ATM--: ', '13')
('Tiempo de espera en fila : ', '02.32 Minutos')
('10:45:17', '-- Sale del cajero: ', 'Tipo 1-13')
('---Tiempo de uso --: ', '01.51 Minutos')
('---Usando finalmente ATM--: ', '14')
('Tiempo de espera en fila : ', '02.17 Minutos')
('10:46:42', 'Llega al cajero ', 'Tipo 3-15')
('10:46:42', '--Cantidad de personas en fila: ', 1)
('10:47:05', '-- Sale del cajero: ', 'Tipo 2-14')
('---Tiempo de uso --: ', '01.48 Minutos')
('---Usando finalmente ATM--: ', '15')
('Tiempo de espera en fila : ', '00.22 Minutos')
('10:48:47', '-- Sale del cajero: ', 'Tipo 3-15')
('---Tiempo de uso --: ', '01.42 Minutos')
('10:54:30', 'Llega al cajero ', 'Tipo 2-16')
('10:54:30', '--Cantidad de personas en fila: ', 0)
('10:56:10', '-- Sale del cajero: ', 'Tipo 2-16')
('---Tiempo de uso --: ', '01.40 Minutos')
('10:59:56', 'Llega al cajero ', 'Tipo 2-17')
('10:59:56', '--Cantidad de personas en fila: ', 0)
('11:02:41', '-- Sale del cajero: ', 'Tipo 2-17')
('---Tiempo de uso --: ', '02.45 Minutos')
('11:10:42', 'Llega al cajero ', 'Tipo 2-18')
('11:10:42', '--Cantidad de personas en fila: ', 0)
('11:12:05', '-- Sale del cajero: ', 'Tipo 2-18')
('---Tiempo de uso --: ', '01.23 Minutos')
('11:13:33', 'Llega al cajero ', 'Tipo 3-19')
('11:13:33', '--Cantidad de personas en fila: ', 0)
('11:13:53', 'Llega al cajero ', 'Tipo 2-20')
('11:13:53', '--Cantidad de personas en fila: ', 1)
('11:15:55', 'Llega al cajero ', 'Tipo 1-21')
('11:15:55', '--Cantidad de personas en fila: ', 2)
('11:18:00', '-- Sale del cajero: ', 'Tipo 3-19')
('---Tiempo de uso --: ', '04.27 Minutos')
('---Usando finalmente ATM--: ', '20')
('Tiempo de espera en fila : ', '04.07 Minutos')
('11:18:25', 'Llega al cajero ', 'Tipo 2-22')
('11:18:25', '--Cantidad de personas en fila: ', 2)
('11:19:46', '-- Sale del cajero: ', 'Tipo 2-20')
('---Tiempo de uso --: ', '01.45 Minutos')
('---Usando finalmente ATM--: ', '21')
('Tiempo de espera en fila : ', '03.50 Minutos')
('11:20:41', 'Llega al cajero ', 'Tipo 2-23')
('11:20:41', '--Cantidad de personas en fila: ', 2)
('11:22:58', '-- Sale del cajero: ', 'Tipo 1-21')
('---Tiempo de uso --: ', '03.11 Minutos')
('---Usando finalmente ATM--: ', '22')
('Tiempo de espera en fila : ', '04.32 Minutos')
('11:23:57', 'Llega al cajero ', 'Tipo 2-24')
('11:23:57', '--Cantidad de personas en fila: ', 2)
('11:24:54', '-- Sale del cajero: ', 'Tipo 2-22')
('---Tiempo de uso --: ', '01.56 Minutos')
('---Usando finalmente ATM--: ', '23')
('Tiempo de espera en fila : ', '04.12 Minutos')
('11:27:27', '-- Sale del cajero: ', 'Tipo 2-23')
('---Tiempo de uso --: ', '02.32 Minutos')
('---Usando finalmente ATM--: ', '24')
('Tiempo de espera en fila : ', '03.29 Minutos')
('11:29:28', 'Llega al cajero ', 'Tipo 2-25')
('11:29:28', '--Cantidad de personas en fila: ', 1)
('11:30:24', '-- Sale del cajero: ', 'Tipo 2-24')
('---Tiempo de uso --: ', '02.56 Minutos')
('---Usando finalmente ATM--: ', '25')
('Tiempo de espera en fila : ', '00.55 Minutos')
('11:32:07', '-- Sale del cajero: ', 'Tipo 2-25')
('---Tiempo de uso --: ', '01.43 Minutos')
('11:35:47', 'Llega al cajero ', 'Tipo 2-26')
('11:35:47', '--Cantidad de personas en fila: ', 0)
('11:36:41', 'Llega al cajero ', 'Tipo 2-27')
('11:36:41', '--Cantidad de personas en fila: ', 1)
('11:37:09', 'Llega al cajero ', 'Tipo 3-28')
('11:37:09', '--Cantidad de personas en fila: ', 2)
('11:38:32', '-- Sale del cajero: ', 'Tipo 2-26')
('---Tiempo de uso --: ', '02.44 Minutos')
('---Usando finalmente ATM--: ', '27')
('Tiempo de espera en fila : ', '01.50 Minutos')
('11:39:42', '-- Sale del cajero: ', 'Tipo 2-27')
('---Tiempo de uso --: ', '01.10 Minutos')
('---Usando finalmente ATM--: ', '28')
('Tiempo de espera en fila : ', '02.33 Minutos')
('11:40:19', 'Llega al cajero ', 'Tipo 2-29')
('11:40:19', '--Cantidad de personas en fila: ', 1)
('11:43:14', '-- Sale del cajero: ', 'Tipo 3-28')
('---Tiempo de uso --: ', '03.31 Minutos')
('---Usando finalmente ATM--: ', '29')
('Tiempo de espera en fila : ', '02.54 Minutos')
('11:44:34', '-- Sale del cajero: ', 'Tipo 2-29')
('---Tiempo de uso --: ', '01.19 Minutos')
('11:45:34', 'Llega al cajero ', 'Tipo 2-30')
('11:45:34', '--Cantidad de personas en fila: ', 0)
('11:45:56', 'Llega al cajero ', 'Tipo 2-31')
('11:45:56', '--Cantidad de personas en fila: ', 1)
('11:47:24', 'Llega al cajero ', 'Tipo 2-32')
('11:47:24', '--Cantidad de personas en fila: ', 2)
('11:48:33', '-- Sale del cajero: ', 'Tipo 2-30')
('---Tiempo de uso --: ', '02.58 Minutos')
('---Usando finalmente ATM--: ', '31')
('Tiempo de espera en fila : ', '02.36 Minutos')
('11:49:45', '-- Sale del cajero: ', 'Tipo 2-31')
('---Tiempo de uso --: ', '01.12 Minutos')
('---Usando finalmente ATM--: ', '32')
('Tiempo de espera en fila : ', '02.21 Minutos')
('11:50:41', 'Llega al cajero ', 'Tipo 2-33')
('11:50:41', '--Cantidad de personas en fila: ', 1)
('11:52:25', '-- Sale del cajero: ', 'Tipo 2-32')
('---Tiempo de uso --: ', '02.40 Minutos')
('---Usando finalmente ATM--: ', '33')
('Tiempo de espera en fila : ', '01.43 Minutos')
('11:54:59', 'Llega al cajero ', 'Tipo 2-34')
('11:54:59', '--Cantidad de personas en fila: ', 1)
('11:55:00', 'Llega al cajero ', 'Tipo 2-35')
('11:55:00', '--Cantidad de personas en fila: ', 2)
('11:55:17', '-- Sale del cajero: ', 'Tipo 2-33')
('---Tiempo de uso --: ', '02.51 Minutos')
('---Usando finalmente ATM--: ', '34')
('Tiempo de espera en fila : ', '00.18 Minutos')
('11:55:38', 'Llega al cajero ', 'Tipo 1-36')
('11:55:38', '--Cantidad de personas en fila: ', 2)
('11:57:33', '-- Sale del cajero: ', 'Tipo 2-34')
('---Tiempo de uso --: ', '02.15 Minutos')
('---Usando finalmente ATM--: ', '35')
('Tiempo de espera en fila : ', '02.33 Minutos')
('11:57:34', 'Llega al cajero ', 'Tipo 2-37')
('11:57:34', '--Cantidad de personas en fila: ', 2)
('11:58:45', '-- Sale del cajero: ', 'Tipo 2-35')
('---Tiempo de uso --: ', '01.12 Minutos')
('---Usando finalmente ATM--: ', '36')
('Tiempo de espera en fila : ', '03.06 Minutos')
('11:59:57', 'Llega al cajero ', 'Tipo 2-38')
('11:59:57', '--Cantidad de personas en fila: ', 2)
('12:03:19', '-- Sale del cajero: ', 'Tipo 1-36')
('---Tiempo de uso --: ', '04.34 Minutos')
('---Usando finalmente ATM--: ', '37')
('Tiempo de espera en fila : ', '05.45 Minutos')
('12:05:09', '-- Sale del cajero: ', 'Tipo 2-37')
('---Tiempo de uso --: ', '01.50 Minutos')
('---Usando finalmente ATM--: ', '38')
('Tiempo de espera en fila : ', '05.12 Minutos')
('12:06:49', '-- Sale del cajero: ', 'Tipo 2-38')
('---Tiempo de uso --: ', '01.39 Minutos')
('12:07:04', 'Llega al cajero ', 'Tipo 2-39')
('12:07:04', '--Cantidad de personas en fila: ', 0)
('12:09:51', '-- Sale del cajero: ', 'Tipo 2-39')
('---Tiempo de uso --: ', '02.46 Minutos')
('12:10:13', 'Llega al cajero ', 'Tipo 3-40')
('12:10:13', '--Cantidad de personas en fila: ', 0)
('12:11:02', 'Llega al cajero ', 'Tipo 2-41')
('12:11:02', '--Cantidad de personas en fila: ', 1)
('12:12:34', 'Llega al cajero ', 'Tipo 1-42')
('12:12:34', '--Cantidad de personas en fila: ', 2)
('12:14:21', '-- Sale del cajero: ', 'Tipo 3-40')
('---Tiempo de uso --: ', '04.07 Minutos')
('---Usando finalmente ATM--: ', '41')
('Tiempo de espera en fila : ', '03.18 Minutos')
('12:14:38', 'Llega al cajero ', 'Tipo 2-43')
('12:14:38', '--Cantidad de personas en fila: ', 2)
('12:14:49', 'Llega al cajero ', 'Tipo 2-44')
('12:14:49', '--Cantidad de personas en fila: ', 3)
('12:15:41', '-- Sale del cajero: ', 'Tipo 2-41')
('---Tiempo de uso --: ', '01.20 Minutos')
('---Usando finalmente ATM--: ', '42')
('Tiempo de espera en fila : ', '03.07 Minutos')
('12:17:48', 'Llega al cajero ', 'Tipo 2-45')
('12:17:48', '--Cantidad de personas en fila: ', 3)
('12:19:36', 'Llega al cajero ', 'Tipo 2-46')
('12:19:36', '--Cantidad de personas en fila: ', 4)
('12:20:37', '-- Sale del cajero: ', 'Tipo 1-42')
('---Tiempo de uso --: ', '04.55 Minutos')
('---Usando finalmente ATM--: ', '43')
('Tiempo de espera en fila : ', '05.59 Minutos')
('12:21:23', 'Llega al cajero ', 'Tipo 2-47')
('12:21:23', '--Cantidad de personas en fila: ', 4)
('12:21:48', 'Llega al cajero ', 'Tipo 2-48')
('12:21:48', '--Cantidad de personas en fila: ', 5)
('12:23:11', '-- Sale del cajero: ', 'Tipo 2-43')
('---Tiempo de uso --: ', '02.33 Minutos')
('---Usando finalmente ATM--: ', '44')
('Tiempo de espera en fila : ', '08.21 Minutos')
('12:23:56', 'Llega al cajero ', 'Tipo 2-49')
('12:23:56', '--Cantidad de personas en fila: ', 5)
('12:26:03', '-- Sale del cajero: ', 'Tipo 2-44')
('---Tiempo de uso --: ', '02.52 Minutos')
('---Usando finalmente ATM--: ', '45')
('Tiempo de espera en fila : ', '08.14 Minutos')
('12:28:40', '-- Sale del cajero: ', 'Tipo 2-45')
('---Tiempo de uso --: ', '02.36 Minutos')
('---Usando finalmente ATM--: ', '46')
('Tiempo de espera en fila : ', '09.03 Minutos')
('12:29:33', 'Llega al cajero ', 'Tipo 3-50')
('12:29:33', '--Cantidad de personas en fila: ', 4)
('12:29:51', '-- Sale del cajero: ', 'Tipo 2-46')
('---Tiempo de uso --: ', '01.10 Minutos')
('---Usando finalmente ATM--: ', '47')
('Tiempo de espera en fila : ', '08.27 Minutos')
('12:30:37', 'Llega al cajero ', 'Tipo 3-51')
('12:30:37', '--Cantidad de personas en fila: ', 4)
('12:30:57', '-- Sale del cajero: ', 'Tipo 2-47')
('---Tiempo de uso --: ', '01.06 Minutos')
('---Usando finalmente ATM--: ', '48')
('Tiempo de espera en fila : ', '09.08 Minutos')
('12:31:51', 'Llega al cajero ', 'Tipo 2-52')
('12:31:51', '--Cantidad de personas en fila: ', 4)
('12:32:42', '-- Sale del cajero: ', 'Tipo 2-48')
('---Tiempo de uso --: ', '01.44 Minutos')
('---Usando finalmente ATM--: ', '49')
('Tiempo de espera en fila : ', '08.46 Minutos')
('12:33:11', 'Llega al cajero ', 'Tipo 3-53')
('12:33:11', '--Cantidad de personas en fila: ', 4)
('12:34:05', '-- Sale del cajero: ', 'Tipo 2-49')
('---Tiempo de uso --: ', '01.23 Minutos')
('---Usando finalmente ATM--: ', '50')
('Tiempo de espera en fila : ', '04.32 Minutos')
('12:35:27', '-- Sale del cajero: ', 'Tipo 3-50')
('---Tiempo de uso --: ', '01.22 Minutos')
('---Usando finalmente ATM--: ', '51')
('Tiempo de espera en fila : ', '04.50 Minutos')
('12:36:20', 'Llega al cajero ', 'Tipo 1-54')
('12:36:20', '--Cantidad de personas en fila: ', 3)
('12:36:49', '-- Sale del cajero: ', 'Tipo 3-51')
('---Tiempo de uso --: ', '01.21 Minutos')
('---Usando finalmente ATM--: ', '52')
('Tiempo de espera en fila : ', '04.57 Minutos')
('12:38:23', '-- Sale del cajero: ', 'Tipo 2-52')
('---Tiempo de uso --: ', '01.34 Minutos')
('---Usando finalmente ATM--: ', '53')
('Tiempo de espera en fila : ', '05.12 Minutos')
('12:40:03', '-- Sale del cajero: ', 'Tipo 3-53')
('---Tiempo de uso --: ', '01.40 Minutos')
('---Usando finalmente ATM--: ', '54')
('Tiempo de espera en fila : ', '03.43 Minutos')
('12:40:07', 'Llega al cajero ', 'Tipo 2-55')
('12:40:07', '--Cantidad de personas en fila: ', 1)
('12:40:20', 'Llega al cajero ', 'Tipo 3-56')
('12:40:20', '--Cantidad de personas en fila: ', 2)
('12:45:30', 'Llega al cajero ', 'Tipo 1-57')
('12:45:30', '--Cantidad de personas en fila: ', 3)
('12:45:55', '-- Sale del cajero: ', 'Tipo 1-54')
('---Tiempo de uso --: ', '05.51 Minutos')
('---Usando finalmente ATM--: ', '55')
('Tiempo de espera en fila : ', '05.48 Minutos')
('12:46:05', 'Llega al cajero ', 'Tipo 2-58')
('12:46:05', '--Cantidad de personas en fila: ', 3)
('12:47:00', 'Llega al cajero ', 'Tipo 2-59')
('12:47:00', '--Cantidad de personas en fila: ', 4)
('12:47:49', '-- Sale del cajero: ', 'Tipo 2-55')
('---Tiempo de uso --: ', '01.53 Minutos')
('---Usando finalmente ATM--: ', '56')
('Tiempo de espera en fila : ', '07.29 Minutos')
('12:48:18', 'Llega al cajero ', 'Tipo 2-60')
('12:48:18', '--Cantidad de personas en fila: ', 4)
('12:50:27', '-- Sale del cajero: ', 'Tipo 3-56')
('---Tiempo de uso --: ', '02.37 Minutos')
('---Usando finalmente ATM--: ', '57')
('Tiempo de espera en fila : ', '04.56 Minutos')
('12:51:59', 'Llega al cajero ', 'Tipo 3-61')
('12:51:59', '--Cantidad de personas en fila: ', 4)
('12:52:24', '-- Sale del cajero: ', 'Tipo 1-57')
('---Tiempo de uso --: ', '01.57 Minutos')
('---Usando finalmente ATM--: ', '58')
('Tiempo de espera en fila : ', '06.19 Minutos')
('12:53:18', 'Llega al cajero ', 'Tipo 2-62')
('12:53:18', '--Cantidad de personas en fila: ', 4)
('12:53:27', 'Llega al cajero ', 'Tipo 2-63')
('12:53:27', '--Cantidad de personas en fila: ', 5)
('12:54:50', '-- Sale del cajero: ', 'Tipo 2-58')
('---Tiempo de uso --: ', '02.25 Minutos')
('---Usando finalmente ATM--: ', '59')
('Tiempo de espera en fila : ', '07.49 Minutos')
('12:55:53', 'Llega al cajero ', 'Tipo 3-64')
('12:55:53', '--Cantidad de personas en fila: ', 5)
('12:57:13', '-- Sale del cajero: ', 'Tipo 2-59')
('---Tiempo de uso --: ', '02.23 Minutos')
('---Usando finalmente ATM--: ', '60')
('Tiempo de espera en fila : ', '08.54 Minutos')
('12:57:36', 'Llega al cajero ', 'Tipo 2-65')
('12:57:36', '--Cantidad de personas en fila: ', 5)
('12:58:05', 'Llega al cajero ', 'Tipo 2-66')
('12:58:05', '--Cantidad de personas en fila: ', 6)
('12:58:17', '-- Sale del cajero: ', 'Tipo 2-60')
('---Tiempo de uso --: ', '01.03 Minutos')
('---Usando finalmente ATM--: ', '61')
('Tiempo de espera en fila : ', '06.17 Minutos')
('12:58:54', 'Llega al cajero ', 'Tipo 3-67')
('12:58:54', '--Cantidad de personas en fila: ', 6)
('12:59:19', '-- Sale del cajero: ', 'Tipo 3-61')
('---Tiempo de uso --: ', '01.01 Minutos')
('---Usando finalmente ATM--: ', '62')
('Tiempo de espera en fila : ', '06.00 Minutos')
('13:00:23', 'Llega al cajero ', 'Tipo 2-68')
('13:00:23', '--Cantidad de personas en fila: ', 6)
('13:02:03', '-- Sale del cajero: ', 'Tipo 2-62')
('---Tiempo de uso --: ', '02.44 Minutos')
('---Usando finalmente ATM--: ', '63')
('Tiempo de espera en fila : ', '08.36 Minutos')
('13:03:39', 'Llega al cajero ', 'Tipo 2-69')
('13:03:39', '--Cantidad de personas en fila: ', 6)
('13:04:36', '-- Sale del cajero: ', 'Tipo 2-63')
('---Tiempo de uso --: ', '02.32 Minutos')
('---Usando finalmente ATM--: ', '64')
('Tiempo de espera en fila : ', '08.43 Minutos')
('13:05:46', '-- Sale del cajero: ', 'Tipo 3-64')
('---Tiempo de uso --: ', '01.10 Minutos')
('---Usando finalmente ATM--: ', '65')
('Tiempo de espera en fila : ', '08.09 Minutos')
('13:07:50', '-- Sale del cajero: ', 'Tipo 2-65')
('---Tiempo de uso --: ', '02.04 Minutos')
('---Usando finalmente ATM--: ', '66')
('Tiempo de espera en fila : ', '09.45 Minutos')
('13:07:54', 'Llega al cajero ', 'Tipo 2-70')
('13:07:54', '--Cantidad de personas en fila: ', 4)
('13:09:26', 'Llega al cajero ', 'Tipo 2-71')
('13:09:26', '--Cantidad de personas en fila: ', 5)
('13:09:46', 'Llega al cajero ', 'Tipo 2-72')
('13:09:46', '--Cantidad de personas en fila: ', 6)
('13:10:20', '-- Sale del cajero: ', 'Tipo 2-66')
('---Tiempo de uso --: ', '02.29 Minutos')
('---Usando finalmente ATM--: ', '67')
('Tiempo de espera en fila : ', '11.25 Minutos')
('13:10:25', 'Llega al cajero ', 'Tipo 2-73')
('13:10:25', '--Cantidad de personas en fila: ', 6)
('13:11:48', 'Llega al cajero ', 'Tipo 2-74')
('13:11:48', '--Cantidad de personas en fila: ', 7)
('13:12:40', '-- Sale del cajero: ', 'Tipo 3-67')
('---Tiempo de uso --: ', '02.20 Minutos')
('---Usando finalmente ATM--: ', '68')
('Tiempo de espera en fila : ', '12.17 Minutos')
('13:15:14', '-- Sale del cajero: ', 'Tipo 2-68')
('---Tiempo de uso --: ', '02.33 Minutos')
('---Usando finalmente ATM--: ', '69')
('Tiempo de espera en fila : ', '11.34 Minutos')
('13:15:33', 'Llega al cajero ', 'Tipo 2-75')
('13:15:33', '--Cantidad de personas en fila: ', 6)
('13:16:36', 'Llega al cajero ', 'Tipo 2-76')
('13:16:36', '--Cantidad de personas en fila: ', 7)
('13:17:54', '-- Sale del cajero: ', 'Tipo 2-69')
('---Tiempo de uso --: ', '02.40 Minutos')
('---Usando finalmente ATM--: ', '70')
('Tiempo de espera en fila : ', '10.00 Minutos')
('13:18:42', 'Llega al cajero ', 'Tipo 3-77')
('13:18:42', '--Cantidad de personas en fila: ', 7)
('13:20:10', '-- Sale del cajero: ', 'Tipo 2-70')
('---Tiempo de uso --: ', '02.15 Minutos')
('---Usando finalmente ATM--: ', '71')
('Tiempo de espera en fila : ', '10.43 Minutos')
('13:20:38', 'Llega al cajero ', 'Tipo 2-78')
('13:20:38', '--Cantidad de personas en fila: ', 7)
('13:22:42', '-- Sale del cajero: ', 'Tipo 2-71')
('---Tiempo de uso --: ', '02.32 Minutos')
('---Usando finalmente ATM--: ', '72')
('Tiempo de espera en fila : ', '12.56 Minutos')
('13:24:10', '-- Sale del cajero: ', 'Tipo 2-72')
('---Tiempo de uso --: ', '01.28 Minutos')
('---Usando finalmente ATM--: ', '73')
('Tiempo de espera en fila : ', '13.45 Minutos')
('13:24:57', 'Llega al cajero ', 'Tipo 2-79')
('13:24:57', '--Cantidad de personas en fila: ', 6)
('13:25:59', '-- Sale del cajero: ', 'Tipo 2-73')
('---Tiempo de uso --: ', '01.48 Minutos')
('---Usando finalmente ATM--: ', '74')
('Tiempo de espera en fila : ', '14.11 Minutos')
('13:28:47', '-- Sale del cajero: ', 'Tipo 2-74')
('---Tiempo de uso --: ', '02.47 Minutos')
('---Usando finalmente ATM--: ', '75')
('Tiempo de espera en fila : ', '13.14 Minutos')
('13:30:03', 'Llega al cajero ', 'Tipo 2-80')
('13:30:03', '--Cantidad de personas en fila: ', 5)
('13:30:08', 'Llega al cajero ', 'Tipo 3-81')
('13:30:08', '--Cantidad de personas en fila: ', 6)
('13:30:10', 'Llega al cajero ', 'Tipo 2-82')
('13:30:10', '--Cantidad de personas en fila: ', 7)
('13:30:19', 'Llega al cajero ', 'Tipo 2-83')
('13:30:19', '--Cantidad de personas en fila: ', 8)
('13:30:56', '-- Sale del cajero: ', 'Tipo 2-75')
('---Tiempo de uso --: ', '02.08 Minutos')
('---Usando finalmente ATM--: ', '76')
('Tiempo de espera en fila : ', '14.19 Minutos')
('13:32:33', 'Llega al cajero ', 'Tipo 2-84')
('13:32:33', '--Cantidad de personas en fila: ', 8)
('13:33:37', '-- Sale del cajero: ', 'Tipo 2-76')
('---Tiempo de uso --: ', '02.40 Minutos')
('---Usando finalmente ATM--: ', '77')
('Tiempo de espera en fila : ', '14.55 Minutos')
('13:33:47', 'Llega al cajero ', 'Tipo 2-85')
('13:33:47', '--Cantidad de personas en fila: ', 8)
('13:35:32', 'Llega al cajero ', 'Tipo 2-86')
('13:35:32', '--Cantidad de personas en fila: ', 9)
('13:35:49', 'Llega al cajero ', 'Tipo 2-87')
('13:35:49', '--Cantidad de personas en fila: ', 10)
('13:36:38', '-- Sale del cajero: ', 'Tipo 3-77')
('---Tiempo de uso --: ', '03.00 Minutos')
('---Usando finalmente ATM--: ', '78')
('Tiempo de espera en fila : ', '15.59 Minutos')
('13:39:14', '-- Sale del cajero: ', 'Tipo 2-78')
('---Tiempo de uso --: ', '02.36 Minutos')
('---Usando finalmente ATM--: ', '79')
('Tiempo de espera en fila : ', '14.17 Minutos')
('13:41:29', '-- Sale del cajero: ', 'Tipo 2-79')
('---Tiempo de uso --: ', '02.15 Minutos')
('---Usando finalmente ATM--: ', '80')
('Tiempo de espera en fila : ', '11.26 Minutos')
('13:41:42', 'Llega al cajero ', 'Tipo 3-88')
('13:41:42', '--Cantidad de personas en fila: ', 8)
('13:43:11', '-- Sale del cajero: ', 'Tipo 2-80')
('---Tiempo de uso --: ', '01.42 Minutos')
('---Usando finalmente ATM--: ', '81')
('Tiempo de espera en fila : ', '13.02 Minutos')
('13:44:02', 'Llega al cajero ', 'Tipo 2-89')
('13:44:02', '--Cantidad de personas en fila: ', 8)
('13:44:12', 'Llega al cajero ', 'Tipo 3-90')
('13:44:12', '--Cantidad de personas en fila: ', 9)
('13:44:13', '-- Sale del cajero: ', 'Tipo 3-81')
('---Tiempo de uso --: ', '01.01 Minutos')
('---Usando finalmente ATM--: ', '82')
('Tiempo de espera en fila : ', '14.02 Minutos')
('13:45:33', 'Llega al cajero ', 'Tipo 1-91')
('13:45:33', '--Cantidad de personas en fila: ', 9)
('13:46:40', '-- Sale del cajero: ', 'Tipo 2-82')
('---Tiempo de uso --: ', '02.27 Minutos')
('---Usando finalmente ATM--: ', '83')
('Tiempo de espera en fila : ', '16.20 Minutos')
('13:47:45', '-- Sale del cajero: ', 'Tipo 2-83')
('---Tiempo de uso --: ', '01.05 Minutos')
('---Usando finalmente ATM--: ', '84')
('Tiempo de espera en fila : ', '15.11 Minutos')
('13:48:14', 'Llega al cajero ', 'Tipo 2-92')
('13:48:14', '--Cantidad de personas en fila: ', 8)
('13:49:18', '-- Sale del cajero: ', 'Tipo 2-84')
('---Tiempo de uso --: ', '01.32 Minutos')
('---Usando finalmente ATM--: ', '85')
('Tiempo de espera en fila : ', '15.30 Minutos')
('13:51:23', '-- Sale del cajero: ', 'Tipo 2-85')
('---Tiempo de uso --: ', '02.04 Minutos')
('---Usando finalmente ATM--: ', '86')
('Tiempo de espera en fila : ', '15.50 Minutos')
('13:52:38', '-- Sale del cajero: ', 'Tipo 2-86')
('---Tiempo de uso --: ', '01.15 Minutos')
('---Usando finalmente ATM--: ', '87')
('Tiempo de espera en fila : ', '16.48 Minutos')
('13:53:14', 'Llega al cajero ', 'Tipo 1-93')
('13:53:14', '--Cantidad de personas en fila: ', 6)
('13:53:49', 'Llega al cajero ', 'Tipo 2-94')
('13:53:49', '--Cantidad de personas en fila: ', 7)
('13:54:21', 'Llega al cajero ', 'Tipo 2-95')
('13:54:21', '--Cantidad de personas en fila: ', 8)
('13:55:24', '-- Sale del cajero: ', 'Tipo 2-87')
('---Tiempo de uso --: ', '02.45 Minutos')
('---Usando finalmente ATM--: ', '88')
('Tiempo de espera en fila : ', '13.41 Minutos')
('13:57:47', 'Llega al cajero ', 'Tipo 3-96')
('13:57:47', '--Cantidad de personas en fila: ', 8)
('13:58:29', 'Llega al cajero ', 'Tipo 2-97')
('13:58:29', '--Cantidad de personas en fila: ', 9)
('13:59:38', 'Llega al cajero ', 'Tipo 2-98')
('13:59:38', '--Cantidad de personas en fila: ', 10)
('14:00:17', '-- Sale del cajero: ', 'Tipo 3-88')
('---Tiempo de uso --: ', '04.53 Minutos')
('---Usando finalmente ATM--: ', '89')
('Tiempo de espera en fila : ', '16.15 Minutos')
('14:02:02', '-- Sale del cajero: ', 'Tipo 2-89')
('---Tiempo de uso --: ', '01.44 Minutos')
('---Usando finalmente ATM--: ', '90')
('Tiempo de espera en fila : ', '17.50 Minutos')
('14:05:17', 'Llega al cajero ', 'Tipo 2-99')
('14:05:17', '--Cantidad de personas en fila: ', 9)
('14:05:29', 'Llega al cajero ', 'Tipo 2-100')
('14:05:29', '--Cantidad de personas en fila: ', 10)
('14:06:57', '-- Sale del cajero: ', 'Tipo 3-90')
('---Tiempo de uso --: ', '04.55 Minutos')
('---Usando finalmente ATM--: ', '91')
('Tiempo de espera en fila : ', '21.24 Minutos')
('14:10:09', '-- Sale del cajero: ', 'Tipo 1-91')
('---Tiempo de uso --: ', '03.11 Minutos')
('---Usando finalmente ATM--: ', '92')
('Tiempo de espera en fila : ', '21.54 Minutos')
('14:11:06', 'Llega al cajero ', 'Tipo 1-101')
('14:11:06', '--Cantidad de personas en fila: ', 9)
('14:11:10', '-- Sale del cajero: ', 'Tipo 2-92')
('---Tiempo de uso --: ', '01.00 Minutos')
('---Usando finalmente ATM--: ', '93')
('Tiempo de espera en fila : ', '17.55 Minutos')
('14:12:20', 'Llega al cajero ', 'Tipo 3-102')
('14:12:20', '--Cantidad de personas en fila: ', 9)
('14:13:17', 'Llega al cajero ', 'Tipo 2-103')
('14:13:17', '--Cantidad de personas en fila: ', 10)
('14:15:56', 'Llega al cajero ', 'Tipo 2-104')
('14:15:56', '--Cantidad de personas en fila: ', 11)
('14:17:11', '-- Sale del cajero: ', 'Tipo 1-93')
('---Tiempo de uso --: ', '06.00 Minutos')
('---Usando finalmente ATM--: ', '94')
('Tiempo de espera en fila : ', '23.21 Minutos')
('14:19:21', '-- Sale del cajero: ', 'Tipo 2-94')
('---Tiempo de uso --: ', '02.10 Minutos')
('---Usando finalmente ATM--: ', '95')
('Tiempo de espera en fila : ', '24.59 Minutos')
('14:20:54', 'Llega al cajero ', 'Tipo 2-105')
('14:20:54', '--Cantidad de personas en fila: ', 10)
('14:21:14', 'Llega al cajero ', 'Tipo 2-106')
('14:21:14', '--Cantidad de personas en fila: ', 11)
('14:22:12', '-- Sale del cajero: ', 'Tipo 2-95')
('---Tiempo de uso --: ', '02.51 Minutos')
('---Usando finalmente ATM--: ', '96')
('Tiempo de espera en fila : ', '24.24 Minutos')
('14:23:55', 'Llega al cajero ', 'Tipo 2-107')
('14:23:55', '--Cantidad de personas en fila: ', 11)
('14:24:29', 'Llega al cajero ', 'Tipo 3-108')
('14:24:29', '--Cantidad de personas en fila: ', 12)
('14:24:30', '-- Sale del cajero: ', 'Tipo 3-96')
('---Tiempo de uso --: ', '02.18 Minutos')
('---Usando finalmente ATM--: ', '97')
('Tiempo de espera en fila : ', '26.01 Minutos')
('14:26:16', '-- Sale del cajero: ', 'Tipo 2-97')
('---Tiempo de uso --: ', '01.45 Minutos')
('---Usando finalmente ATM--: ', '98')
('Tiempo de espera en fila : ', '26.37 Minutos')
('14:28:49', '-- Sale del cajero: ', 'Tipo 2-98')
('---Tiempo de uso --: ', '02.33 Minutos')
('---Usando finalmente ATM--: ', '99')
('Tiempo de espera en fila : ', '23.31 Minutos')
('14:28:55', 'Llega al cajero ', 'Tipo 1-109')
('14:28:55', '--Cantidad de personas en fila: ', 10)
('14:31:14', '-- Sale del cajero: ', 'Tipo 2-99')
('---Tiempo de uso --: ', '02.25 Minutos')
('---Usando finalmente ATM--: ', '100')
('Tiempo de espera en fila : ', '25.45 Minutos')
('14:31:21', 'Llega al cajero ', 'Tipo 3-110')
('14:31:21', '--Cantidad de personas en fila: ', 10)
('14:32:47', '-- Sale del cajero: ', 'Tipo 2-100')
('---Tiempo de uso --: ', '01.32 Minutos')
('---Usando finalmente ATM--: ', '101')
('Tiempo de espera en fila : ', '21.40 Minutos')
('14:34:17', 'Llega al cajero ', 'Tipo 2-111')
('14:34:17', '--Cantidad de personas en fila: ', 10)
('14:34:47', 'Llega al cajero ', 'Tipo 2-112')
('14:34:47', '--Cantidad de personas en fila: ', 11)
('14:37:51', '-- Sale del cajero: ', 'Tipo 1-101')
('---Tiempo de uso --: ', '05.04 Minutos')
('---Usando finalmente ATM--: ', '102')
('Tiempo de espera en fila : ', '25.30 Minutos')
('14:39:56', 'Llega al cajero ', 'Tipo 2-113')
('14:39:56', '--Cantidad de personas en fila: ', 11)
('14:40:43', 'Llega al cajero ', 'Tipo 1-114')
('14:40:43', '--Cantidad de personas en fila: ', 12)
('14:40:59', 'Llega al cajero ', 'Tipo 2-115')
('14:40:59', '--Cantidad de personas en fila: ', 13)
('14:42:12', '-- Sale del cajero: ', 'Tipo 3-102')
('---Tiempo de uso --: ', '04.20 Minutos')
('---Usando finalmente ATM--: ', '103')
('Tiempo de espera en fila : ', '28.55 Minutos')
('14:43:33', '-- Sale del cajero: ', 'Tipo 2-103')
('---Tiempo de uso --: ', '01.21 Minutos')
('---Usando finalmente ATM--: ', '104')
('Tiempo de espera en fila : ', '27.36 Minutos')
('14:44:33', 'Llega al cajero ', 'Tipo 2-116')
('14:44:33', '--Cantidad de personas en fila: ', 12)
('14:46:20', '-- Sale del cajero: ', 'Tipo 2-104')
('---Tiempo de uso --: ', '02.46 Minutos')
('---Usando finalmente ATM--: ', '105')
('Tiempo de espera en fila : ', '25.25 Minutos')
('14:48:39', '-- Sale del cajero: ', 'Tipo 2-105')
('---Tiempo de uso --: ', '02.19 Minutos')
('---Usando finalmente ATM--: ', '106')
('Tiempo de espera en fila : ', '27.24 Minutos')
('14:48:39', 'Llega al cajero ', 'Tipo 2-117')
('14:48:39', '--Cantidad de personas en fila: ', 11)
('14:49:51', 'Llega al cajero ', 'Tipo 2-118')
('14:49:51', '--Cantidad de personas en fila: ', 12)
('14:50:08', '-- Sale del cajero: ', 'Tipo 2-106')
('---Tiempo de uso --: ', '01.29 Minutos')
('---Usando finalmente ATM--: ', '107')
('Tiempo de espera en fila : ', '26.12 Minutos')
('14:51:55', 'Llega al cajero ', 'Tipo 2-119')
('14:51:55', '--Cantidad de personas en fila: ', 12)
('14:52:02', 'Llega al cajero ', 'Tipo 2-120')
('14:52:02', '--Cantidad de personas en fila: ', 13)
('14:52:15', '-- Sale del cajero: ', 'Tipo 2-107')
('---Tiempo de uso --: ', '02.07 Minutos')
('---Usando finalmente ATM--: ', '108')
('Tiempo de espera en fila : ', '27.46 Minutos')
('14:53:47', 'Llega al cajero ', 'Tipo 2-121')
('14:53:47', '--Cantidad de personas en fila: ', 13)
('14:55:15', 'Llega al cajero ', 'Tipo 2-122')
('14:55:15', '--Cantidad de personas en fila: ', 14)
('14:56:01', 'Llega al cajero ', 'Tipo 3-123')
('14:56:01', '--Cantidad de personas en fila: ', 15)
('14:56:05', 'Llega al cajero ', 'Tipo 2-124')
('14:56:05', '--Cantidad de personas en fila: ', 16)
('14:56:20', 'Llega al cajero ', 'Tipo 2-125')
('14:56:20', '--Cantidad de personas en fila: ', 17)
('14:56:56', '-- Sale del cajero: ', 'Tipo 3-108')
('---Tiempo de uso --: ', '04.41 Minutos')
('---Usando finalmente ATM--: ', '109')
('Tiempo de espera en fila : ', '28.01 Minutos')
('14:59:26', 'Llega al cajero ', 'Tipo 2-126')
('14:59:26', '--Cantidad de personas en fila: ', 17)
('14:59:44', 'Llega al cajero ', 'Tipo 2-127')
('14:59:44', '--Cantidad de personas en fila: ', 18)
('14:59:50', '-- Sale del cajero: ', 'Tipo 1-109')
('---Tiempo de uso --: ', '02.53 Minutos')
('---Usando finalmente ATM--: ', '110')
('Tiempo de espera en fila : ', '28.28 Minutos')
('14:59:57', 'Llega al cajero ', 'Tipo 3-128')
('14:59:57', '--Cantidad de personas en fila: ', 18)
('15:00:17', 'Llega al cajero ', 'Tipo 2-129')
('15:00:17', '--Cantidad de personas en fila: ', 19)
('15:01:45', '-- Sale del cajero: ', 'Tipo 3-110')
('---Tiempo de uso --: ', '01.55 Minutos')
('---Usando finalmente ATM--: ', '111')
('Tiempo de espera en fila : ', '27.28 Minutos')
('15:03:56', '-- Sale del cajero: ', 'Tipo 2-111')
('---Tiempo de uso --: ', '02.11 Minutos')
('---Usando finalmente ATM--: ', '112')
('Tiempo de espera en fila : ', '29.09 Minutos')
('15:05:50', 'Llega al cajero ', 'Tipo 2-130')
('15:05:50', '--Cantidad de personas en fila: ', 18)
('15:06:19', '-- Sale del cajero: ', 'Tipo 2-112')
('---Tiempo de uso --: ', '02.23 Minutos')
('---Usando finalmente ATM--: ', '113')
('Tiempo de espera en fila : ', '26.23 Minutos')
('15:07:56', '-- Sale del cajero: ', 'Tipo 2-113')
('---Tiempo de uso --: ', '01.36 Minutos')
('---Usando finalmente ATM--: ', '114')
('Tiempo de espera en fila : ', '27.13 Minutos')
('15:14:13', '-- Sale del cajero: ', 'Tipo 1-114')
('---Tiempo de uso --: ', '06.17 Minutos')
('---Usando finalmente ATM--: ', '115')
('Tiempo de espera en fila : ', '33.14 Minutos')
('15:15:44', '-- Sale del cajero: ', 'Tipo 2-115')
('---Tiempo de uso --: ', '01.30 Minutos')
('---Usando finalmente ATM--: ', '116')
('Tiempo de espera en fila : ', '31.10 Minutos')
('15:17:46', 'Llega al cajero ', 'Tipo 2-131')
('15:17:46', '--Cantidad de personas en fila: ', 15)
('15:18:31', '-- Sale del cajero: ', 'Tipo 2-116')
('---Tiempo de uso --: ', '02.47 Minutos')
('---Usando finalmente ATM--: ', '117')
('Tiempo de espera en fila : ', '29.52 Minutos')
('15:21:28', '-- Sale del cajero: ', 'Tipo 2-117')
('---Tiempo de uso --: ', '02.56 Minutos')
('---Usando finalmente ATM--: ', '118')
('Tiempo de espera en fila : ', '31.37 Minutos')
('15:23:50', '-- Sale del cajero: ', 'Tipo 2-118')
('---Tiempo de uso --: ', '02.21 Minutos')
('---Usando finalmente ATM--: ', '119')
('Tiempo de espera en fila : ', '31.54 Minutos')
('15:24:44', 'Llega al cajero ', 'Tipo 2-132')
('15:24:44', '--Cantidad de personas en fila: ', 13)
('15:26:03', '-- Sale del cajero: ', 'Tipo 2-119')
('---Tiempo de uso --: ', '02.13 Minutos')
('---Usando finalmente ATM--: ', '120')
('Tiempo de espera en fila : ', '34.01 Minutos')
('15:27:31', '-- Sale del cajero: ', 'Tipo 2-120')
('---Tiempo de uso --: ', '01.27 Minutos')
('---Usando finalmente ATM--: ', '121')
('Tiempo de espera en fila : ', '33.44 Minutos')
('15:28:39', '-- Sale del cajero: ', 'Tipo 2-121')
('---Tiempo de uso --: ', '01.08 Minutos')
('---Usando finalmente ATM--: ', '122')
('Tiempo de espera en fila : ', '33.24 Minutos')
('15:30:08', '-- Sale del cajero: ', 'Tipo 2-122')
('---Tiempo de uso --: ', '01.28 Minutos')
('---Usando finalmente ATM--: ', '123')
('Tiempo de espera en fila : ', '34.07 Minutos')
('15:32:42', '-- Sale del cajero: ', 'Tipo 3-123')
('---Tiempo de uso --: ', '02.34 Minutos')
('---Usando finalmente ATM--: ', '124')
('Tiempo de espera en fila : ', '36.37 Minutos')
('15:34:01', '-- Sale del cajero: ', 'Tipo 2-124')
('---Tiempo de uso --: ', '01.18 Minutos')
('---Usando finalmente ATM--: ', '125')
('Tiempo de espera en fila : ', '37.40 Minutos')
('15:35:27', '-- Sale del cajero: ', 'Tipo 2-125')
('---Tiempo de uso --: ', '01.26 Minutos')
('---Usando finalmente ATM--: ', '126')
('Tiempo de espera en fila : ', '36.01 Minutos')
('15:35:35', 'Llega al cajero ', 'Tipo 2-133')
('15:35:35', '--Cantidad de personas en fila: ', 7)
('15:37:05', 'Llega al cajero ', 'Tipo 2-134')
('15:37:05', '--Cantidad de personas en fila: ', 8)
('15:38:07', '-- Sale del cajero: ', 'Tipo 2-126')
('---Tiempo de uso --: ', '02.40 Minutos')
('---Usando finalmente ATM--: ', '127')
('Tiempo de espera en fila : ', '38.23 Minutos')
('15:39:13', '-- Sale del cajero: ', 'Tipo 2-127')
('---Tiempo de uso --: ', '01.06 Minutos')
('---Usando finalmente ATM--: ', '128')
('Tiempo de espera en fila : ', '39.16 Minutos')
('15:40:45', '-- Sale del cajero: ', 'Tipo 3-128')
('---Tiempo de uso --: ', '01.31 Minutos')
('---Usando finalmente ATM--: ', '129')
('Tiempo de espera en fila : ', '40.27 Minutos')
('15:43:27', '-- Sale del cajero: ', 'Tipo 2-129')
('---Tiempo de uso --: ', '02.42 Minutos')
('---Usando finalmente ATM--: ', '130')
('Tiempo de espera en fila : ', '37.36 Minutos')
('15:45:01', 'Llega al cajero ', 'Tipo 2-135')
('15:45:01', '--Cantidad de personas en fila: ', 5)
('15:45:42', '-- Sale del cajero: ', 'Tipo 2-130')
('---Tiempo de uso --: ', '02.14 Minutos')
('---Usando finalmente ATM--: ', '131')
('Tiempo de espera en fila : ', '27.55 Minutos')
('15:46:51', '-- Sale del cajero: ', 'Tipo 2-131')
('---Tiempo de uso --: ', '01.09 Minutos')
('---Usando finalmente ATM--: ', '132')
('Tiempo de espera en fila : ', '22.07 Minutos')
('15:49:35', '-- Sale del cajero: ', 'Tipo 2-132')
('---Tiempo de uso --: ', '02.43 Minutos')
('---Usando finalmente ATM--: ', '133')
('Tiempo de espera en fila : ', '13.59 Minutos')
('15:52:08', 'Llega al cajero ', 'Tipo 3-136')
('15:52:08', '--Cantidad de personas en fila: ', 3)
('15:52:17', '-- Sale del cajero: ', 'Tipo 2-133')
('---Tiempo de uso --: ', '02.42 Minutos')
('---Usando finalmente ATM--: ', '134')
('Tiempo de espera en fila : ', '15.12 Minutos')
('15:53:21', '-- Sale del cajero: ', 'Tipo 2-134')
('---Tiempo de uso --: ', '01.03 Minutos')
('---Usando finalmente ATM--: ', '135')
('Tiempo de espera en fila : ', '08.19 Minutos')
('15:55:16', '-- Sale del cajero: ', 'Tipo 2-135')
('---Tiempo de uso --: ', '01.54 Minutos')
('---Usando finalmente ATM--: ', '136')
('Tiempo de espera en fila : ', '03.07 Minutos')
('15:55:41', 'Llega al cajero ', 'Tipo 3-137')
('15:55:41', '--Cantidad de personas en fila: ', 1)
('15:55:52', 'Llega al cajero ', 'Tipo 2-138')
('15:55:52', '--Cantidad de personas en fila: ', 2)
('15:58:50', 'Llega al cajero ', 'Tipo 3-139')
('15:58:50', '--Cantidad de personas en fila: ', 3)
('16:00:01', '-- Sale del cajero: ', 'Tipo 3-136')
('---Tiempo de uso --: ', '04.45 Minutos')
('---Usando finalmente ATM--: ', '137')
('Tiempo de espera en fila : ', '04.20 Minutos')
('16:05:01', '-- Sale del cajero: ', 'Tipo 3-137')
('---Tiempo de uso --: ', '04.59 Minutos')
('---Usando finalmente ATM--: ', '138')
('Tiempo de espera en fila : ', '09.08 Minutos')
('16:06:22', '-- Sale del cajero: ', 'Tipo 2-138')
('---Tiempo de uso --: ', '01.20 Minutos')
('---Usando finalmente ATM--: ', '139')
('Tiempo de espera en fila : ', '07.31 Minutos')
('16:07:57', 'Llega al cajero ', 'Tipo 2-140')
('16:07:57', '--Cantidad de personas en fila: ', 1)
('16:08:12', 'Llega al cajero ', 'Tipo 3-141')
('16:08:12', '--Cantidad de personas en fila: ', 2)
('16:09:16', '-- Sale del cajero: ', 'Tipo 3-139')
('---Tiempo de uso --: ', '02.54 Minutos')
('---Usando finalmente ATM--: ', '140')
('Tiempo de espera en fila : ', '01.19 Minutos')
('16:11:42', '-- Sale del cajero: ', 'Tipo 2-140')
('---Tiempo de uso --: ', '02.25 Minutos')
('---Usando finalmente ATM--: ', '141')
('Tiempo de espera en fila : ', '03.30 Minutos')
('16:13:38', '-- Sale del cajero: ', 'Tipo 3-141')
('---Tiempo de uso --: ', '01.55 Minutos')
('16:25:37', 'Llega al cajero ', 'Tipo 3-142')
('16:25:37', '--Cantidad de personas en fila: ', 0)
('16:27:07', '-- Sale del cajero: ', 'Tipo 3-142')
('---Tiempo de uso --: ', '01.30 Minutos')
('16:28:49', 'Llega al cajero ', 'Tipo 3-143')
('16:28:49', '--Cantidad de personas en fila: ', 0)
('16:28:59', 'Llega al cajero ', 'Tipo 2-144')
('16:28:59', '--Cantidad de personas en fila: ', 1)
('16:30:12', 'Llega al cajero ', 'Tipo 2-145')
('16:30:12', '--Cantidad de personas en fila: ', 2)
('16:31:16', 'Llega al cajero ', 'Tipo 3-146')
('16:31:16', '--Cantidad de personas en fila: ', 3)
('16:32:32', '-- Sale del cajero: ', 'Tipo 3-143')
('---Tiempo de uso --: ', '03.43 Minutos')
('---Usando finalmente ATM--: ', '144')
('Tiempo de espera en fila : ', '03.33 Minutos')
('16:35:17', '-- Sale del cajero: ', 'Tipo 2-144')
('---Tiempo de uso --: ', '02.45 Minutos')
('---Usando finalmente ATM--: ', '145')
('Tiempo de espera en fila : ', '05.05 Minutos')
('16:37:55', 'Llega al cajero ', 'Tipo 2-147')
('16:37:55', '--Cantidad de personas en fila: ', 2)
('16:38:07', '-- Sale del cajero: ', 'Tipo 2-145')
('---Tiempo de uso --: ', '02.49 Minutos')
('---Usando finalmente ATM--: ', '146')
('Tiempo de espera en fila : ', '06.50 Minutos')
('16:40:50', 'Llega al cajero ', 'Tipo 2-148')
('16:40:50', '--Cantidad de personas en fila: ', 2)
('16:40:57', '-- Sale del cajero: ', 'Tipo 3-146')
('---Tiempo de uso --: ', '02.49 Minutos')
('---Usando finalmente ATM--: ', '147')
('Tiempo de espera en fila : ', '03.01 Minutos')
('16:42:57', '-- Sale del cajero: ', 'Tipo 2-147')
('---Tiempo de uso --: ', '02.00 Minutos')
('---Usando finalmente ATM--: ', '148')
('Tiempo de espera en fila : ', '02.07 Minutos')
('16:45:11', '-- Sale del cajero: ', 'Tipo 2-148')
('---Tiempo de uso --: ', '02.14 Minutos')
('16:48:10', 'Llega al cajero ', 'Tipo 1-149')
('16:48:10', '--Cantidad de personas en fila: ', 0)
('16:53:04', 'Llega al cajero ', 'Tipo 1-150')
('16:53:04', '--Cantidad de personas en fila: ', 1)
('16:54:11', '-- Sale del cajero: ', 'Tipo 1-149')
('---Tiempo de uso --: ', '06.01 Minutos')
('---Usando finalmente ATM--: ', '150')
('Tiempo de espera en fila : ', '01.07 Minutos')
('16:56:56', 'Llega al cajero ', 'Tipo 2-151')
('16:56:56', '--Cantidad de personas en fila: ', 1)
('16:56:57', 'Llega al cajero ', 'Tipo 2-152')
('16:56:57', '--Cantidad de personas en fila: ', 2)
('16:58:30', '-- Sale del cajero: ', 'Tipo 1-150')
('---Tiempo de uso --: ', '04.18 Minutos')
('---Usando finalmente ATM--: ', '151')
('Tiempo de espera en fila : ', '01.34 Minutos')
('17:00:33', '-- Sale del cajero: ', 'Tipo 2-151')
('---Tiempo de uso --: ', '02.03 Minutos')
('---Usando finalmente ATM--: ', '152')
('Tiempo de espera en fila : ', '03.36 Minutos')
('17:00:46', 'Llega al cajero ', 'Tipo 3-153')
('17:00:46', '--Cantidad de personas en fila: ', 1)
('17:01:42', '-- Sale del cajero: ', 'Tipo 2-152')
('---Tiempo de uso --: ', '01.09 Minutos')
('---Usando finalmente ATM--: ', '153')
('Tiempo de espera en fila : ', '00.56 Minutos')
('17:04:30', 'Llega al cajero ', 'Tipo 3-154')
('17:04:30', '--Cantidad de personas en fila: ', 1)
('17:06:05', '-- Sale del cajero: ', 'Tipo 3-153')
('---Tiempo de uso --: ', '04.22 Minutos')
('---Usando finalmente ATM--: ', '154')
('Tiempo de espera en fila : ', '01.34 Minutos')
('17:10:58', '-- Sale del cajero: ', 'Tipo 3-154')
('---Tiempo de uso --: ', '04.52 Minutos')
('17:16:01', 'Llega al cajero ', 'Tipo 2-155')
('17:16:01', '--Cantidad de personas en fila: ', 0)
('17:17:37', '-- Sale del cajero: ', 'Tipo 2-155')
('---Tiempo de uso --: ', '01.36 Minutos')
('17:18:45', 'Llega al cajero ', 'Tipo 2-156')
('17:18:45', '--Cantidad de personas en fila: ', 0)
('17:21:28', '-- Sale del cajero: ', 'Tipo 2-156')
('---Tiempo de uso --: ', '02.43 Minutos')
('17:22:59', 'Llega al cajero ', 'Tipo 1-157')
('17:22:59', '--Cantidad de personas en fila: ', 0)
('17:27:40', 'Llega al cajero ', 'Tipo 2-158')
('17:27:40', '--Cantidad de personas en fila: ', 1)
('17:29:49', '-- Sale del cajero: ', 'Tipo 1-157')
('---Tiempo de uso --: ', '06.49 Minutos')
('---Usando finalmente ATM--: ', '158')
('Tiempo de espera en fila : ', '02.09 Minutos')
('17:29:58', 'Llega al cajero ', 'Tipo 2-159')
('17:29:58', '--Cantidad de personas en fila: ', 1)
('17:32:07', 'Llega al cajero ', 'Tipo 2-160')
('17:32:07', '--Cantidad de personas en fila: ', 2)
('17:32:39', '-- Sale del cajero: ', 'Tipo 2-158')
('---Tiempo de uso --: ', '02.50 Minutos')
('---Usando finalmente ATM--: ', '159')
('Tiempo de espera en fila : ', '02.41 Minutos')
('17:32:41', 'Llega al cajero ', 'Tipo 2-161')
('17:32:41', '--Cantidad de personas en fila: ', 2)
('17:33:25', 'Llega al cajero ', 'Tipo 3-162')
('17:33:25', '--Cantidad de personas en fila: ', 3)
('17:34:13', '-- Sale del cajero: ', 'Tipo 2-159')
('---Tiempo de uso --: ', '01.33 Minutos')
('---Usando finalmente ATM--: ', '160')
('Tiempo de espera en fila : ', '02.06 Minutos')
('17:36:27', 'Llega al cajero ', 'Tipo 2-163')
('17:36:27', '--Cantidad de personas en fila: ', 3)
('17:36:49', '-- Sale del cajero: ', 'Tipo 2-160')
('---Tiempo de uso --: ', '02.35 Minutos')
('---Usando finalmente ATM--: ', '161')
('Tiempo de espera en fila : ', '04.07 Minutos')
('17:38:00', '-- Sale del cajero: ', 'Tipo 2-161')
('---Tiempo de uso --: ', '01.11 Minutos')
('---Usando finalmente ATM--: ', '162')
('Tiempo de espera en fila : ', '04.35 Minutos')
('17:39:17', '-- Sale del cajero: ', 'Tipo 3-162')
('---Tiempo de uso --: ', '01.17 Minutos')
('---Usando finalmente ATM--: ', '163')
('Tiempo de espera en fila : ', '02.50 Minutos')
('17:40:55', '-- Sale del cajero: ', 'Tipo 2-163')
('---Tiempo de uso --: ', '01.37 Minutos')
('17:46:44', 'Llega al cajero ', 'Tipo 2-164')
('17:46:44', '--Cantidad de personas en fila: ', 0)
('17:47:56', '-- Sale del cajero: ', 'Tipo 2-164')
('---Tiempo de uso --: ', '01.12 Minutos')
('18:08:38', 'Llega al cajero ', 'Tipo 2-165')
('18:08:38', '--Cantidad de personas en fila: ', 0)
('18:10:43', '-- Sale del cajero: ', 'Tipo 2-165')
('---Tiempo de uso --: ', '02.05 Minutos')
('18:12:43', 'Llega al cajero ', 'Tipo 2-166')
('18:12:43', '--Cantidad de personas en fila: ', 0)
('18:14:48', '-- Sale del cajero: ', 'Tipo 2-166')
('---Tiempo de uso --: ', '02.04 Minutos')
('18:15:54', 'Llega al cajero ', 'Tipo 2-167')
('18:15:54', '--Cantidad de personas en fila: ', 0)
('18:18:08', '-- Sale del cajero: ', 'Tipo 2-167')
('---Tiempo de uso --: ', '02.13 Minutos')
('18:24:39', 'Llega al cajero ', 'Tipo 1-168')
('18:24:39', '--Cantidad de personas en fila: ', 0)
('18:25:18', 'Llega al cajero ', 'Tipo 1-169')
('18:25:18', '--Cantidad de personas en fila: ', 1)
('18:25:59', 'Llega al cajero ', 'Tipo 3-170')
('18:25:59', '--Cantidad de personas en fila: ', 2)
('18:31:12', '-- Sale del cajero: ', 'Tipo 1-168')
('---Tiempo de uso --: ', '06.33 Minutos')
('---Usando finalmente ATM--: ', '169')
('Tiempo de espera en fila : ', '05.54 Minutos')
('18:32:54', '-- Sale del cajero: ', 'Tipo 1-169')
('---Tiempo de uso --: ', '01.41 Minutos')
('---Usando finalmente ATM--: ', '170')
('Tiempo de espera en fila : ', '06.54 Minutos')
('18:33:34', 'Llega al cajero ', 'Tipo 2-171')
('18:33:34', '--Cantidad de personas en fila: ', 1)
('18:37:46', '-- Sale del cajero: ', 'Tipo 3-170')
('---Tiempo de uso --: ', '04.51 Minutos')
('---Usando finalmente ATM--: ', '171')
('Tiempo de espera en fila : ', '04.11 Minutos')
('18:39:21', '-- Sale del cajero: ', 'Tipo 2-171')
('---Tiempo de uso --: ', '01.35 Minutos')
('18:52:51', 'Llega al cajero ', 'Tipo 2-172')
('18:52:51', '--Cantidad de personas en fila: ', 0)
('18:54:12', 'Llega al cajero ', 'Tipo 1-173')
('18:54:12', '--Cantidad de personas en fila: ', 1)
('18:55:20', 'Llega al cajero ', 'Tipo 3-174')
('18:55:20', '--Cantidad de personas en fila: ', 2)
('18:55:43', '-- Sale del cajero: ', 'Tipo 2-172')
('---Tiempo de uso --: ', '02.52 Minutos')
('---Usando finalmente ATM--: ', '173')
('Tiempo de espera en fila : ', '01.30 Minutos')
('18:56:58', 'Llega al cajero ', 'Tipo 2-175')
('18:56:58', '--Cantidad de personas en fila: ', 2)
('18:58:18', '-- Sale del cajero: ', 'Tipo 1-173')
('---Tiempo de uso --: ', '02.34 Minutos')
('---Usando finalmente ATM--: ', '174')
('Tiempo de espera en fila : ', '02.58 Minutos')
--------------------Banco Cerrado :( ----------------------------
-----------------------RESULTADOS-------------------------------
(' CANTIDAD MAXIMA DE PERSONAS EN FILA: ', 19)
(' TIEMPO DE MAXIMA DE PERSONAS: ', '40.27 Minutos')
|
Econometrics - Submission 1.ipynb | ###Markdown
0. ETF Selection

We select the SPDR Gold Shares (GLD) ETF as the gold ETF. It is traded on Nasdaq; the currency is USD. Similarly, we choose the Amundi CAC 40 UCITS ETF-C (C40.PA) as the equity ETF, which tracks the CAC 40 index of France. It is traded on Euronext Paris; the currency is EUR.

Data source: https://finance.yahoo.com/

1. Data Importing
###Code
import arch
import holidays
import pandas as pd
import numpy as np
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.tsa.arima.model import ARIMA
from scipy import stats
from datetime import datetime
from nelson_siegel_svensson import NelsonSiegelSvenssonCurve, NelsonSiegelCurve
from nelson_siegel_svensson.calibrate import calibrate_ns_ols, calibrate_nss_ols
%matplotlib inline
gold_df = pd.read_csv("data/SPDR_Gold_Shares_USD.csv")
equity_df = pd.read_csv("data/C40.PA.csv")
treasury_Yield_df = pd.read_csv('data/Treasury_Yield.csv')
###Output
_____no_output_____
###Markdown
Convert the Date column to datetime format and set it as the index to make querying the dataframes easier.
###Code
gold_df["Date"] = pd.to_datetime(gold_df["Date"], format="%Y-%m-%d")
gold_df.set_index("Date", inplace=True)
equity_df["Date"] = pd.to_datetime(equity_df["Date"], format="%Y-%m-%d")
equity_df.set_index("Date", inplace=True)
###Output
_____no_output_____
###Markdown
Verify that the time range is correct.
###Code
treasury_Yield_df.head()
treasury_Yield_df.tail()
gold_df.head()
gold_df.tail()
equity_df.head()
equity_df.tail()
###Output
_____no_output_____
###Markdown
One notable difference between the gold and equity prices is that we have prices for the gold ETF every day of the week, while we don't have prices for the equity ETF on weekends (Saturday and Sunday). To make the analysis comparable, we will drop the gold ETF prices for Saturday and Sunday before any further preprocessing and analysis. Another difference is that November 28, 2019 is a bank holiday in the US market, so we don't have gold ETF data for that day. To be able to calculate the Pearson correlation, we will also drop that day from the equity data so that the two time series have the same length.
###Code
gold_df = gold_df[gold_df.index.dayofweek < 5]
gold_df.shape
equity_df = equity_df[equity_df.index != "2019-11-28"]
equity_df.shape
###Output
_____no_output_____
###Markdown
2. Data Processing

We use adjusted close prices to calculate the daily returns. Adjusted close prices already account for stock splits and dividends, so they reflect the change in prices more accurately.
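For reference, `pct_change(1)` computes the simple one-day return from adjacent adjusted close prices,

$$ r_t = \frac{P_t - P_{t-1}}{P_{t-1}} = \frac{P_t}{P_{t-1}} - 1, $$

where $P_t$ denotes the adjusted close on day $t$.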
###Code
gold_df["Daily Return"] = gold_df["Adj Close"].pct_change(1)
gold_df.head()
equity_df["Daily Return"] = equity_df["Adj Close"].pct_change(1)
equity_df.head()
###Output
_____no_output_____
###Markdown
3. Data Summaries

The value at 2019-10-31 is the statistic for the whole of October; likewise, the value at 2019-11-30 is the statistic for November. The daily high minus low and the statistics required in Part 7 are also presented here.
###Code
# 3.1
df_Oct = treasury_Yield_df[treasury_Yield_df['Date'].str.contains("Oct")]
average_yield_Oct = np.mean(df_Oct)
print("Average October Yield is \n{}\n".format(average_yield_Oct))
df_Nov = treasury_Yield_df[treasury_Yield_df['Date'].str.contains("Nov")]
average_yield_Nov = np.mean(df_Nov)
print("Average November Yield is \n{}".format(average_yield_Nov))
st_dev_Oct = np.std(df_Oct)
st_dev_NoV = np.std(df_Nov)
print("Standard Deviation for October Yield is \n{}\n".format(st_dev_Oct))
print("Standard Deviation for November Yield is \n{}".format(st_dev_NoV))
gold_df["High minus low"] = gold_df["High"] - gold_df["Low"]
equity_df["High minus low"] = equity_df["High"] - equity_df["Low"]
gold_df.resample('M').mean()
equity_df.resample('M').mean()
gold_df.resample('M').std()
equity_df.resample('M').std()
###Output
_____no_output_____
###Markdown
4. Graphing
###Code
treasury_Yield_df.set_index('Date').plot(figsize=(10,5), grid=True)
plt.figure(figsize=(12,5))
plt.title('The prices of gold ETF (in USD) and equity ETF (in EUR) in October and November 2019')
ax1 = gold_df["Adj Close"].plot(color='blue', grid=True, label='gold ETF')
ax2 = equity_df["Adj Close"].plot(color='red', grid=True, secondary_y=True, label='equity ETF')
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
plt.legend(h1+h2, l1+l2, loc=2)
plt.show()
###Output
_____no_output_____
###Markdown
5. Fitting the yield curve
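For context, the parameters reported below (beta0 through beta3, tau1, tau2) refer to the standard Nelson-Siegel-Svensson specification of the yield curve; this is the textbook form of the model, stated here for reference rather than taken from the notebook:

$$ y(t) = \beta_0 + \beta_1 \frac{1 - e^{-t/\tau_1}}{t/\tau_1} + \beta_2 \left( \frac{1 - e^{-t/\tau_1}}{t/\tau_1} - e^{-t/\tau_1} \right) + \beta_3 \left( \frac{1 - e^{-t/\tau_2}}{t/\tau_2} - e^{-t/\tau_2} \right). $$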
###Code
t =np.array([2.0,3.0,5.0,7.0,10.0,30.0])
# average yield rate for October
y = np.array([1.551385,1.527154,1.525115,1.614000,1.701423,2.187269])
curve_fit, status = calibrate_nss_ols(t,y)
NSS_ZC = NelsonSiegelSvenssonCurve.zero(curve_fit,t)
NSS_ZC
Oct_curve, status = calibrate_nss_ols(t,NSS_ZC)
assert status.success
print(Oct_curve)
t = np.linspace(0,20,100)
plt.plot(t,Oct_curve(t))
plt.show()
# average yield rate for November
t =np.array([2.0,3.0,5.0,7.0,10.0,30.0])
y = np.array([1.616750,1.618042,1.641167,1.736833,1.811625,2.276708])
curve_fit, status = calibrate_nss_ols(t,y)
NSS_ZC = NelsonSiegelSvenssonCurve.zero(curve_fit,t)
NSS_ZC
Nov_curve, status = calibrate_nss_ols(t,NSS_ZC)
assert status.success
print(Nov_curve)
t = np.linspace(0,20,100)
plt.plot(t,Nov_curve(t))
plt.show()
###Output
NelsonSiegelSvenssonCurve(beta0=2.6460791130382035, beta1=-0.7093417256161145, beta2=-1.3222551460581986, beta3=-2.0635765939331834, tau1=1.623812678632114, tau2=5.1802674914969495)
###Markdown
6. Modelling Prices
###Code
def get_data(df, month, column):
return df[(df.index >= f"2019-{month:02d}-01") & (df.index < f"2019-{(month+1):02d}-01")][column]
###Output
_____no_output_____
###Markdown
The ARMA model is a special case of the ARIMA model with d = 0 (no differencing), which allows us to use the ARIMA implementation here.
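For reference, an ARMA(p, q) process models the series as

$$ X_t = c + \sum_{i=1}^{p} \phi_i X_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j}, $$

so ARIMA(p, 0, q) is exactly ARMA(p, q); the cell below fits ARIMA(3, 0, 3).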
###Code
def fit_arima(data):
model = ARIMA(data, order=(3,0,3))
model_fit = model.fit()
print(model_fit.summary())
residuals = pd.DataFrame(model_fit.resid)
ax1 = residuals.plot(label='residual')
plt.title("Residuals during the month")
ax1.get_legend().remove()
plt.show()
ax2 = residuals.plot(kind='kde')
plt.title("Kernel density estimation of the residuals")
ax2.get_legend().remove()
plt.show()
df_name = {0: "gold ETF", 1: "equity ETF"}
month_name = {10: "October", 11: "November"}
for index, df in enumerate([gold_df, equity_df]):
for month in [10, 11]:
print("-" * 78)
print("-" * 78)
print("-" * 78)
print(f"ARMA model for {df_name[index]} in {month_name[month]}")
data = get_data(df, month, "Adj Close")
fit_arima(data)
###Output
------------------------------------------------------------------------------
------------------------------------------------------------------------------
------------------------------------------------------------------------------
ARMA model for gold ETF in October
###Markdown
7. Modelling Volatility

The high minus low of the ETFs' prices and their averages, as well as the standard deviations of returns, are presented in Part 3.
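For reference (standard GARCH notation, stated here rather than taken from the notebook), the constant-mean GARCH(1,1) model fitted below specifies

$$ r_t = \mu + \varepsilon_t, \qquad \sigma_t^2 = \omega + \alpha_1 \varepsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2, $$

which corresponds to the `mu`, `omega`, `alpha[1]` and `beta[1]` coefficients reported in the output.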
###Code
def fit_garch(data):
garch = arch.arch_model(data, vol='garch', p=1, o=0, q=1)
garch_fitted = garch.fit()
print(garch_fitted.summary())
for index, df in enumerate([gold_df, equity_df]):
for month in [10, 11]:
print("-" * 78)
print("-" * 78)
print("-" * 78)
print(f"GARCH model for {df_name[index]} in {month_name[month]}")
data = get_data(df, month, "Daily Return")
data = data.dropna()
fit_garch(data)
###Output
------------------------------------------------------------------------------
------------------------------------------------------------------------------
------------------------------------------------------------------------------
GARCH model for gold ETF in October
Iteration: 1, Func. Count: 6, Neg. LLF: 2211987.449503363
Iteration: 2, Func. Count: 16, Neg. LLF: -81.257589920448
Optimization terminated successfully (Exit mode 0)
Current function value: -81.25758995275967
Iterations: 6
Function evaluations: 16
Gradient evaluations: 2
Constant Mean - GARCH Model Results
==============================================================================
Dep. Variable: Daily Return R-squared: -0.000
Mean Model: Constant Mean Adj. R-squared: -0.000
Vol Model: GARCH Log-Likelihood: 81.2576
Distribution: Normal AIC: -154.515
Method: Maximum Likelihood BIC: -150.151
No. Observations: 22
Date: Tue, Jan 12 2021 Df Residuals: 18
Time: 19:31:08 Df Model: 4
Mean Model
=============================================================================
coef std err t P>|t| 95.0% Conf. Int.
-----------------------------------------------------------------------------
mu 8.7955e-04 1.323e-03 0.665 0.506 [-1.714e-03,3.473e-03]
Volatility Model
=============================================================================
coef std err t P>|t| 95.0% Conf. Int.
-----------------------------------------------------------------------------
omega 1.0914e-05 5.820e-10 1.875e+04 0.000 [1.091e-05,1.092e-05]
alpha[1] 1.0000e-02 1.941e-02 0.515 0.606 [-2.805e-02,4.805e-02]
beta[1] 0.6900 7.299e-02 9.454 3.273e-21 [ 0.547, 0.833]
=============================================================================
Covariance estimator: robust
------------------------------------------------------------------------------
------------------------------------------------------------------------------
------------------------------------------------------------------------------
GARCH model for gold ETF in November
Iteration: 1, Func. Count: 6, Neg. LLF: 12684817.542295108
Iteration: 2, Func. Count: 16, Neg. LLF: -73.97790009442136
Optimization terminated successfully (Exit mode 0)
Current function value: -73.9779001217992
Iterations: 6
Function evaluations: 16
Gradient evaluations: 2
Constant Mean - GARCH Model Results
==============================================================================
Dep. Variable: Daily Return R-squared: -0.000
Mean Model: Constant Mean Adj. R-squared: -0.000
Vol Model: GARCH Log-Likelihood: 73.9779
Distribution: Normal AIC: -139.956
Method: Maximum Likelihood BIC: -135.973
No. Observations: 20
Date: Tue, Jan 12 2021 Df Residuals: 16
Time: 19:31:08 Df Model: 4
Mean Model
===============================================================================
coef std err t P>|t| 95.0% Conf. Int.
-------------------------------------------------------------------------------
mu -1.4838e-03 7.497e-07 -1979.293 0.000 [-1.485e-03,-1.482e-03]
Volatility Model
============================================================================
coef std err t P>|t| 95.0% Conf. Int.
----------------------------------------------------------------------------
omega 1.0913e-05 4.658e-11 2.343e+05 0.000 [1.091e-05,1.091e-05]
alpha[1] 0.0500 0.274 0.183 0.855 [ -0.487, 0.587]
beta[1] 0.6500 0.237 2.741 6.122e-03 [ 0.185, 1.115]
============================================================================
Covariance estimator: robust
------------------------------------------------------------------------------
------------------------------------------------------------------------------
------------------------------------------------------------------------------
GARCH model for equity ETF in October
Iteration: 1, Func. Count: 5, Neg. LLF: -72.84794178149747
Optimization terminated successfully (Exit mode 0)
Current function value: -72.84794218002007
Iterations: 5
Function evaluations: 5
Gradient evaluations: 1
Constant Mean - GARCH Model Results
==============================================================================
Dep. Variable: Daily Return R-squared: -0.000
Mean Model: Constant Mean Adj. R-squared: -0.000
Vol Model: GARCH Log-Likelihood: 72.8479
Distribution: Normal AIC: -137.696
Method: Maximum Likelihood BIC: -133.332
No. Observations: 22
Date: Tue, Jan 12 2021 Df Residuals: 18
Time: 19:31:08 Df Model: 4
Mean Model
============================================================================
coef std err t P>|t| 95.0% Conf. Int.
----------------------------------------------------------------------------
mu 1.2314e-03 2.601e-06 473.508 0.000 [1.226e-03,1.237e-03]
Volatility Model
============================================================================
coef std err t P>|t| 95.0% Conf. Int.
----------------------------------------------------------------------------
omega 1.9376e-06 9.202e-10 2105.643 0.000 [1.936e-06,1.939e-06]
alpha[1] 0.2000 8.072e-02 2.478 1.322e-02 [4.180e-02, 0.358]
beta[1] 0.7800 5.716e-02 13.647 2.108e-42 [ 0.668, 0.892]
============================================================================
Covariance estimator: robust
------------------------------------------------------------------------------
------------------------------------------------------------------------------
------------------------------------------------------------------------------
GARCH model for equity ETF in November
Iteration: 1, Func. Count: 6, Neg. LLF: 197742414.82374087
Iteration: 2, Func. Count: 17, Neg. LLF: -33.194659986845565
Iteration: 3, Func. Count: 25, Neg. LLF: -10.286129970616884
Iteration: 4, Func. Count: 33, Neg. LLF: -82.56739316456598
Iteration: 5, Func. Count: 38, Neg. LLF: 258146278.98570937
Iteration: 6, Func. Count: 49, Neg. LLF: 556727004.1629564
Iteration: 7, Func. Count: 60, Neg. LLF: 405442990.2330078
Iteration: 8, Func. Count: 71, Neg. LLF: 176430175.20465565
Iteration: 9, Func. Count: 82, Neg. LLF: 519.795341797703
Iteration: 10, Func. Count: 88, Neg. LLF: 403929955.2030698
Iteration: 11, Func. Count: 95, Neg. LLF: -82.61557826768482
Optimization terminated successfully (Exit mode 0)
Current function value: -82.61557835134657
Iterations: 15
Function evaluations: 95
Gradient evaluations: 11
Constant Mean - GARCH Model Results
==============================================================================
Dep. Variable: Daily Return R-squared: -0.021
Mean Model: Constant Mean Adj. R-squared: -0.021
Vol Model: GARCH Log-Likelihood: 82.6156
Distribution: Normal AIC: -157.231
Method: Maximum Likelihood BIC: -153.248
No. Observations: 20
Date: Tue, Jan 12 2021 Df Residuals: 16
Time: 19:31:08 Df Model: 4
Mean Model
============================================================================
coef std err t P>|t| 95.0% Conf. Int.
----------------------------------------------------------------------------
mu 2.0804e-03 3.332e-08 6.243e+04 0.000 [2.080e-03,2.080e-03]
Volatility Model
============================================================================
coef std err t P>|t| 95.0% Conf. Int.
----------------------------------------------------------------------------
omega 6.9482e-06 4.271e-11 1.627e+05 0.000 [6.948e-06,6.948e-06]
alpha[1] 0.7206 0.346 2.083 3.724e-02 [4.259e-02, 1.399]
beta[1] 0.0120 9.873e-02 0.122 0.903 [ -0.181, 0.206]
============================================================================
Covariance estimator: robust
###Markdown
8. Correlation
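For reference, `scipy.stats.pearsonr` returns the sample Pearson correlation coefficient

$$ r = \frac{\sum_t (x_t - \bar{x})(y_t - \bar{y})}{\sqrt{\sum_t (x_t - \bar{x})^2}\,\sqrt{\sum_t (y_t - \bar{y})^2}}, $$

computed here on the two daily-return series.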
###Code
corr_oct = stats.pearsonr(gold_df[("2019-10-01" < gold_df.index) & (gold_df.index < "2019-11-01")]["Daily Return"], equity_df[("2019-10-01" < equity_df.index) & (equity_df.index < "2019-11-01")]["Daily Return"])[0]
print(f"The correlation of gold and equity ETFs in October is {corr_oct}")
corr_nov = stats.pearsonr(gold_df[gold_df.index >= "2019-11-01"]["Daily Return"], equity_df[equity_df.index >= "2019-11-01"]["Daily Return"])[0]
print(f"The correlation of gold and equity ETFs in November is {corr_nov}")
###Output
The correlation of gold and equity ETFs in November is -0.4119305823448921
###Markdown
1. Data Importing
###Code
import arch
import holidays
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from scipy import stats
from matplotlib import pyplot
gold_df = pd.read_csv("data/SPDR_Gold_Shares_USD.csv")
equity_df = pd.read_csv("data/CAC40_EUR.csv")
gold_df["Date"] = pd.to_datetime(gold_df["Date"], format="%Y-%m-%d")
gold_df.set_index("Date", inplace=True)
equity_df["Date"] = pd.to_datetime(equity_df["Date"], format="%Y-%m-%d")
equity_df.set_index("Date", inplace=True)
gold_df.head()
gold_df.tail()
equity_df.head()
equity_df.tail()
###Output
_____no_output_____
###Markdown
One notable difference between gold and equity prices is that we have prices for the gold ETF every day of the week, while we don't have prices for the equity ETF on weekends (Saturday and Sunday). To make the analysis comparable, we will drop the gold ETF prices for Saturday and Sunday before any further preprocessing and analysis. The ETF that tracks the CAC 40, France's index, traded every weekday in October and November 2019, so dropping the gold ETF's weekend prices is enough to make the data comparable. November 28, 2019 is a bank holiday in the US market, so that day is also dropped from the equity data to keep the two series the same length.
###Code
gold_df = gold_df[gold_df.index.dayofweek < 5]
gold_df.shape
equity_df = equity_df[equity_df.index != "2019-11-28"]
equity_df.shape
###Output
_____no_output_____
###Markdown
2. Data Processing Daily returns of the gold and equity ETFs
###Code
gold_df["Daily Return"] = gold_df["Adj Close"].pct_change(1)
gold_df.head()
equity_df["Daily Return"] = equity_df["Adj Close"].pct_change(1)
equity_df.head()
###Output
_____no_output_____
###Markdown
3. Data Summaries
###Code
gold_df.resample('M').mean()
equity_df.resample('M').mean()
gold_df.resample('M').std()
equity_df.resample('M').std()
###Output
_____no_output_____
###Markdown
6. Modelling Prices
###Code
# fit model
model = ARIMA(gold_df[gold_df.index < "2019-11-01"]["Daily Return"], order=(1,0,1))
model_fit = model.fit()
# summary of fit model
print(model_fit.summary())
# line plot of residuals
residuals = pd.DataFrame(model_fit.resid)
residuals.plot()
pyplot.show()
# density plot of residuals
residuals.plot(kind='kde')
pyplot.show()
# summary stats of residuals
print(residuals.describe())
# fit model
model = ARIMA(gold_df[gold_df.index >= "2019-11-01"]["Daily Return"], order=(1,0,1))
model_fit = model.fit()
# summary of fit model
print(model_fit.summary())
# line plot of residuals
residuals = pd.DataFrame(model_fit.resid)
residuals.plot()
pyplot.show()
# density plot of residuals
residuals.plot(kind='kde')
pyplot.show()
# summary stats of residuals
print(residuals.describe())
###Output
SARIMAX Results
==============================================================================
Dep. Variable: Daily Return No. Observations: 20
Model: ARIMA(1, 0, 1) Log Likelihood 74.368
Date: Sun, 10 Jan 2021 AIC -140.737
Time: 16:43:39 BIC -136.754
Sample: 0 HQIC -139.959
- 20
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const -0.0017 0.002 -1.010 0.313 -0.005 0.002
ar.L1 -0.5260 1.950 -0.270 0.787 -4.347 3.295
ma.L1 0.3870 2.223 0.174 0.862 -3.969 4.743
sigma2 3.439e-05 1.35e-05 2.550 0.011 7.96e-06 6.08e-05
===================================================================================
Ljung-Box (L1) (Q): 0.02 Jarque-Bera (JB): 3.89
Prob(Q): 0.88 Prob(JB): 0.14
Heteroskedasticity (H): 0.26 Skew: -1.05
Prob(H) (two-sided): 0.10 Kurtosis: 3.47
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
###Markdown
7. Modelling Volatility
###Code
gold_df["High minus low"] = gold_df["High"] - gold_df["Low"]
gold_df.resample('M').mean()
garch = arch.arch_model(gold_df[gold_df.index >= "2019-11-01"]["Daily Return"], vol='garch', p=1, o=0, q=1)
garch_fitted = garch.fit()
garch_fitted.summary()
garch = arch.arch_model(equity_df[equity_df.index >= "2019-11-01"]["Daily Return"], vol='garch', p=1, o=0, q=1)
garch_fitted = garch.fit()
garch_fitted.summary()
stats.pearsonr(gold_df[gold_df.index >= "2019-11-01"]["Daily Return"], equity_df[equity_df.index >= "2019-11-01"]["Daily Return"])
stats.pearsonr(gold_df[("2019-10-01" < gold_df.index) & (gold_df.index < "2019-11-01")]["Daily Return"], equity_df[("2019-10-01" < equity_df.index) & (equity_df.index < "2019-11-01")]["Daily Return"])
###Output
_____no_output_____
###Markdown
0. ETF Selection

We select the SPDR Gold Shares (GLD) ETF as the gold ETF. It is traded on Nasdaq, and the currency is USD. Similarly, we choose the Amundi CAC 40 UCITS ETF-C (C40.PA) as the equity ETF; it tracks the CAC 40 index of France. It is traded on Paris Euronext, and the currency is EUR.

Data source: https://finance.yahoo.com/

1. Data Importing
###Code
import arch
import holidays
import pandas as pd
import numpy as np
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.tsa.arima.model import ARIMA
from scipy import stats
from datetime import datetime
from nelson_siegel_svensson import NelsonSiegelSvenssonCurve, NelsonSiegelCurve
from nelson_siegel_svensson.calibrate import calibrate_ns_ols, calibrate_nss_ols
%matplotlib inline
gold_df = pd.read_csv("data/SPDR_Gold_Shares_USD.csv")
equity_df = pd.read_csv("data/C40.PA.csv")
treasury_Yield_df = pd.read_csv('data/Treasury_Yield.csv')
###Output
_____no_output_____
###Markdown
Convert the Date column to datetime format and set it as the index so that the dataframes are easier to query.
###Code
gold_df["Date"] = pd.to_datetime(gold_df["Date"], format="%Y-%m-%d")
gold_df.set_index("Date", inplace=True)
equity_df["Date"] = pd.to_datetime(equity_df["Date"], format="%Y-%m-%d")
equity_df.set_index("Date", inplace=True)
###Output
_____no_output_____
###Markdown
Verify that the time range is correct.
###Code
treasury_Yield_df.head()
treasury_Yield_df.tail()
gold_df.head()
gold_df.tail()
equity_df.head()
equity_df.tail()
###Output
_____no_output_____
###Markdown
One notable difference between the gold and equity prices is that we have prices for the gold ETF every day of the week, while we don't have prices for the equity ETF on weekends (Saturday and Sunday). To make the analysis comparable, we will drop the gold ETF prices for Saturday and Sunday before any further preprocessing and analysis. Another difference is that November 28, 2019 is a bank holiday in the US market, so we don't have gold ETF data for that day. To be able to calculate the Pearson correlation, we will also drop that day from the equity data so that the two time series have the same length.
###Code
gold_df = gold_df[gold_df.index.dayofweek < 5]
gold_df.shape
equity_df = equity_df[equity_df.index != "2019-11-28"]
equity_df.shape
###Output
_____no_output_____
###Markdown
2. Data Processing

We use adjusted close prices to calculate the daily returns. Adjusted close prices already account for stock splits and dividends, so they reflect the change in prices more accurately.
###Code
gold_df["Daily Return"] = gold_df["Adj Close"].pct_change(1)
gold_df.head()
equity_df["Daily Return"] = equity_df["Adj Close"].pct_change(1)
equity_df.head()
###Output
_____no_output_____
###Markdown
3. Data Summaries

The value at 2019-10-31 is the statistic for the whole of October; likewise, the value at 2019-11-30 is the statistic for November. The daily high minus low and the statistics required in Part 7 are also presented here.
###Code
# 3.1
df_Oct = treasury_Yield_df[treasury_Yield_df['Date'].str.contains("Oct")]
average_yield_Oct = np.mean(df_Oct)
print("Average October Yield is \n{}\n".format(average_yield_Oct))
df_Nov = treasury_Yield_df[treasury_Yield_df['Date'].str.contains("Nov")]
average_yield_Nov = np.mean(df_Nov)
print("Average November Yield is \n{}".format(average_yield_Nov))
st_dev_Oct = np.std(df_Oct)
st_dev_NoV = np.std(df_Nov)
print("Standard Deviation for October Yield is \n{}\n".format(st_dev_Oct))
print("Standard Deviation for November Yield is \n{}".format(st_dev_NoV))
gold_df["High minus low"] = gold_df["High"] - gold_df["Low"]
equity_df["High minus low"] = equity_df["High"] - equity_df["Low"]
gold_df.resample('M').mean()
equity_df.resample('M').mean()
gold_df.resample('M').std()
equity_df.resample('M').std()
###Output
_____no_output_____
###Markdown
4. Graphing
###Code
treasury_Yield_df.set_index('Date').plot(figsize=(10,5), grid=True)
plt.figure(figsize=(12,5))
plt.title('The prices of gold ETF (in USD) and equity ETF (in EUR) in October and November 2019')
ax1 = gold_df["Adj Close"].plot(color='blue', grid=True, label='gold ETF')
ax2 = equity_df["Adj Close"].plot(color='red', grid=True, secondary_y=True, label='equity ETF')
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
plt.legend(h1+h2, l1+l2, loc=2)
plt.show()
###Output
_____no_output_____
###Markdown
5. Fitting the yield curve
###Code
t =np.array([2.0,3.0,5.0,7.0,10.0,30.0])
# average yield rate for October
y = np.array([1.551385,1.527154,1.525115,1.614000,1.701423,2.187269])
curve_fit, status = calibrate_nss_ols(t,y)
NSS_ZC = NelsonSiegelSvenssonCurve.zero(curve_fit,t)
NSS_ZC
Oct_curve, status = calibrate_nss_ols(t,NSS_ZC)
assert status.success
print(Oct_curve)
t = np.linspace(0,20,100)
plt.plot(t,Oct_curve(t))
plt.show()
# average yield rate for November
t =np.array([2.0,3.0,5.0,7.0,10.0,30.0])
y = np.array([1.616750,1.618042,1.641167,1.736833,1.811625,2.276708])
curve_fit, status = calibrate_nss_ols(t,y)
NSS_ZC = NelsonSiegelSvenssonCurve.zero(curve_fit,t)
NSS_ZC
Nov_curve, status = calibrate_nss_ols(t,NSS_ZC)
assert status.success
print(Nov_curve)
t = np.linspace(0,20,100)
plt.plot(t,Nov_curve(t))
plt.show()
###Output
NelsonSiegelSvenssonCurve(beta0=2.6460791130382035, beta1=-0.7093417256161145, beta2=-1.3222551460581986, beta3=-2.0635765939331834, tau1=1.623812678632114, tau2=5.1802674914969495)
###Markdown
6. Modelling Prices
###Code
def get_data(df, month, column):
return df[(df.index >= f"2019-{month:02d}-01") & (df.index < f"2019-{(month+1):02d}-01")][column]
###Output
_____no_output_____
###Markdown
The ARMA model is a special case of the ARIMA model with d = 0 (no differencing), which allows us to use the ARIMA implementation here.
###Code
def fit_arima(data):
model = ARIMA(data, order=(3,0,3))
model_fit = model.fit()
print(model_fit.summary())
residuals = pd.DataFrame(model_fit.resid)
ax1 = residuals.plot(label='residual')
plt.title("Residuals during the month")
ax1.get_legend().remove()
plt.show()
ax2 = residuals.plot(kind='kde')
plt.title("Kernel density estimation of the residuals")
ax2.get_legend().remove()
plt.show()
df_name = {0: "gold ETF", 1: "equity ETF"}
month_name = {10: "October", 11: "November"}
for index, df in enumerate([gold_df, equity_df]):
for month in [10, 11]:
print("-" * 78)
print("-" * 78)
print("-" * 78)
print(f"ARMA model for {df_name[index]} in {month_name[month]}")
data = get_data(df, month, "Adj Close")
fit_arima(data)
###Output
------------------------------------------------------------------------------
------------------------------------------------------------------------------
------------------------------------------------------------------------------
ARMA model for gold ETF in October
###Markdown
7. Modelling Volatility

The high minus low of the ETFs' prices and their averages, as well as the standard deviations of returns, are presented in Part 3.
###Code
def fit_garch(data):
garch = arch.arch_model(data, vol='garch', p=1, o=0, q=1)
garch_fitted = garch.fit()
print(garch_fitted.summary())
for index, df in enumerate([gold_df, equity_df]):
for month in [10, 11]:
print("-" * 78)
print("-" * 78)
print("-" * 78)
print(f"GARCH model for {df_name[index]} in {month_name[month]}")
data = get_data(df, month, "Daily Return")
data = data.dropna()
fit_garch(data)
###Output
------------------------------------------------------------------------------
------------------------------------------------------------------------------
------------------------------------------------------------------------------
GARCH model for gold ETF in October
Iteration: 1, Func. Count: 6, Neg. LLF: 2211987.449503363
Iteration: 2, Func. Count: 16, Neg. LLF: -81.257589920448
Optimization terminated successfully (Exit mode 0)
Current function value: -81.25758995275967
Iterations: 6
Function evaluations: 16
Gradient evaluations: 2
Constant Mean - GARCH Model Results
==============================================================================
Dep. Variable: Daily Return R-squared: -0.000
Mean Model: Constant Mean Adj. R-squared: -0.000
Vol Model: GARCH Log-Likelihood: 81.2576
Distribution: Normal AIC: -154.515
Method: Maximum Likelihood BIC: -150.151
No. Observations: 22
Date: Sat, Feb 27 2021 Df Residuals: 18
Time: 09:36:23 Df Model: 4
Mean Model
=============================================================================
coef std err t P>|t| 95.0% Conf. Int.
-----------------------------------------------------------------------------
mu 8.7955e-04 1.323e-03 0.665 0.506 [-1.714e-03,3.473e-03]
Volatility Model
=============================================================================
coef std err t P>|t| 95.0% Conf. Int.
-----------------------------------------------------------------------------
omega 1.0914e-05 5.820e-10 1.875e+04 0.000 [1.091e-05,1.092e-05]
alpha[1] 1.0000e-02 1.941e-02 0.515 0.606 [-2.805e-02,4.805e-02]
beta[1] 0.6900 7.299e-02 9.454 3.273e-21 [ 0.547, 0.833]
=============================================================================
Covariance estimator: robust
------------------------------------------------------------------------------
------------------------------------------------------------------------------
------------------------------------------------------------------------------
GARCH model for gold ETF in November
Iteration: 1, Func. Count: 6, Neg. LLF: 12684817.542295108
Iteration: 2, Func. Count: 16, Neg. LLF: -73.97790009442136
Optimization terminated successfully (Exit mode 0)
Current function value: -73.9779001217992
Iterations: 6
Function evaluations: 16
Gradient evaluations: 2
Constant Mean - GARCH Model Results
==============================================================================
Dep. Variable: Daily Return R-squared: -0.000
Mean Model: Constant Mean Adj. R-squared: -0.000
Vol Model: GARCH Log-Likelihood: 73.9779
Distribution: Normal AIC: -139.956
Method: Maximum Likelihood BIC: -135.973
No. Observations: 20
Date: Sat, Feb 27 2021 Df Residuals: 16
Time: 09:36:23 Df Model: 4
Mean Model
===============================================================================
coef std err t P>|t| 95.0% Conf. Int.
-------------------------------------------------------------------------------
mu -1.4838e-03 7.497e-07 -1979.293 0.000 [-1.485e-03,-1.482e-03]
Volatility Model
============================================================================
coef std err t P>|t| 95.0% Conf. Int.
----------------------------------------------------------------------------
omega 1.0913e-05 4.658e-11 2.343e+05 0.000 [1.091e-05,1.091e-05]
alpha[1] 0.0500 0.274 0.183 0.855 [ -0.487, 0.587]
beta[1] 0.6500 0.237 2.741 6.122e-03 [ 0.185, 1.115]
============================================================================
Covariance estimator: robust
------------------------------------------------------------------------------
------------------------------------------------------------------------------
------------------------------------------------------------------------------
GARCH model for equity ETF in October
Iteration: 1, Func. Count: 5, Neg. LLF: -72.84794178149747
Optimization terminated successfully (Exit mode 0)
Current function value: -72.84794218002007
Iterations: 5
Function evaluations: 5
Gradient evaluations: 1
Constant Mean - GARCH Model Results
==============================================================================
Dep. Variable: Daily Return R-squared: -0.000
Mean Model: Constant Mean Adj. R-squared: -0.000
Vol Model: GARCH Log-Likelihood: 72.8479
Distribution: Normal AIC: -137.696
Method: Maximum Likelihood BIC: -133.332
No. Observations: 22
Date: Sat, Feb 27 2021 Df Residuals: 18
Time: 09:36:23 Df Model: 4
Mean Model
============================================================================
coef std err t P>|t| 95.0% Conf. Int.
----------------------------------------------------------------------------
mu 1.2314e-03 2.601e-06 473.508 0.000 [1.226e-03,1.237e-03]
Volatility Model
============================================================================
coef std err t P>|t| 95.0% Conf. Int.
----------------------------------------------------------------------------
omega 1.9376e-06 9.202e-10 2105.643 0.000 [1.936e-06,1.939e-06]
alpha[1] 0.2000 8.072e-02 2.478 1.322e-02 [4.180e-02, 0.358]
beta[1] 0.7800 5.716e-02 13.647 2.108e-42 [ 0.668, 0.892]
============================================================================
Covariance estimator: robust
------------------------------------------------------------------------------
------------------------------------------------------------------------------
------------------------------------------------------------------------------
GARCH model for equity ETF in November
Iteration: 1, Func. Count: 6, Neg. LLF: 197742414.82374087
Iteration: 2, Func. Count: 17, Neg. LLF: -33.194659986845565
Iteration: 3, Func. Count: 25, Neg. LLF: -10.286129970616884
Iteration: 4, Func. Count: 33, Neg. LLF: -82.56739316456598
Iteration: 5, Func. Count: 38, Neg. LLF: 258146278.98570937
Iteration: 6, Func. Count: 49, Neg. LLF: 556727004.1629564
Iteration: 7, Func. Count: 60, Neg. LLF: 405442990.2330078
Iteration: 8, Func. Count: 71, Neg. LLF: 176430175.20465565
Iteration: 9, Func. Count: 82, Neg. LLF: 519.795341797703
Iteration: 10, Func. Count: 88, Neg. LLF: 403929955.2030698
Iteration: 11, Func. Count: 95, Neg. LLF: -82.61557826768482
Optimization terminated successfully (Exit mode 0)
Current function value: -82.61557835134657
Iterations: 15
Function evaluations: 95
Gradient evaluations: 11
Constant Mean - GARCH Model Results
==============================================================================
Dep. Variable: Daily Return R-squared: -0.021
Mean Model: Constant Mean Adj. R-squared: -0.021
Vol Model: GARCH Log-Likelihood: 82.6156
Distribution: Normal AIC: -157.231
Method: Maximum Likelihood BIC: -153.248
No. Observations: 20
Date: Sat, Feb 27 2021 Df Residuals: 16
Time: 09:36:23 Df Model: 4
Mean Model
============================================================================
coef std err t P>|t| 95.0% Conf. Int.
----------------------------------------------------------------------------
mu 2.0804e-03 3.332e-08 6.243e+04 0.000 [2.080e-03,2.080e-03]
Volatility Model
============================================================================
coef std err t P>|t| 95.0% Conf. Int.
----------------------------------------------------------------------------
omega 6.9482e-06 4.271e-11 1.627e+05 0.000 [6.948e-06,6.948e-06]
alpha[1] 0.7206 0.346 2.083 3.724e-02 [4.259e-02, 1.399]
beta[1] 0.0120 9.873e-02 0.122 0.903 [ -0.181, 0.206]
============================================================================
Covariance estimator: robust
###Markdown
8. Correlation
###Code
corr_oct = stats.pearsonr(gold_df[("2019-10-01" < gold_df.index) & (gold_df.index < "2019-11-01")]["Daily Return"], equity_df[("2019-10-01" < equity_df.index) & (equity_df.index < "2019-11-01")]["Daily Return"])[0]
print(f"The correlation of gold and equity ETFs in October is {corr_oct}")
corr_nov = stats.pearsonr(gold_df[gold_df.index >= "2019-11-01"]["Daily Return"], equity_df[equity_df.index >= "2019-11-01"]["Daily Return"])[0]
print(f"The correlation of gold and equity ETFs in November is {corr_nov}")
###Output
The correlation of gold and equity ETFs in November is -0.4119305823448921
|
Save and load of AI models.ipynb | ###Markdown
Save and load of AI models

Model progress can be saved during and after training. When publishing research models and techniques, most machine learning practitioners share:
* code to create the model, and
* the trained weights, or parameters, for the model

Sharing this data helps others understand how the model works and try it themselves with new data.

Setup

Installs and imports
###Code
!pip install pyyaml h5py
# Required to save models in HDF5 format
import os
import tensorflow as tf
from tensorflow import keras
print(tf.version.VERSION)
###Output
2.5.0
###Markdown
Get an example dataset

The MNIST dataset is used for demonstration. To speed up these runs, use the first 1000 examples:
###Code
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
###Output
_____no_output_____
###Markdown
Define a model

Start by building a simple sequential model:
###Code
# Define a simple sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation='relu', input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
return model
# Create a basic model instance
model = create_model()
# Display the model's architecture
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 512) 401920
_________________________________________________________________
dropout (Dropout) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 10) 5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
###Markdown
Save checkpoints during training

You can use a trained model without having to retrain it, or pick up training where it left off in case the training process was interrupted. The `tf.keras.callbacks.ModelCheckpoint` callback allows you to continually save the model both *during* and at *the end* of training.

Checkpoint callback usage

Create a `tf.keras.callbacks.ModelCheckpoint` callback that saves weights only during training:
###Code
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
save_weights_only=True,
verbose=1)
# Train the model with the new callback
model.fit(train_images,
train_labels,
epochs=10,
validation_data=(test_images, test_labels),
callbacks=[cp_callback]) # Pass callback to training
# This may generate warnings related to saving the state of the optimizer.
# These warnings (and similar warnings throughout this notebook)
# are in place to discourage outdated usage, and can be ignored.
###Output
Epoch 1/10
32/32 [==============================] - 5s 7ms/step - loss: 1.1137 - sparse_categorical_accuracy: 0.6900 - val_loss: 0.7065 - val_sparse_categorical_accuracy: 0.7870
Epoch 00001: saving model to training_1\cp.ckpt
Epoch 2/10
32/32 [==============================] - 0s 3ms/step - loss: 0.4214 - sparse_categorical_accuracy: 0.8690 - val_loss: 0.5313 - val_sparse_categorical_accuracy: 0.8430
Epoch 00002: saving model to training_1\cp.ckpt
Epoch 3/10
32/32 [==============================] - 0s 3ms/step - loss: 0.2848 - sparse_categorical_accuracy: 0.9290 - val_loss: 0.4698 - val_sparse_categorical_accuracy: 0.8500
Epoch 00003: saving model to training_1\cp.ckpt
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 0.1879 - sparse_categorical_accuracy: 0.9520 - val_loss: 0.4548 - val_sparse_categorical_accuracy: 0.8500
Epoch 00004: saving model to training_1\cp.ckpt
Epoch 5/10
32/32 [==============================] - 1s 47ms/step - loss: 0.1527 - sparse_categorical_accuracy: 0.9680 - val_loss: 0.4515 - val_sparse_categorical_accuracy: 0.8490
Epoch 00005: saving model to training_1\cp.ckpt
Epoch 6/10
32/32 [==============================] - 0s 3ms/step - loss: 0.1231 - sparse_categorical_accuracy: 0.9690 - val_loss: 0.4346 - val_sparse_categorical_accuracy: 0.8620
Epoch 00006: saving model to training_1\cp.ckpt
Epoch 7/10
32/32 [==============================] - 0s 3ms/step - loss: 0.0902 - sparse_categorical_accuracy: 0.9820 - val_loss: 0.4297 - val_sparse_categorical_accuracy: 0.8580
Epoch 00007: saving model to training_1\cp.ckpt
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 0.0662 - sparse_categorical_accuracy: 0.9920 - val_loss: 0.4232 - val_sparse_categorical_accuracy: 0.8630
Epoch 00008: saving model to training_1\cp.ckpt
Epoch 9/10
32/32 [==============================] - 0s 3ms/step - loss: 0.0471 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4034 - val_sparse_categorical_accuracy: 0.8730
Epoch 00009: saving model to training_1\cp.ckpt
Epoch 10/10
32/32 [==============================] - 0s 3ms/step - loss: 0.0360 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4148 - val_sparse_categorical_accuracy: 0.8710
Epoch 00010: saving model to training_1\cp.ckpt
###Markdown
This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
###Code
os.listdir(checkpoint_dir)
###Output
_____no_output_____
###Markdown
Two models with the same architecture can share weights. So, when restoring a model from weights only, create a model with the same architecture as the original model and then set its weights. Now rebuild a fresh, untrained model and evaluate it on the test set. An untrained model will perform at chance level (~10% accuracy):
###Code
# Create a basic model instance
model = create_model()
# Evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Untrained model, accuracy: {:5.2f}%".format(100 * acc))
###Output
32/32 - 0s - loss: 2.3180 - sparse_categorical_accuracy: 0.1040
Untrained model, accuracy: 10.40%
###Markdown
Then load the weights from the checkpoint and re-evaluate:
###Code
# Loads the weights
model.load_weights(checkpoint_path)
# Re-evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100 * acc))
###Output
32/32 - 0s - loss: 0.4148 - sparse_categorical_accuracy: 0.8710
Restored model, accuracy: 87.10%
###Markdown
Checkpoint callback options

The callback provides several options to provide unique names for checkpoints and adjust the checkpointing frequency. Train a new model, and save uniquely named checkpoints once every five epochs:
###Code
# Include the epoch in the file name (uses `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
batch_size = 32
# Create a callback that saves the model's weights every 5 epochs
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True,
save_freq=5*batch_size)
# Create a new model instance
model = create_model()
# Save the weights using the `checkpoint_path` format
model.save_weights(checkpoint_path.format(epoch=0))
# Train the model with the new callback
model.fit(train_images,
train_labels,
epochs=50,
batch_size=batch_size,
callbacks=[cp_callback],
validation_data=(test_images, test_labels),
verbose=0)
###Output
Epoch 00005: saving model to training_2\cp-0005.ckpt
Epoch 00010: saving model to training_2\cp-0010.ckpt
Epoch 00015: saving model to training_2\cp-0015.ckpt
Epoch 00020: saving model to training_2\cp-0020.ckpt
Epoch 00025: saving model to training_2\cp-0025.ckpt
Epoch 00030: saving model to training_2\cp-0030.ckpt
Epoch 00035: saving model to training_2\cp-0035.ckpt
Epoch 00040: saving model to training_2\cp-0040.ckpt
Epoch 00045: saving model to training_2\cp-0045.ckpt
Epoch 00050: saving model to training_2\cp-0050.ckpt
###Markdown
Check the resulting checkpoints and choose the latest one:
###Code
os.listdir(checkpoint_dir)
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
###Output
_____no_output_____
###Markdown
Note: the default TensorFlow format only saves the 5 most recent checkpoints.

To test, reset the model and load the latest checkpoint:
###Code
# Create a new model instance
model = create_model()
# Load the previously saved weights
model.load_weights(latest)
# Re-evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100 * acc))
###Output
32/32 - 0s - loss: 0.4814 - sparse_categorical_accuracy: 0.8770
Restored model, accuracy: 87.70%
###Markdown
Files for saving weights

The above code stores the weights to a collection of checkpoint-formatted files that contain only the trained weights in a binary format. Checkpoints contain:
* One or more shards that contain your model's weights.
* An index file that indicates which weights are stored in which shard.

Manually save weights

Save weights manually with the `Model.save_weights` method. By default, `tf.keras` (and `save_weights` in particular) uses the TensorFlow [checkpoint](../../guide/checkpoint.ipynb) format with a `.ckpt` extension (saving in [HDF5](https://js.tensorflow.org/tutorials/import-keras.html) with a `.h5` extension is covered in the [Save and serialize models](../../guide/keras/save_and_serializeweights-only_saving_in_savedmodel_format) guide):
###Code
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Create a new model instance
model = create_model()
# Restore the weights
model.load_weights('./checkpoints/my_checkpoint')
# Evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100 * acc))
###Output
32/32 - 0s - loss: 0.4814 - sparse_categorical_accuracy: 0.8770
Restored model, accuracy: 87.70%
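###Markdown
As a side note (a minimal sketch, not part of the original notebook): `save_weights` can also write the HDF5 format simply by giving the file a `.h5` extension. The path below is hypothetical.
###Code
# Sketch: weights-only save/load in HDF5 format (hypothetical path).
# The '.h5' suffix makes Keras use the HDF5 format instead of the
# TensorFlow checkpoint format.
model.save_weights('./checkpoints/my_checkpoint.h5')
model.load_weights('./checkpoints/my_checkpoint.h5')
###Output
_____no_output_____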
###Markdown
Save the entire model

Call [`model.save`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#save) to save a model's architecture, weights, and training configuration in a single file/folder. Exporting a model this way allows it to be used without access to the original Python code. An entire model can be saved in two different file formats (`SavedModel` and `HDF5`). The TensorFlow `SavedModel` format is the default file format in TF2.x, but models can also be saved in the `HDF5` format.

SavedModel format

The SavedModel format is another way to serialize models. Models saved in this format can be restored using `tf.keras.models.load_model` and are compatible with TensorFlow Serving. The [SavedModel guide](https://www.tensorflow.org/guide/saved_model) goes into detail about how to serve/inspect the SavedModel.
###Code
# Create and train a new model instance.
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save the entire model as a SavedModel.
!mkdir -p saved_model
model.save('saved_model/my_model')
###Output
Epoch 1/5
32/32 [==============================] - 0s 2ms/step - loss: 1.1449 - sparse_categorical_accuracy: 0.6880
Epoch 2/5
32/32 [==============================] - 0s 2ms/step - loss: 0.4428 - sparse_categorical_accuracy: 0.8630
Epoch 3/5
32/32 [==============================] - 0s 2ms/step - loss: 0.2971 - sparse_categorical_accuracy: 0.9180
Epoch 4/5
32/32 [==============================] - 0s 2ms/step - loss: 0.2240 - sparse_categorical_accuracy: 0.9490
Epoch 5/5
32/32 [==============================] - 0s 2ms/step - loss: 0.1576 - sparse_categorical_accuracy: 0.9610
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
INFO:tensorflow:Assets written to: saved_model/my_model\assets
###Markdown
The SavedModel format is a directory containing a protobuf binary and a TensorFlow checkpoint. Inspect the saved model directory:
###Code
# my_model directory
!ls saved_model
# Contains an assets folder, saved_model.pb, and variables folder.
!ls saved_model/my_model
###Output
'ls' is not recognized as an internal or external command,
operable program or batch file.
'ls' is not recognized as an internal or external command,
operable program or batch file.
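###Markdown
The `ls` command is not available in this Windows environment (see the output above). A portable alternative, sketched here with the standard library rather than taken from the original notebook, is to list the directory from Python:
###Code
import os

# List the SavedModel directory: expect an assets folder, a variables
# folder, and saved_model.pb
for name in sorted(os.listdir('saved_model/my_model')):
    print(name)
###Output
_____no_output_____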
###Markdown
Reload a fresh Keras model from the saved model:
###Code
new_model = tf.keras.models.load_model('saved_model/my_model')
# Check its architecture
new_model.summary()
###Output
Model: "sequential_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_10 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_5 (Dropout) (None, 512) 0
_________________________________________________________________
dense_11 (Dense) (None, 10) 5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
###Markdown
The restored model is compiled with the same arguments as the original model. Try running evaluate and predict with the loaded model:
###Code
# Evaluate the restored model
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100 * acc))
print(new_model.predict(test_images).shape)
###Output
32/32 - 0s - loss: 0.4025 - sparse_categorical_accuracy: 0.8710
Restored model, accuracy: 87.10%
(1000, 10)
###Markdown
HDF5 format

Keras provides a basic save format using the [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) standard.
###Code
# Create and train a new model instance.
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save the entire model to a HDF5 file.
# The '.h5' extension indicates that the model should be saved to HDF5.
model.save('my_model.h5')
###Output
Epoch 1/5
32/32 [==============================] - 0s 2ms/step - loss: 1.1562 - sparse_categorical_accuracy: 0.6730
Epoch 2/5
32/32 [==============================] - 0s 2ms/step - loss: 0.4070 - sparse_categorical_accuracy: 0.8900
Epoch 3/5
32/32 [==============================] - 0s 1ms/step - loss: 0.2749 - sparse_categorical_accuracy: 0.9350
Epoch 4/5
32/32 [==============================] - 0s 2ms/step - loss: 0.2003 - sparse_categorical_accuracy: 0.9490
Epoch 5/5
32/32 [==============================] - 0s 2ms/step - loss: 0.1590 - sparse_categorical_accuracy: 0.9610
###Markdown
Now, recreate the model from that file:
###Code
# Recreate the exact same model, including its weights and the optimizer
new_model = tf.keras.models.load_model('my_model.h5')
# Show the model architecture
new_model.summary()
###Output
Model: "sequential_6"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_12 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_6 (Dropout) (None, 512) 0
_________________________________________________________________
dense_13 (Dense) (None, 10) 5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
###Markdown
Check its accuracy:
###Code
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100 * acc))
###Output
32/32 - 0s - loss: 0.4396 - sparse_categorical_accuracy: 0.8620
Restored model, accuracy: 86.20%
|
2_TCN/BagOfWords/CNN_TREC.ipynb | ###Markdown
Text Classification with the TREC Dataset

We will build a text classification model on the TREC question classification dataset, which comes with a standard train/test split (used below).

Load the library
###Code
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import re
import nltk
import random
from nltk.corpus import stopwords, twitter_samples
# from nltk.tokenize import TweetTokenizer
from sklearn.model_selection import KFold
from nltk.stem import PorterStemmer
from string import punctuation
from sklearn.preprocessing import OneHotEncoder
from tensorflow.keras.preprocessing.text import Tokenizer
import time
%config IPCompleter.greedy=True
%config IPCompleter.use_jedi=False
# nltk.download('twitter_samples')
###Output
_____no_output_____
###Markdown
Load the Dataset
###Code
corpus = pd.read_pickle('../../0_data/TREC/TREC.pkl')
corpus.label = corpus.label.astype(int)
print(corpus.shape)
corpus
corpus.info()
corpus.groupby( by=['split','label']).count()
corpus.groupby(by='split').count()
# Separate the sentences and the labels
# Separate the sentences and the labels for training and testing
train_x = list(corpus[corpus.split=='train'].sentence)
train_y = np.array(corpus[corpus.split=='train'].label)
print(len(train_x))
print(len(train_y))
test_x = list(corpus[corpus.split=='test'].sentence)
test_y = np.array(corpus[corpus.split=='test'].label)
print(len(test_x))
print(len(test_y))
###Output
5452
5452
500
500
###Markdown
Raw Vocabulary Size
###Code
# Build the raw vocabulary for a first inspection
tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus.sentence)
vocab_raw = tokenizer.word_index
print('\nThe vocabulary size: {}\n'.format(len(vocab_raw)))
print(vocab_raw)
###Output
The vocabulary size: 8759
{'the': 1, 'what': 2, 'is': 3, 'of': 4, 'in': 5, 'a': 6, 'how': 7, "'s": 8, 'was': 9, 'who': 10, 'to': 11, 'are': 12, 'for': 13, 'and': 14, 'did': 15, 'does': 16, "''": 17, 'do': 18, 'name': 19, 'on': 20, 'many': 21, 'where': 22, 'first': 23, 'when': 24, ...}
4453, 'archy': 4454, 'mehitabel': 4455, 'glowsticks': 4456, 'barroom': 4457, 'judge': 4458, 'pecos': 4459, 'excuse': 4460, 'nato': 4461, 'meyer': 4462, 'wolfsheim': 4463, 'fixed': 4464, 'defensive': 4465, 'diplomacy': 4466, 'khyber': 4467, 'braun': 4468, 'variations': 4469, 'canfield': 4470, 'klondike': 4471, 'chivington': 4472, 'pirate': 4473, 'dialogue': 4474, 'gm': 4475, 'pageant': 4476, 'winslow': 4477, '1872': 4478, 'aim': 4479, '54c': 4480, 'shorn': 4481, 'provide': 4482, 'intake': 4483, '1990s': 4484, 'extant': 4485, 'neoclassical': 4486, 'romanticism': 4487, 'inducted': 4488, 'fats': 4489, 'blobbo': 4490, 'leno': 4491, 'rosemary': 4492, 'labianca': 4493, 'rita': 4494, 'hayworth': 4495, 'prefix': 4496, 'surnames': 4497, 'oxidation': 4498, 'boob': 4499, 'prehistoric': 4500, 'etta': 4501, 'butch': 4502, 'cassidey': 4503, 'sundance': 4504, 'laos': 4505, 'wilbur': 4506, 'reed': 4507, 'breeding': 4508, 'organized': 4509, 'pulaski': 4510, '1866': 4511, 'marion': 4512, 'actual': 4513, 'fourteenth': 4514, 'perfume': 4515, 'rosanne': 4516, 'rosanna': 4517, 'despondent': 4518, 'freddie': 4519, 'prinze': 4520, 'iraqi': 4521, 'referring': 4522, 'bandit': 4523, 'backup': 4524, '1915': 4525, '86': 4526, 'ticklish': 4527, 'canadians': 4528, 'emmigrate': 4529, 'menus': 4530, 'washed': 4531, 'vodka': 4532, 'mpilo': 4533, 'drinker': 4534, 'respirator': 4535, 'batteries': 4536, 'recommend': 4537, 'boil': 4538, 'snowboard': 4539, '200': 4540, 'suzette': 4541, 'assume': 4542, 'sutcliffe': 4543, 'emma': 4544, 'peel': 4545, 'journalism': 4546, 'befell': 4547, 'immaculate': 4548, 'conception': 4549, 'aircraft': 4550, 'carriers': 4551, 'candlemas': 4552, 'ouarterly': 4553, 'doublespeak': 4554, 'inoperative': 4555, 'tufts': 4556, 'pyrotechnic': 4557, 'jobs': 4558, 'pianos': 4559, 'motorcycles': 4560, 'argon': 4561, 'boeing': 4562, '737': 4563, '1930s': 4564, 'uber': 4565, 'cornell': 4566, '175': 4567, 'tons': 4568, 'edmonton': 4569, 'sonny': 4570, 'liston': 4571, 'succeed': 4572, 'belonged': 4573, 'zatanna': 4574, 'dispatched': 4575, 'cruiser': 4576, 'novels': 4577, 'hooters': 4578, 'swim': 4579, 'copyrighted': 4580, 'refuge': 4581, 'preserve': 4582, 'wildlife': 4583, 'wilderness': 4584, 'plc': 4585, 'acres': 4586, 'carrying': 4587, 'barkis': 4588, 'willin': 4589, 'peggy': 4590, 'hyperlink': 4591, 'artemis': 4592, 'watching': 4593, 'grammys': 4594, 'elysium': 4595, 'shalom': 4596, 'pump': 4597, 'organic': 4598, 'dwellers': 4599, 'slane': 4600, 'manhatten': 4601, 'prizes': 4602, 'rid': 4603, 'woodpeckers': 4604, 'syllables': 4605, 'hendecasyllabic': 4606, 'waco': 4607, '1885': 4608, 'mainland': 4609, 'victims': 4610, 'generation': 4611, 'dear': 4612, 'abby': 4613, 'sussex': 4614, 'ottawa': 4615, 'jell': 4616, 'incorporate': 4617, 'panoramic': 4618, 'ones': 4619, 'scene': 4620, 'dish': 4621, 'intestines': 4622, 'guam': 4623, 'executor': 4624, 'inescapable': 4625, 'purveyor': 4626, 'commission': 4627, 'microprocessors': 4628, 'microcontrollers': 4629, 'sought': 4630, 'necklaces': 4631, 'proof': 4632, 'houseplants': 4633, 'metabolize': 4634, 'carcinogens': 4635, 'enola': 4636, 'hen': 4637, 'protestant': 4638, 'supremacy': 4639, 'arab': 4640, 'strap': 4641, 'whatever': 4642, 'catalogues': 4643, 'coastlines': 4644, 'biscay': 4645, 'egyptians': 4646, 'shave': 4647, 'eyebrows': 4648, 'waverly': 4649, 'assign': 4650, 'agents': 4651, 'eduard': 4652, '000th': 4653, 'michagin': 4654, 'eh': 4655, 'approval': 4656, 'funk': 4657, 'lata': 4658, 'pc': 4659, 'hinckley': 4660, 'jodie': 4661, 'foster': 4662, 'liners': 4663, 
'trafalgar': 4664, 'carmania': 4665, 'slime': 4666, 'feathered': 4667, 'yugoslavians': 4668, 'vlaja': 4669, 'gaja': 4670, 'raja': 4671, 'stolen': 4672, 'unusual': 4673, 'mind': 4674, 'anybody': 4675, 'dying': 4676, 'wiener': 4677, 'schnitzel': 4678, 'urals': 4679, 'itch': 4680, 'dunes': 4681, 'older': 4682, 'lacan': 4683, 'riding': 4684, 'budweiser': 4685, 'fischer': 4686, 'exterminate': 4687, 'gaelic': 4688, 'transcript': 4689, 'quart': 4690, 'filling': 4691, 'fascinated': 4692, 'experimenting': 4693, 'neurasthenia': 4694, 'terrible': 4695, 'typhoid': 4696, 'jay': 4697, 'kay': 4698, '9th': 4699, 'symphony': 4700, 'explosive': 4701, 'charcoal': 4702, 'sulfur': 4703, 'mountainous': 4704, 'lhasa': 4705, 'apso': 4706, 'karenna': 4707, 'teenage': 4708, 'olives': 4709, 'rossetti': 4710, 'beata': 4711, 'beatrix': 4712, 'wiz': 4713, 'touring': 4714, 'modestly': 4715, 'stove': 4716, 'reputation': 4717, 'stealing': 4718, 'jokes': 4719, 'cyclone': 4720, 'amendements': 4721, 'passed': 4722, 'painful': 4723, 'sleet': 4724, 'freezing': 4725, 'econoline': 4726, 'f25': 4727, 'v1': 4728, 'ian': 4729, 'fleming': 4730, 'm3': 4731, 'routinely': 4732, 'dem': 4733, 'bums': 4734, 'frustrated': 4735, 'depended': 4736, 'turning': 4737, 'gaza': 4738, 'jericho': 4739, 'methodist': 4740, 'retrograde': 4741, 'breweries': 4742, 'meta': 4743, 'appropriately': 4744, 'masterson': 4745, 'cwt': 4746, 'tenants': 4747, 'adjoining': 4748, 'cabinets': 4749, 'raging': 4750, 'advised': 4751, 'listeners': 4752, 'chevrolet': 4753, 'ctbt': 4754, 'waste': 4755, 'dairy': 4756, 'laureate': 4757, 'expelled': 4758, 'timor': 4759, 'carried': 4760, 'multiple': 4761, 'births': 4762, 'ge': 4763, 'ejaculate': 4764, 'pandoro': 4765, 'throne': 4766, 'abdication': 4767, 'anka': 4768, 'install': 4769, 'tile': 4770, 'quickest': 4771, 'nail': 4772, 'zipper': 4773, 'industrialized': 4774, 'anne': 4775, 'boleyn': 4776, 'stat': 4777, 'quickly': 4778, 'thurgood': 4779, 'useful': 4780, 'battleship': 4781, 'upstaged': 4782, 'amicable': 4783, 'publisher': 4784, 'molly': 4785, 'skim': 4786, 'decided': 4787, 'sprocket': 4788, 'sponsor': 4789, 'czech': 4790, 'algiers': 4791, 'seawater': 4792, 'finnish': 4793, 'caucasian': 4794, 'stratocaster': 4795, 'sculptress': 4796, 'lightest': 4797, 'twirl': 4798, 'giant': 4799, 'masquerade': 4800, 'bikini': 4801, 'bathing': 4802, 'rubens': 4803, 'dyck': 4804, 'bruegel': 4805, 'citizens': 4806, 'examples': 4807, 'tex': 4808, 'arriving': 4809, 'mgm': 4810, 'shirtwaist': 4811, 'spiritual': 4812, 'ocho': 4813, 'rios': 4814, '32': 4815, 'mauritania': 4816, 'cookers': 4817, 'jury': 4818, 'computers': 4819, 'impact': 4820, 'salvador': 4821, 'dali': 4822, 'irate': 4823, 'oxide': 4824, 'gallery': 4825, 'viagra': 4826, 'monarchs': 4827, 'crowned': 4828, 'bellworts': 4829, 'darius': 4830, 'anna': 4831, 'anderson': 4832, 'czar': 4833, 'dita': 4834, 'beard': 4835, 'coot': 4836, 'turks': 4837, 'libya': 4838, 'fray': 4839, 'bentos': 4840, 'magoo': 4841, 'flog': 4842, 'inspiration': 4843, 'schoolteacher': 4844, 'poets': 4845, 'positions': 4846, 'succession': 4847, 'flights': 4848, '165': 4849, 'gunpowder': 4850, 'burkina': 4851, 'faso': 4852, 'stripped': 4853, 'barred': 4854, 'blondes': 4855, 'kelly': 4856, 'phalanx': 4857, 'mustachioed': 4858, 'frankie': 4859, 'utilities': 4860, 'airman': 4861, 'goering': 4862, 'storms': 4863, 'nicolo': 4864, 'paganini': 4865, 'sheila': 4866, 'burnford': 4867, 'hammer': 4868, 'believed': 4869, 'quotation': 4870, 'together': 4871, 'elevators': 4872, 'infant': 4873, 'seal': 4874, 'respones': 4875, 
'goodnight': 4876, 'mankiewicz': 4877, 'electronics': 4878, 'bulge': 4879, 'grandeur': 4880, 'destination': 4881, '1830': 4882, 'iraqis': 4883, 'adorns': 4884, 'rwanda': 4885, 'vb': 4886, 'pos': 4887, 'darwin': 4888, 'olympia': 4889, 'overlook': 4890, 'chronicled': 4891, 'katy': 4892, 'holstrum': 4893, 'glen': 4894, 'morley': 4895, 'farthings': 4896, 'abundant': 4897, 'furth': 4898, 'rednitz': 4899, 'pegnitz': 4900, 'converge': 4901, 'dixville': 4902, 'notch': 4903, 'cisalpine': 4904, 'shopping': 4905, 'protagonist': 4906, 'dostoevski': 4907, 'idiot': 4908, 'penalty': 4909, 'dismissed': 4910, 'burglary': 4911, 'meerkat': 4912, 'nicolet': 4913, 'authors': 4914, 'memory': 4915, 'midwest': 4916, 'slang': 4917, 'darn': 4918, 'tootin': 4919, 'cupboard': 4920, 'bare': 4921, 'cleaner': 4922, 'attends': 4923, 'pencey': 4924, 'prep': 4925, 'antonio': 4926, 'ducats': 4927, 'melts': 4928, 'prescription': 4929, 'voyager': 4930, 'jim': 4931, 'bohannon': 4932, 'commanders': 4933, 'alamein': 4934, 'replies': 4935, 'leia': 4936, 'confession': 4937, 'rockettes': 4938, 'elroy': 4939, 'hirsch': 4940, 'oh': 4941, 'ermal': 4942, 'seashell': 4943, 'haunted': 4944, 'bigger': 4945, 'thighs': 4946, 'cheetahs': 4947, '45mhz': 4948, 'puzzle': 4949, 'alexandre': 4950, 'dumas': 4951, 'mystical': 4952, 'ravens': 4953, 'odin': 4954, 'virus': 4955, 'hesse': 4956, 'fe': 4957, 'shock': 4958, 'dipsomaniac': 4959, 'crave': 4960, 'contributions': 4961, 'personal': 4962, 'braille': 4963, 'isle': 4964, 'pinatubo': 4965, 'officially': 4966, 'garfield': 4967, 'delegate': 4968, 'processing': 4969, 'lagos': 4970, 'greenland': 4971, '985': 4972, 'distinguishing': 4973, 'cystic': 4974, 'equivalence': 4975, 'philatelist': 4976, 'wheatfield': 4977, 'crows': 4978, 'uol': 4979, 'dimensions': 4980, 'goal': 4981, 'signs': 4982, 'recession': 4983, 'circumorbital': 4984, 'hematoma': 4985, 'governed': 4986, 'ouagadougou': 4987, 'sunflowers': 4988, 'frequent': 4989, 'enemies': 4990, 'translation': 4991, 'handicraft': 4992, 'requires': 4993, 'interlace': 4994, 'warp': 4995, 'weft': 4996, 'sunlight': 4997, 'milliseconds': 4998, 'cherokee': 4999, 'camcorders': 5000, 'conceiving': 5001, 'cody': 5002, 'biceps': 5003, 'tender': 5004, 'resignation': 5005, 'thucydides': 5006, 'boulevard': 5007, 'tonsils': 5008, 'fluorine': 5009, 'magnesium': 5010, 'mayans': 5011, 'balloon': 5012, 'posh': 5013, 'anyone': 5014, 'loomis': 5015, 'shillings': 5016, 'tarantula': 5017, 'secondary': 5018, 'sabrina': 5019, 'conductor': 5020, 'pops': 5021, 'fiedler': 5022, 'hebephrenia': 5023, 'terrorized': 5024, 'stalker': 5025, 'christina': 5026, 'peanuts': 5027, 'guitarist': 5028, 'camels': 5029, 'humps': 5030, 'hospital': 5031, 'orthopedics': 5032, 'sense': 5033, 'betrayed': 5034, 'doodyville': 5035, 'orphans': 5036, 'fund': 5037, 'bourdon': 5038, 'wisconsin': 5039, 'badgers': 5040, 'picts': 5041, 'caroll': 5042, 'baker': 5043, 'grimes': 5044, 'debbie': 5045, 'reynolds': 5046, 'noir': 5047, 'swampy': 5048, 'diplomatic': 5049, 'dressed': 5050, 'affair': 5051, 'winters': 5052, 'seeking': 5053, 'missile': 5054, 'sidewinder': 5055, 'sheboygan': 5056, 'firemen': 5057, 'nantucket': 5058, 'shipwreck': 5059, 'divers': 5060, 'exploring': 5061, '52': 5062, 'stories': 5063, 'contained': 5064, 'wharton': 5065, 'problem': 5066, '1997': 5067, 'constipation': 5068, 'symptom': 5069, 'viewing': 5070, 'ursula': 5071, 'andress': 5072, 'honeymooners': 5073, 'promising': 5074, 'submarines': 5075, 'mortgage': 5076, 'lifter': 5077, 'll': 5078, 'associaton': 5079, 'havlicek': 5080, '46': 5081, 
'227': 5082, 'happy': 5083, 'donor': 5084, 'thor': 5085, 'canon': 5086, 'eels': 5087, 'madding': 5088, 'crowd': 5089, 'graphic': 5090, 'recessed': 5091, 'filter': 5092, 'appendix': 5093, 'heir': 5094, 'raising': 5095, 'wreckage': 5096, 'andrea': 5097, 'doria': 5098, 'leprosy': 5099, 'chartered': 5100, 'vermont': 5101, 'shoreline': 5102, 'cartesian': 5103, 'diver': 5104, 'libraries': 5105, 'document': 5106, 'copy': 5107, 'burial': 5108, 'remembered': 5109, 'blaise': 5110, 'obtained': 5111, 'dams': 5112, 'metropolis': 5113, 'anorexia': 5114, 'tyler': 5115, 'tadeus': 5116, 'wladyslaw': 5117, 'konopka': 5118, 'cactus': 5119, 'sky': 5120, 'overalls': 5121, 'dungri': 5122, 'suburb': 5123, 'rainstorm': 5124, 'permanently': 5125, 'connection': 5126, 'krypton': 5127, 'daxam': 5128, 'blackjack': 5129, 'reaches': 5130, 'yemen': 5131, 'reunified': 5132, 'skrunch': 5133, 'oompas': 5134, 'organs': 5135, 'necrosis': 5136, 'magnetar': 5137, 'feud': 5138, '1891': 5139, 'stripe': 5140, 'coho': 5141, 'salmon': 5142, 'lucia': 5143, 'primitives': 5144, 'rural': 5145, 'serial': 5146, 'yell': 5147, 'hail': 5148, 'taxi': 5149, 'apartheid': 5150, 'yale': 5151, 'lock': 5152, 'loosely': 5153, 'aztec': 5154, 'populous': 5155, '576': 5156, 'knute': 5157, 'rockne': 5158, 'aldous': 5159, 'huxley': 5160, 'risk': 5161, 'venture': 5162, 'devo': 5163, 'hiccup': 5164, 'warfare': 5165, 'bombshell': 5166, 'descendents': 5167, 'tutankhamun': 5168, 'exhibit': 5169, 'transported': 5170, 'involves': 5171, 'martian': 5172, 'attire': 5173, 'pothooks': 5174, 'seriously': 5175, 'toulmin': 5176, 'logic': 5177, 'corgi': 5178, 'dangling': 5179, 'participle': 5180, 'stores': 5181, 'nightclubs': 5182, 'conceived': 5183, 'flush': 5184, 'nathan': 5185, 'hamill': 5186, 'prequel': 5187, 'disks': 5188, 'departments': 5189, 'woo': 5190, 'wu': 5191, 'dialect': 5192, 'neither': 5193, 'borrower': 5194, 'nor': 5195, 'lender': 5196, 'moxie': 5197, 'spade': 5198, 'patti': 5199, 'gestapo': 5200, 'exclusive': 5201, 'copacabana': 5202, 'ipanema': 5203, 'bavaria': 5204, 'guernsey': 5205, 'sark': 5206, 'herm': 5207, 'hence': 5208, 'overcome': 5209, 'plea': 5210, 'destructor': 5211, 'destroying': 5212, 'creativity': 5213, 'midsummer': 5214, 'ed': 5215, 'allegedly': 5216, 'obscene': 5217, 'gesture': 5218, 'allan': 5219, 'minded': 5220, 'casper': 5221, 'girlfriend': 5222, 'devoured': 5223, 'mob': 5224, 'starving': 5225, 'potsdam': 5226, 'contagious': 5227, 'recruits': 5228, 'nitrates': 5229, 'environment': 5230, 'nitrox': 5231, 'diving': 5232, 'unsuccessful': 5233, 'overthrow': 5234, 'bavarian': 5235, 'earthworms': 5236, 'pasture': 5237, 'stonehenge': 5238, 'hourly': 5239, 'workers': 5240, 'snatches': 5241, 'jerks': 5242, 'aeul': 5243, 'laid': 5244, 'relax': 5245, 'culture': 5246, 'jimi': 5247, 'hendrix': 5248, 'castor': 5249, 'pollux': 5250, 'reflections': 5251, 'milligrams': 5252, 'gram': 5253, 'stringed': 5254, 'fires': 5255, 'bolt': 5256, 'hepcats': 5257, 'angelica': 5258, 'wick': 5259, 'musician': 5260, 'prophecies': 5261, 'witches': 5262, 'macbeth': 5263, 'dominos': 5264, 'contraceptives': 5265, 'jeremy': 5266, 'piven': 5267, 'yo': 5268, 'yos': 5269, 'fossilizes': 5270, 'coprolite': 5271, 'mayo': 5272, 'clinic': 5273, 'amsterdam': 5274, 'knows': 5275, 'sewer': 5276, 'commissioner': 5277, 'provo': 5278, 'pasta': 5279, 'coulee': 5280, 'psorisis': 5281, 'disappear': 5282, '4th': 5283, 'bermuda': 5284, 'lust': 5285, 'constitutes': 5286, 'reliable': 5287, 'download': 5288, 'heretic': 5289, 'required': 5290, '1879': 5291, '1880': 5292, '1881': 5293, 
'amaretto': 5294, 'biscuits': 5295, 'ailment': 5296, 'vernal': 5297, 'equinox': 5298, 'protect': 5299, 'innocent': 5300, '1873': 5301, 'massage': 5302, 'cracker': 5303, 'manifest': 5304, 'latent': 5305, 'theories': 5306, 'styron': 5307, 'nitrogen': 5308, 'fade': 5309, 'employed': 5310, '72': 5311, 'baseemen': 5312, 'snowballs': 5313, 'rodder': 5314, 'tyrannosaurus': 5315, 'ozymandias': 5316, 'kenyan': 5317, 'safari': 5318, 'le': 5319, 'carre': 5320, 'echidna': 5321, 'adjournment': 5322, '25th': 5323, 'session': 5324, 'assembly': 5325, 'rifle': 5326, 'folklore': 5327, 'laundry': 5328, 'detergent': 5329, 'manche': 5330, 'easy': 5331, 'onassis': 5332, 'yacht': 5333, 'dirty': 5334, 'tumbled': 5335, 'marble': 5336, 'user': 5337, 'satisfaction': 5338, 'athens': 5339, 'bocci': 5340, 'void': 5341, 'pulse': 5342, 'easily': 5343, 'shirts': 5344, 'mla': 5345, 'bibliographies': 5346, 'imam': 5347, 'hussain': 5348, 'shia': 5349, 'barr': 5350, 'arabic': 5351, 'policeman': 5352, 'digitalis': 5353, 'flintknapping': 5354, 'canker': 5355, 'sores': 5356, 'castellated': 5357, 'kremlin': 5358, 'bureaucracy': 5359, 'gambler': 5360, 'consider': 5361, 'blunder': 5362, 'trinitrotoluene': 5363, 'absolute': 5364, 'mammals': 5365, 'alpha': 5366, 'theta': 5367, 'freidreich': 5368, 'wilhelm': 5369, 'ludwig': 5370, 'leichhardt': 5371, 'prussian': 5372, 'unfamiliar': 5373, 'spokespeople': 5374, 'katharine': 5375, 'fcc': 5376, 'newton': 5377, 'minow': 5378, 'miniature': 5379, 'landlocked': 5380, 'tiffany': 5381, 'magnets': 5382, 'attract': 5383, 'diana': 5384, 'determines': 5385, 'sunk': 5386, 'havana': 5387, 'bunch': 5388, 'emblazoned': 5389, 'jolly': 5390, 'alveoli': 5391, 'konigsberg': 5392, 'leos': 5393, 'herbert': 5394, 'neal': 5395, 'martini': 5396, 'bernard': 5397, 'crack': 5398, '1835': 5399, 'tolling': 5400, 'fiddlers': 5401, 'cole': 5402, 'promises': 5403, 'ethylene': 5404, 'irwin': 5405, 'widmark': 5406, 'butt': 5407, 'kicked': 5408, 'mess': 5409, 'vw': 5410, 'changes': 5411, 'pallbearer': 5412, 'pub': 5413, 'bathroom': 5414, '192': 5415, 'baja': 5416, 'mar': 5417, 'shallow': 5418, 'deadrise': 5419, 'omni': 5420, 'ultimate': 5421, 'unanswerable': 5422, 'cookbook': 5423, '198': 5424, 'liz': 5425, 'chandler': 5426, 'guiteau': 5427, 'snakes': 5428, 'superbowls': 5429, 'ers': 5430, 'gandy': 5431, 'dancer': 5432, 'biz': 5433, 'belmont': 5434, 'stakes': 5435, 'iowa': 5436, 'loco': 5437, 'rathaus': 5438, 'canning': 5439, 'summit': 5440, 'pi': 5441, 'bachelor': 5442, 'bedroom': 5443, 'cobol': 5444, 'fortran': 5445, 'supergirl': 5446, 'sic': 5447, 'woodward': 5448, 'brethren': 5449, 'surfboard': 5450, 'neighborhood': 5451, 'refusing': 5452, 'bus': 5453, 'paying': 5454, 'roulette': 5455, 'corps': 5456, 'ingmar': 5457, 'bergman': 5458, 'deltiologist': 5459, 'hans': 5460, 'henderson': 5461, 'walking': 5462, 'hog': 5463, 'shadow': 5464, 'packers': 5465, 'philosophized': 5466, 'stokes': 5467, 'lawn': 5468, 'challenge': 5469, 'nylon': 5470, 'stockings': 5471, 'expedition': 5472, 'climbing': 5473, 'hostage': 5474, 'taking': 5475, 'wart': 5476, 'spaceball': 5477, 'rabies': 5478, 'revive': 5479, 'lying': 5480, 'preface': 5481, 'foreword': 5482, 'fingernails': 5483, '123': 5484, 'calcutta': 5485, 'sterilize': 5486, 'eclairs': 5487, 'businessman': 5488, 'humor': 5489, 'enigmatic': 5490, 'acquitted': 5491, 'treason': 5492, 'jurassic': 5493, 'chemiosmotic': 5494, 'cromwell': 5495, 'ion': 5496, 'trace': 5497, 'roots': 5498, 'diet': 5499, 'courier': 5500, 'creams': 5501, 'seaweed': 5502, 'dumb': 5503, 'loveable': 5504, 'gosfield': 
5505, 'phil': 5506, 'silvers': 5507, 'usb': 5508, 'cayman': 5509, 'vegetation': 5510, 'lol': 5511, 'amelia': 5512, 'earhart': 5513, 'disappeared': 5514, 'holden': 5515, 'caulfield': 5516, 'ace': 5517, 'manfred': 5518, 'richthofen': 5519, 'invaded': 5520, 'petroleum': 5521, 'asthma': 5522, 'biloxi': 5523, 'esperanto': 5524, 'nouns': 5525, 'heuristic': 5526, 'ostriches': 5527, 'blackhawks': 5528, 'maintain': 5529, 'freed': 5530, 'slaves': 5531, 'porter': 5532, 'gift': 5533, 'magi': 5534, 'waugh': 5535, 'dust': 5536, 'linus': 5537, 'cheery': 5538, 'fellow': 5539, '9971': 5540, 'yoo': 5541, 'hoo': 5542, 'settle': 5543, 'nearest': 5544, 'inoco': 5545, 'assassinations': 5546, '1865': 5547, 'brake': 5548, 'touchdowns': 5549, 'bullets': 5550, 'bebrenia': 5551, 'amazonis': 5552, 'natick': 5553, 'gimli': 5554, 'sparkling': 5555, '33': 5556, 'christi': 5557, 'caber': 5558, 'tossing': 5559, 'houston': 5560, 'oilers': 5561, 'oakland': 5562, 'raiders': 5563, 'stockyards': 5564, 'weird': 5565, 'knife': 5566, 'kinks': 5567, 'madre': 5568, 'lapwarmers': 5569, 'bovine': 5570, 'weakest': 5571, 'garrett': 5572, 'morgan': 5573, 'dipper': 5574, 'ishmael': 5575, 'flea': 5576, 'antilles': 5577, 'comparisons': 5578, 'prices': 5579, 'multimedia': 5580, 'cloth': 5581, 'camptown': 5582, 'racetrack': 5583, 'blob': 5584, 'herculoids': 5585, 'ninety': 5586, 'theses': 5587, 'inkhorn': 5588, 'pocahontas': 5589, 'bloodhound': 5590, 'plymouth': 5591, 'bunyan': 5592, 'ox': 5593, 'dartmouth': 5594, 'environmental': 5595, 'influences': 5596, 'backstreet': 5597, 'murdering': 5598, 'budapest': 5599, 'belgrade': 5600, 'wreaked': 5601, 'marching': 5602, 'conservationist': 5603, 'spokesperson': 5604, 'grape': 5605, 'carpal': 5606, 'acidic': 5607, 'redskin': 5608, 'fan': 5609, 'belly': 5610, 'buttons': 5611, 'suburban': 5612, 'feminine': 5613, 'mystique': 5614, 'ukrainians': 5615, 'perry': 5616, '600': 5617, '387': 5618, 'airplanes': 5619, 'hound': 5620, 'daws': 5621, 'palindromic': 5622, 'sailing': 5623, 'twain': 5624, 'ethnological': 5625, 'belle': 5626, 'beast': 5627, 'microscope': 5628, 'remains': 5629, 'tap': 5630, 'grandma': 5631, 'shoplifts': 5632, 'socratic': 5633, 'hypertext': 5634, 'peller': 5635, 'wendy': 5636, 'beef': 5637, 'blatty': 5638, 'recounts': 5639, 'regan': 5640, 'macneil': 5641, 'devil': 5642, 'lions': 5643, 'pomegranate': 5644, 'magee': 5645, 'calypso': 5646, 'basilica': 5647, 'advantages': 5648, 'selecting': 5649, 'bernadette': 5650, 'peters': 5651, 'reviews': 5652, 'turbulent': 5653, 'souls': 5654, 'gothic': 5655, 'alleged': 5656, 'shroud': 5657, 'turin': 5658, 'salk': 5659, 'martialled': 5660, 'criticizing': 5661, 'insanity': 5662, 'convert': 5663, 'enrolled': 5664, 'bands': 5665, 'instruments': 5666, 'koresh': 5667, 'langston': 5668, 'achievements': 5669, 'naacp': 5670, 'grinch': 5671, 'gompers': 5672, 'rayburn': 5673, 'pita': 5674, 'peacocks': 5675, 'mate': 5676, 'obote': 5677, 'niagara': 5678, 'crewel': 5679, 'narcolepsy': 5680, '1896': 5681, 'alda': 5682, 'smithsonian': 5683, 'mixture': 5684, 'sondheim': 5685, 'ballad': 5686, 'maybe': 5687, 'kite': 5688, 'pere': 5689, 'lachaise': 5690, 'cemetery': 5691, 'occurred': 5692, 'marilyn': 5693, 'monroe': 5694, 'skunks': 5695, 'medina': 5696, 'zoological': 5697, 'ruminant': 5698, 'hyperopia': 5699, 'assigned': 5700, 'longer': 5701, 'aladdin': 5702, 'tzimisce': 5703, "'the": 5704, 'boycott': 5705, 'funeral': 5706, 'springfield': 5707, 'merrick': 5708, 'ogre': 5709, 'urged': 5710, 'outstanding': 5711, 'dynasty': 5712, 'remote': 5713, 'hurt': 5714, 'hurting': 
5715, 'ultraviolet': 5716, 'lizzie': 5717, 'borden': 5718, 'polis': 5719, 'minneapolis': 5720, 'detailed': 5721, 'manchukuo': 5722, 'learned': 5723, 'saxophone': 5724, 'gum': 5725, 'pelvic': 5726, 'carson': 5727, '7847': 5728, '5943': 5729, 'preservation': 5730, 'favoured': 5731, 'struggle': 5732, 'chickenpoxs': 5733, 'attorneys': 5734, 'sheri': 5735, 'primate': 5736, 'pigment': 5737, 'palms': 5738, 'scalene': 5739, 'bearer': 5740, 'wants': 5741, 'gets': 5742, 'chilly': 5743, 'respond': 5744, 'millenium': 5745, 'hypnotherapy': 5746, 'rcd': 5747, 'pursued': 5748, 'tweety': 5749, 'pie': 5750, 'internal': 5751, 'combustion': 5752, 'biorhythm': 5753, 'portrait': 5754, 'grilled': 5755, 'bacon': 5756, 'brunettes': 5757, 'conservancy': 5758, 'sung': 5759, 'pajamas': 5760, 'transplants': 5761, 'wee': 5762, 'winkie': 5763, 'philippine': 5764, 'ex': 5765, 'prostitute': 5766, 'pimp': 5767, 'fighting': 5768, 'cinzano': 5769, 'fiesta': 5770, 'honors': 5771, '1996': 5772, 'vocal': 5773, 'sampling': 5774, 'windmills': 5775, 'hong': 5776, 'slum': 5777, 'badge': 5778, 'courage': 5779, 'kodak': 5780, 'inuits': 5781, 'trigonometry': 5782, 'compaq': 5783, 'trading': 5784, 'lap': 5785, 'sit': 5786, 'traffic': 5787, 'cone': 5788, 'jett': 5789, 'warhol': 5790, 'visine': 5791, 'cozumel': 5792, 'teenagers': 5793, 'sixties': 5794, 'granary': 5795, 'arsenal': 5796, 'mint': 5797, 'telegraph': 5798, 'whorehouse': 5799, 'fallen': 5800, 'haunt': 5801, 'roommates': 5802, 'saved': 5803, 'tanker': 5804, 'snowiest': 5805, 'crabgrass': 5806, 'mancha': 5807, 'hawking': 5808, 'kindergarden': 5809, 'optical': 5810, 'clause': 5811, 'altered': 5812, 'amended': 5813, 'guess': 5814, 'anus': 5815, 'rectum': 5816, 'jpeg': 5817, 'bitmap': 5818, 'franz': 5819, 'battlefield': 5820, 'wheat': 5821, 'compass': 5822, 'counties': 5823, 'indiana': 5824, 'folies': 5825, 'bergeres': 5826, 'aesop': 5827, 'fable': 5828, 'swift': 5829, 'steady': 5830, 'bound': 5831, 'venetian': 5832, 'venice': 5833, 'treated': 5834, 'protection': 5835, 'limited': 5836, 'partnership': 5837, 'roses': 5838, 'embracing': 5839, 'napoleonic': 5840, 'critical': 5841, 'consisting': 5842, 'corners': 5843, 'spritsail': 5844, 'baghdad': 5845, 'multiplexer': 5846, 'centurion': 5847, 'poconos': 5848, 'nike': 5849, 'powered': 5850, 'norwegian': 5851, 'southernmost': 5852, 'brian': 5853, 'boru': 5854, '11th': 5855, 'asleep': 5856, 'norman': 5857, 'poems': 5858, 'fools': 5859, 'docklands': 5860, 'wonderbra': 5861, 'proverb': 5862, 'stitch': 5863, 'saves': 5864, 'thursday': 5865, 'telephones': 5866, 'vichyssoise': 5867, 'manson': 5868, 'rams': 5869, 'grab': 5870, 'gusto': 5871, 'portraits': 5872, 'expect': 5873, 'monthly': 5874, 'publication': 5875, 'bigfoot': 5876, 'collins': 5877, 'punishment': 5878, 'mailman': 5879, 'beasley': 5880, 'provided': 5881, 'listen': 5882, 'incubate': 5883, 'parts': 5884, 'surgeon': 5885, 'performed': 5886, 'haifa': 5887, 'yogurt': 5888, 'benedict': 5889, 'agricultural': 5890, 'electronic': 5891, 'visual': 5892, 'displays': 5893, 'corresponding': 5894, 'signals': 5895, 'goldenseal': 5896, 'composition': 5897, 'rodeo': 5898, 'iris': 5899, 'tetrinet': 5900, 'marvelous': 5901, 'spokesman': 5902, 'chiricahua': 5903, 'beryl': 5904, 'romania': 5905, 'lcd': 5906, 'amezaiku': 5907, 'brenner': 5908, '64': 5909, 'predominant': 5910, 'assisi': 5911, 'megawatts': 5912, 'consortium': 5913, 'chancery': 5914, 'sexiest': 5915, 'photograph': 5916, 'quirk': 5917, 'germanic': 5918, 'hungry': 5919, 'kisser': 5920, 'beating': 5921, 'conjugations': 5922, 'woke': 5923, 
"'etat": 5924, 'photographs': 5925, 'calhoun': 5926, 'acting': 5927, 'lunt': 5928, 'fontanne': 5929, 'exercises': 5930, 'juices': 5931, 'principal': 5932, 'poodle': 5933, 'zionism': 5934, 'bills': 5935, 'backgammon': 5936, 'volley': 5937, 'pulp': 5938, 'rabbits': 5939, 'swastika': 5940, 'stood': 5941, 'goosebumps': 5942, 'emotional': 5943, 'aroused': 5944, 'frames': 5945, 'theo': 5946, 'rousseau': 5947, 'fontaine': 5948, 'yield': 5949, 'maturity': 5950, 'bonds': 5951, 'surpassing': 5952, 'crypt': 5953, 'beneath': 5954, 'rotunda': 5955, 'superbowl': 5956, 'cribbage': 5957, 'prisoner': 5958, 'refugee': 5959, 'tsetse': 5960, 'agreement': 5961, 'serves': 5962, 'kane': 5963, 'troilism': 5964, 'flyer': 5965, 'mistakenly': 5966, 'html': 5967, 'identify': 5968, 'migrates': 5969, 'everybody': 5970, 'hocks': 5971, 'flogged': 5972, 'inga': 5973, 'nielsen': 5974, 'lund': 5975, 'lifelong': 5976, 'funnel': 5977, 'spouting': 5978, 'kissed': 5979, 'pushes': 5980, 'executive': 5981, 'cranes': 5982, 'finally': 5983, 'imprisoned': 5984, '1931': 5985, 'possum': 5986, 'cholera': 5987, 'firehole': 5988, 'base': 5989, 'indicator': 5990, 'eckley': 5991, 'stairway': 5992, 'curl': 5993, 'pushed': 5994, 'coupled': 5995, 'hump': 5996, 'hosted': 5997, 'breony': 5998, 'sardonyx': 5999, 'wallbanger': 6000, 'beholder': 6001, 'oldtime': 6002, 'guide': 6003, 'jeff': 6004, 'greenfield': 6005, 'subversive': 6006, 'collier': 6007, 'saudi': 6008, 'arabia': 6009, 'lingo': 6010, 'cambodia': 6011, 'profit': 6012, '836': 6013, 'vamp': 6014, 'portrays': 6015, 'joad': 6016, 'dustbowl': 6017, 'eli': 6018, 'lilly': 6019, 'servers': 6020, 'oilseeds': 6021, 'thru': 6022, 'bulbs': 6023, 'jar': 6024, 'mayan': 6025, 'warmup': 6026, 'pitches': 6027, 'reliever': 6028, 'dylan': 6029, 'livestock': 6030, 'creating': 6031, 'scandal': 6032, 'daring': 6033, 'gown': 6034, 'wassermann': 6035, 'specific': 6036, 'ornaments': 6037, 'communications': 6038, 'yachts': 6039, 'fig': 6040, 'newtons': 6041, 'premier': 6042, 'cigar': 6043, 'chewing': 6044, 'observed': 6045, 'feel': 6046, 'stickers': 6047, 'cisco': 6048, 'packages': 6049, 'vichy': 6050, 'kidnaping': 6051, 'termed': 6052, 'crime': 6053, '1922': 6054, 'buxom': 6055, 'blonde': 6056, 'recruitment': 6057, 'donation': 6058, 'entail': 6059, 'blythe': 6060, 'rises': 6061, 'garment': 6062, 'bradley': 6063, 'voorhees': 6064, 'barrier': 6065, 'destroyed': 6066, 'occam': 6067, 'grimace': 6068, 'mccheese': 6069, 'appalachian': 6070, 'fruits': 6071, 'survival': 6072, 'clitoridectomy': 6073, 'tampa': 6074, 'surge': 6075, 'farther': 6076, 'opposed': 6077, 'further': 6078, 'alternate': 6079, 'ran': 6080, 'nickel': 6081, 'cadmium': 6082, 'rechargeable': 6083, 'recharged': 6084, 'seats': 6085, 'batmobile': 6086, 'rummy': 6087, 'phillip': 6088, 'kramer': 6089, 'erica': 6090, 'jong': 6091, 'isadora': 6092, 'wing': 6093, 'thai': 6094, 'tournaments': 6095, 'prevailing': 6096, 'winds': 6097, 'metamorphosis': 6098, 'awakes': 6099, 'translate': 6100, 'mia': 6101, 'farrow': 6102, 'svga': 6103, 'adapter': 6104, 'g7': 6105, 'walked': 6106, 'vocalist': 6107, 'hansel': 6108, 'gretel': 6109, 'pain': 6110, 'canonize': 6111, 'nonconsecutive': 6112, 'tornados': 6113, 'lot': 6114, 'carolingian': 6115, 'merrie': 6116, 'melodies': 6117, 'reports': 6118, 'emperors': 6119, 'cabarnet': 6120, 'sauvignon': 6121, 'frosted': 6122, 'flakes': 6123, 'brief': 6124, 'conquered': 6125, 'spock': 6126, 'newspapers': 6127, 'dispose': 6128, 'garbage': 6129, 'prosecutor': 6130, 'later': 6131, 'screens': 6132, 'magnet': 6133, 'nina': 6134, 'theatre': 
6135, 'burn': 6136, '1954': 6137, 'sed': 6138, 'nomadic': 6139, 'gathering': 6140, 'caine': 6141, 'flab': 6142, 'chin': 6143, 'rhyme': 6144, 'needs': 6145, 'freedy': 6146, 'johnston': 6147, 'gametophytic': 6148, 'tissue': 6149, 'catsup': 6150, 'conifer': 6151, 'perfectly': 6152, 'textiles': 6153, 'ambassadorial': 6154, 'shays': 6155, 'rebellion': 6156, '1787': 6157, 'chromatology': 6158, 'edge': 6159, 'aclu': 6160, 'albums': 6161, 'goldfish': 6162, 'dimly': 6163, 'lit': 6164, 'prix': 6165, 'driving': 6166, 'straight': 6167, 'lesson': 6168, 'teaching': 6169, 'metric': 6170, 'kythnos': 6171, 'siphnos': 6172, 'seriphos': 6173, 'mykonos': 6174, 'skater': 6175, 'lines': 6176, 'footballs': 6177, 'savings': 6178, 'mature': 6179, 'abigail': 6180, 'arcane': 6181, 'villainous': 6182, 'opponent': 6183, 'swamp': 6184, 'harmful': 6185, 'spray': 6186, 'kenya': 6187, 'bernstein': 6188, 'fermont': 6189, 'theorem': 6190, 'tim': 6191, 'heliologist': 6192, 'prevents': 6193, 'eczema': 6194, 'seborrhea': 6195, 'psoriasis': 6196, 'antichrist': 6197, 'exclusively': 6198, 'residence': 6199, 'teats': 6200, 'kilamanjaro': 6201, 'crocodile': 6202, 'swallow': 6203, 'mushroom': 6204, 'deployed': 6205, 'microwaves': 6206, 'bullfighting': 6207, 'article': 6208, 'estimated': 6209, 'whitetail': 6210, 'farmer': 6211, 'almanac': 6212, 'assent': 6213, 'emblem': 6214, 'dartboard': 6215, 'dramatized': 6216, 'offered': 6217, 'aquatic': 6218, 'scenes': 6219, 'springs': 6220, 'brimstone': 6221, 'monk': 6222, 'burnt': 6223, 'stake': 6224, 'audio': 6225, 'afs': 6226, 'quetzalcoatl': 6227, 'sparkles': 6228, 'circulatory': 6229, 'bagdad': 6230, 'bubble': 6231, 'wrap': 6232, 'java': 6233, 'squats': 6234, 'doubleheader': 6235, 'rhymes': 6236, 'solomon': 6237, 'health': 6238, 'nutrition': 6239, 'sebastian': 6240, 'yiddish': 6241, 'theater': 6242, 'stethoscope': 6243, 'mathematical': 6244, 'millionth': 6245, 'nbc': 6246, 'congressional': 6247, 'delegation': 6248, 'erupts': 6249, 'retired': 6250, '755': 6251, 'represents': 6252, 'abbey': 6253, 'rubin': 6254, 'hayden': 6255, 'rossini': 6256, 'siskel': 6257, 'snoring': 6258, 'ridge': 6259, 'eastward': 6260, 'westward': 6261, 'flowing': 6262, 'wished': 6263, 'looked': 6264, 'cowardly': 6265, 'chiropodist': 6266, 'porphyria': 6267, 'soy': 6268, 'kurt': 6269, 'cobain': 6270, 'shine': 6271, 'clot': 6272, 'pleasure': 6273, 'fertile': 6274, 'jeans': 6275, 'calvin': 6276, 'klein': 6277, 'comfortable': 6278, 'abbie': 6279, 'dose': 6280, 'friction': 6281, 'mormon': 6282, '69': 6283, 'indianapolis': 6284, 'tucson': 6285, 'melbourne': 6286, 'compare': 6287, 'pillar': 6288, 'contemplating': 6289, 'brilliant': 6290, 'economist': 6291, 'creation': 6292, 'sally': 6293, 'dyke': 6294, 'experience': 6295, 'mythical': 6296, 'hourglass': 6297, 'scythe': 6298, 'twenty': 6299, 'didn': 6300, 'challenged': 6301, 'explore': 6302, 'sleeping': 6303, 'donate': 6304, 'truly': 6305, 'numbered': 6306, 'vats': 6307, 'judged': 6308, '1863': 6309, 'criticism': 6310, 'throw': 6311, 'housewarming': 6312, 'hurley': 6313, 'impulse': 6314, 'hardening': 6315, 'kim': 6316, 'philby': 6317, 'freddy': 6318, 'freeman': 6319, 'rona': 6320, 'barrett': 6321, 'lustrum': 6322, 'encounters': 6323, 'mathematician': 6324, 'glamorous': 6325, 'metalious': 6326, 'unleashed': 6327, 'celestials': 6328, 'paths': 6329, 'enhance': 6330, 'sporting': 6331, 'collapsed': 6332, 'erle': 6333, 'gardner': 6334, 'terrified': 6335, 'cleopatra': 6336, 'expert': 6337, 'describing': 6338, 'residents': 6339, 'lesbos': 6340, 'organizational': 6341, 'delhi': 6342, 
'indira': 6343, 'mistletoe': 6344, 'plugged': 6345, 'spectacle': 6346, 'telecast': 6347, 'amen': 6348, 'baffin': 6349, 'frobisher': 6350, 'limbo': 6351, 'credits': 6352, 'physician': 6353, 'inventions': 6354, 'bremer': 6355, 'escape': 6356, 'apostle': 6357, 'caldwell': 6358, 'zone': 6359, 'archery': 6360, 'anesthetic': 6361, 'allow': 6362, 'periodic': 6363, 'solid': 6364, 'liquid': 6365, 'tonne': 6366, 'entirely': 6367, 'deet': 6368, 'sagebrush': 6369, 'bernoulli': 6370, 'poster': 6371, 'scrum': 6372, 'improve': 6373, 'morale': 6374, 'bowler': 6375, 'facing': 6376, '37803': 6377, 'pin': 6378, 'resources': 6379, 'teachers': 6380, 'israeli': 6381, '168': 6382, 'recomended': 6383, 'switch': 6384, 'crib': 6385, 'jdr3': 6386, 'mendelevium': 6387, 'users': 6388, 'friz': 6389, 'freleng': 6390, 'ranks': 6391, 'sideburns': 6392, 'resulting': 6393, '1849': 6394, 'sutter': 6395, 'moorish': 6396, 'erich': 6397, 'melt': 6398, 'taught': 6399, 'matt': 6400, 'murdock': 6401, 'extraordinary': 6402, 'abilities': 6403, 'wile': 6404, 'coyote': 6405, 'lent': 6406, 'mandibulofacial': 6407, 'dysostosis': 6408, 'partition': 6409, 'churches': 6410, 'famously': 6411, 'warn': 6412, 'dtmf': 6413, 'sandra': 6414, 'bullock': 6415, 'blew': 6416, 'lakehurst': 6417, 'commanded': 6418, 'individual': 6419, 'tested': 6420, 'captained': 6421, 'ernst': 6422, 'lehmann': 6423, 'sprouted': 6424, 'opposition': 6425, 'konrad': 6426, 'adenauer': 6427, 'lipstick': 6428, 'wax': 6429, 'madame': 6430, 'tussaud': 6431, 'terror': 6432, 'horton': 6433, 'touched': 6434, 'shortstop': 6435, 'iditarod': 6436, 'stay': 6437, 'reinstate': 6438, 'selective': 6439, 'registration': 6440, 'pamplona': 6441, 'motor': 6442, 'collectible': 6443, '7th': 6444, 'inning': 6445, 'gitchee': 6446, 'gumee': 6447, 'tristan': 6448, 'reb': 6449, 'yank': 6450, 'guidance': 6451, 'jpl': 6452, 'goldfinger': 6453, 'hobby': 6454, 'shelf': 6455, 'beside': 6456, 'crouching': 6457, '1886': 6458, 'tub': 6459, 'treatments': 6460, 'jessica': 6461, 'gangland': 6462, 'slaughter': 6463, 'membership': 6464, 'moran': 6465, 'outfit': 6466, 'exile': 6467, 'tailors': 6468, 'elongated': 6469, 'afoot': 6470, 'goldilocks': 6471, 'kreme': 6472, 'collided': 6473, 'truck': 6474, 'swatch': 6475, 'nuremberg': 6476, 'keller': 6477, 'taken': 6478, 'track': 6479, 'etched': 6480, 'excellence': 6481, 'exposition': 6482, 'campbell': 6483, 'parma': 6484, 'traditions': 6485, 'elizabethian': 6486, 'quicker': 6487, 'sultan': 6488, 'ski': 6489, 'dolomites': 6490, 'weekend': 6491, 'monterey': 6492, 'stern': 6493, 'caul': 6494, 'propaganda': 6495, 'successfully': 6496, 'quantum': 6497, 'leaps': 6498, 'simpler': 6499, 'acoustic': 6500, 'med': 6501, 'edentulous': 6502, 'smile': 6503, 'jealousy': 6504, 'flytrap': 6505, '327': 6506, 'shelves': 6507, 'banking': 6508, 'makepeace': 6509, 'thackeray': 6510, 'kubrick': 6511, 'reproduce': 6512, 'reputed': 6513, 'priest': 6514, 'marxism': 6515, 'boiled': 6516, 'skyline': 6517, 'belize': 6518, 'paine': 6519, 'sued': 6520, 'dannon': 6521, 'yougurt': 6522, 'ron': 6523, 'raider': 6524, 'promotion': 6525, 'carroll': 6526, 'robb': 6527, 'hydroelectricity': 6528, 'taller': 6529, 'unsafe': 6530, 'antigua': 6531, 'abacus': 6532, 'popularly': 6533, 'mass': 6534, 'exposed': 6535, 'granite': 6536, 'commander': 6537, 'yorktown': 6538, '1781': 6539, 'kinsey': 6540, 'preference': 6541, 'males': 6542, 'procedure': 6543, 'drilling': 6544, 'skull': 6545, 'acheive': 6546, 'higher': 6547, 'garmat': 6548, 'karl': 6549, 'madsen': 6550, 'byzantine': 6551, 'appoint': 6552, 'splatterpunk': 
6553, 'orgin': 6554, 'xoxoxox': 6555, 'southeast': 6556, 'wang': 6557, 'joining': 6558, 'ping': 6559, 'tak': 6560, '155': 6561, 'leonardo': 6562, 'vinci': 6563, 'michaelangelo': 6564, 'machiavelli': 6565, 'fascist': 6566, 'lottery': 6567, 'haboob': 6568, 'blows': 6569, 'fabric': 6570, 'cake': 6571, 'msg': 6572, 'saks': 6573, 'zoo': 6574, 'yaroslavl': 6575, 'gemstone': 6576, 'nebbish': 6577, 'powdered': 6578, 'recognition': 6579, 'services': 6580, 'quelling': 6581, 'rebellions': 6582, 'bytes': 6583, 'terabyte': 6584, 'hooked': 6585, 'ally': 6586, 'mcbeal': 6587, 'ivan': 6588, 'iv': 6589, 'expansion': 6590, 'forged': 6591, 'cliff': 6592, 'robertson': 6593, 'damocles': 6594, 'televised': 6595, 'dondi': 6596, 'adoptive': 6597, 'grandfather': 6598, 'smelly': 6599, 'lemon': 6600, 'automobiles': 6601, 'zolotow': 6602, 'concerts': 6603, 'groundshog': 6604, 'andie': 6605, 'macdowell': 6606, 'hairy': 6607, 'chiang': 6608, 'kai': 6609, 'shek': 6610, 'hijack': 6611, 'rah': 6612, 'enlivens': 6613, 'hanover': 6614, 'cousin': 6615, 'theodore': 6616, 'arts': 6617, 'footwear': 6618, 'boats': 6619, "'neal": 6620, 'unification': 6621, 'zeros': 6622, 'trillion': 6623, 'crimean': 6624, 'eligible': 6625, 'drunken': 6626, 'drivers': 6627, 'dragged': 6628, 'terrier': 6629, 'forfeited': 6630, 'lawnmower': 6631, 'letterman': 6632, 'knicks': 6633, 'titles': 6634, 'rated': 6635, 'sony': 6636, 'playstation': 6637, 'symbolizes': 6638, 'urban': 6639, 'bells': 6640, 'bering': 6641, 'smartnet': 6642, 'synonym': 6643, 'vermicilli': 6644, 'rigati': 6645, 'zitoni': 6646, 'tubetti': 6647, 'grocer': 6648, 'fingertips': 6649, 'philosophy': 6650, 'plans': 6651, 'forerunner': 6652, 'buds': 6653, 'snickers': 6654, 'musketeers': 6655, 'sysrq': 6656, 'key': 6657, 'stricken': 6658, 'contibution': 6659, 'experiment': 6660, 'gabel': 6661, 'maris': 6662, '61': 6663, 'submerged': 6664, 'fringe': 6665, 'ossining': 6666, 'application': 6667, 'hydrosulfite': 6668, 'allsburg': 6669, 'tries': 6670, 'components': 6671, 'polyester': 6672, 'dig': 6673, 'intergovernmental': 6674, 'affairs': 6675, 'espn': 6676, 'laptop': 6677, 'natchitoches': 6678, 'pointed': 6679, 'handwriting': 6680, 'analyst': 6681, 'recovery': 6682, 'taiwan': 6683, 'hawkins': 6684, '1562': 6685, 'burr': 6686, 'cartier': 6687, 'aviator': 6688, 'tempelhol': 6689, 'igor': 6690, 'suicides': 6691, 'regardless': 6692, 'priestley': 6693, 'erase': 6694, 'licensed': 6695, 'blend': 6696, 'herbs': 6697, 'spices': 6698, 'mid': 6699, '1900s': 6700, 'janet': 6701, 'enthalpy': 6702, 'reaction': 6703, 'sired': 6704, 'hustle': 6705, 'gemini': 6706, 'grange': 6707, 'gretzky': 6708, 'nones': 6709, 'warm': 6710, 'peabody': 6711, 'sherman': 6712, 'bullwinkle': 6713, "'d": 6714, 'lovely': 6715, 'dumplings': 6716, 'celestial': 6717, '864': 6718, 'circus': 6719, 'wittenberg': 6720, 'kathryn': 6721, 'hinduism': 6722, 'denmark': 6723, 'ankle': 6724, 'sprain': 6725, '313': 6726, 'biochemists': 6727, 'alpert': 6728, 'moss': 6729, 'thermal': 6730, 'equilibrium': 6731, 'behavior': 6732, 'violates': 6733, 'accepted': 6734, 'standards': 6735, 'morality': 6736, 'magnate': 6737, 'initials': 6738, 'sleeve': 6739, 'padres': 6740, 'neurons': 6741, 'reptiles': 6742, 'ridder': 6743, 'kdge': 6744, 'executioner': 6745, 'bid': 6746, 'chamber': 6747, 'doegs': 6748, 'plumbism': 6749, 'relatives': 6750, 'tears': 6751, 'salzburg': 6752, 'shown': 6753, '188': 6754, 'arles': 6755, 'stings': 6756, 'below': 6757, 'fulton': 6758, 'infomatics': 6759, 'bios': 6760, 'keck': 6761, 'telescope': 6762, 'apartments': 6763, 
'brunswick': 6764, 'resurrectionist': 6765, 'vegetables': 6766, 'combined': 6767, 'succotash': 6768, 'reality': 6769, 'manufacturers': 6770, 'poke': 6771, 'cullion': 6772, 'safest': 6773, 'pedestrians': 6774, 'craig': 6775, 'stevens': 6776, 'meanie': 6777, 'angela': 6778, 'divide': 6779, 'mvp': 6780, '999': 6781, 'celebrations': 6782, 'fears': 6783, 'palpatine': 6784, 'wilkes': 6785, 'plantation': 6786, 'flat': 6787, 'explosion': 6788, 'sphere': 6789, 'statistical': 6790, 'barnstorming': 6791, 'dumbest': 6792, 'importance': 6793, 'magellan': 6794, 'grades': 6795, 'husbands': 6796, 'hilton': 6797, 'wilding': 6798, 'fubu': 6799, 'oop': 6800, 'moo': 6801, 'tastes': 6802, 'distinguish': 6803, 'travelers': 6804, 'covered': 6805, 'menu': 6806, 'item': 6807, 'spicey': 6808, 'supporting': 6809, 'stradivarius': 6810, 'childhood': 6811, 'ticker': 6812, '1870': 6813, 'afraid': 6814, 'debts': 6815, 'qintex': 6816, 'hates': 6817, 'mankind': 6818, 'milt': 6819, 'austerlitz': 6820, 'ty': 6821, 'cobb': 6822, 'philanthropist': 6823, 'portal': 6824, 'goodness': 6825, 'describes': 6826, 'usage': 6827, 'avoid': 6828, 'darning': 6829, 'needles': 6830, 'stingers': 6831, 'excite': 6832, 'proceed': 6833, 'vitamins': 6834, 'penguins': 6835, 'richards': 6836, 'idle': 6837, 'fordham': 6838, 'waynesburg': 6839, '12601': 6840, 'serigraph': 6841, 'hallie': 6842, 'woods': 6843, 'macarthur': 6844, '1767': 6845, '1834': 6846, 'racehorse': 6847, '20th': 6848, 'eminem': 6849, 'slim': 6850, 'shady': 6851, 'final': 6852, 'weir': 6853, 'subaru': 6854, 'endometriosis': 6855, 'geoscientist': 6856, 'robust': 6857, 'imported': 6858, 'instructor': 6859, 'judo': 6860, 'stem': 6861, 'edessa': 6862, 'levitation': 6863, 'btu': 6864, 'untouchables': 6865, 'vdrl': 6866, 'tackle': 6867, 'eagles': 6868, 'xv': 6869, 'endurance': 6870, 'hardy': 6871, 'silversmith': 6872, 'violent': 6873, 'niece': 6874, 'nephew': 6875, 'assassin': 6876, 'tumbling': 6877, 'maudie': 6878, 'frickett': 6879, 'leaky': 6880, 'valve': 6881, 'myself': 6882, 'manicure': 6883, 'circumnavigator': 6884, 'syzygy': 6885, 'waterways': 6886, '76': 6887, 'liberated': 6888, 'strasbourg': 6889, 'baseman': 6890, 'ports': 6891, 'christine': 6892, 'possessed': 6893, 'goals': 6894, 'scored': 6895, 'resembled': 6896, 'jackass': 6897, 'tattoo': 6898, 'forever': 6899, 'frommer': 6900, 'observances': 6901, 'chair': 6902, 'reserve': 6903, 'friendliness': 6904, 'scsi': 6905, 'funny': 6906, 'preferably': 6907, 'radiation': 6908, 'marzipan': 6909, 'polyorchid': 6910, 'abolished': 6911, 'permutations': 6912, 'osteichthyes': 6913, 'nasty': 6914, 'topic': 6915, 'outline': 6916, 'conformist': 6917, 'dripper': 6918, 'furlongs': 6919, 'quarter': 6920, 'recetrack': 6921, 'millimeters': 6922, 'symbolize': 6923, '1699': 6924, '172': 6925, 'foreigner': 6926, 'sum': 6927, 'genetic': 6928, 'soundtrack': 6929, 'melman': 6930, 'limestone': 6931, 'deposit': 6932, 'rising': 6933, 'swing': 6934, 'bookshop': 6935, 'silkworm': 6936, 'moth': 6937, 'domestication': 6938, 'tenths': 6939, 'marl': 6940, 'sourness': 6941, 'lan': 6942, 'activated': 6943, 'insects': 6944, 'spiracles': 6945, 'arches': 6946, 'natives': 6947, 'stevenson': 6948, 'deacon': 6949, 'brodie': 6950, 'cabinetmaker': 6951, 'burglar': 6952, 'rejection': 6953, 'rallying': 6954, 'dubliners': 6955, 'underage': 6956, 'watchman': 6957, 'wills': 6958, 'sword': 6959, 'candice': 6960, 'bergen': 6961, 'jacqueline': 6962, 'bisset': 6963, 'remake': 6964, '1943': 6965, 'acquaintance': 6966, '43rd': 6967, 'aerodynamics': 6968, 'laboratory': 6969, '1912': 
6970, 'calder': 6971, 'oas': 6972, 'forsyth': 6973, 'toppling': 6974, 'mercenaries': 6975, 'baretta': 6976, 'cockatoo': 6977, 'trader': 6978, 'conterminous': 6979, 'sequencing': 6980, 'chop': 6981, 'suey': 6982, 'satelite': 6983, 'archimedes': 6984, 'lucille': 6985, 'delicate': 6986, 'tasting': 6987, 'onion': 6988, '239': 6989, '48th': 6990, 'quotes': 6991, 'bullseye': 6992, 'darts': 6993, 'mythology': 6994, 'cunnilingus': 6995, 'reunited': 6996, 'maltese': 6997, 'falconers': 6998, 'astor': 6999, 'sidney': 7000, 'greenstreet': 7001, 'deranged': 7002, 'otto': 7003, 'octavius': 7004, 'acceptance': 7005, 'speech': 7006, 'yous': 7007, 'seuss': 7008, 'verdandi': 7009, 'dined': 7010, 'oysters': 7011, 'carpenter': 7012, 'guadalcanal': 7013, 'elk': 7014, 'badly': 7015, 'tarnished': 7016, 'brass': 7017, 'tied': 7018, 'ruble': 7019, 'irl': 7020, 'scott': 7021, '194': 7022, 'ants': 7023, 'ku': 7024, 'klux': 7025, 'klan': 7026, 'ukraine': 7027, 'hdlc': 7028, 'joins': 7029, 'spritz': 7030, 'spritzer': 7031, 'nematode': 7032, 'phobophobe': 7033, 'capitalism': 7034, 'max': 7035, 'weber': 7036, 'arson': 7037, 'refuse': 7038, 'orly': 7039, 'woodstock': 7040, 'gambling': 7041, 'task': 7042, 'bouvier': 7043, 'somene': 7044, 'solved': 7045, 'bella': 7046, 'abzug': 7047, 'sartorial': 7048, 'macdonald': 7049, 'lew': 7050, 'archer': 7051, 'superb': 7052, 'affiant': 7053, 'raced': 7054, 'threat': 7055, 'thefts': 7056, 'lickin': 7057, 'commandant': 7058, 'stalag': 7059, 'terrorism': 7060, 'accompanied': 7061, 'missions': 7062, 'est': 7063, 'pas': 7064, 'except': 7065, 'repeats': 7066, 'tampon': 7067, 'cct': 7068, 'diagram': 7069, 'ismail': 7070, 'farouk': 7071, 'enchanted': 7072, 'evening': 7073, 'supplement': 7074, 'locomotive': 7075, 'horlick': 7076, 'adventuring': 7077, 'rann': 7078, 'adam': 7079, 'strange': 7080, 'desk': 7081, 'loud': 7082, 'inaction': 7083, 'ecological': 7084, 'niche': 7085, 'fireplug': 7086, 'walt': 7087, 'none': 7088, 'employees': 7089, 'kwai': 7090, 'vending': 7091, 'distribute': 7092, 'humanitarian': 7093, 'relief': 7094, 'somalia': 7095, 'elephants': 7096, 'doris': 7097, 'certainly': 7098, 'practical': 7099, 'marketed': 7100, 'drought': 7101, '173': 7102, '732': 7103, 'mailing': 7104, 'lists': 7105, 'billingsgate': 7106, 'fishmarket': 7107, 'oj': 7108, 'detroit': 7109, 'calleda': 7110, '1928': 7111, 'thin': 7112, 'clubs': 7113, 'peasant': 7114, 'peugeot': 7115, 'continents': 7116, 'refers': 7117, 'automation': 7118, 'knighted': 7119, 'eating': 7120, 'utensils': 7121, 'handicapped': 7122, 'daylight': 7123, 'lutine': 7124, 'announce': 7125, 'chernobyl': 7126, 'accident': 7127, 'maya': 7128, 'scratch': 7129, 'ancients': 7130, 'haversian': 7131, 'canals': 7132, 'julie': 7133, 'poppins': 7134, 'status': 7135, 'predicted': 7136, 'topple': 7137, '2010': 7138, '2020': 7139, 'wasps': 7140, 'manuel': 7141, 'noriega': 7142, 'ousted': 7143, 'authorities': 7144, 'breaking': 7145, 'ladybugs': 7146, 'taft': 7147, 'benson': 7148, 'majal': 7149, 'brothel': 7150, 'doughnut': 7151, 'pompeii': 7152, 'farrier': 7153, 'saliva': 7154, 'nearsightedness': 7155, 'mayfly': 7156, 'petersburg': 7157, 'petrograd': 7158, 'dissented': 7159, 'pia': 7160, 'zadora': 7161, 'millionaire': 7162, 'compounds': 7163, 'astronomer': 7164, 'umbrellas': 7165, 'feminist': 7166, 'politics': 7167, 'zodiac': 7168, 'districts': 7169, 'snafu': 7170, 'chablis': 7171, 'vince': 7172, 'lombardi': 7173, 'coaching': 7174, 'fatalism': 7175, 'determinism': 7176, 'fractal': 7177, 'blockade': 7178, '1603': 7179, 'cameras': 7180, 'naseem': 7181, 
'hamed': 7182, 'scorpion': 7183, 'logarithmic': 7184, 'scales': 7185, 'slide': 7186, 'webster': 7187, 'circulation': 7188, 'britney': 7189, 'everyday': 7190, 'midi': 7191, 'pesth': 7192, 'buda': 7193, 'merged': 7194, 'fishing': 7195, 'pail': 7196, 'gangster': 7197, 'youngman': 7198, 'beats': 7199, 'papers': 7200, 'textile': 7201, 'snakebite': 7202, 'admitted': 7203, 'billion': 7204, 'appointments': 7205, 'worm': 7206, '1980s': 7207, 'captured': 7208, 'syrian': 7209, 'cloud': 7210, '924': 7211, 'rebounds': 7212, 'continuing': 7213, 'dialog': 7214, 'contemporary': 7215, 'issues': 7216, 'readers': 7217, 'tape': 7218, 'understand': 7219, 'cables': 7220, 'manchester': 7221, 'discontinued': 7222, 'batman': 7223, 'batcycle': 7224, 'saute': 7225, 'schematics': 7226, 'windshield': 7227, 'wiper': 7228, 'mechanism': 7229, 'archenemy': 7230, 'schoolhouse': 7231, 'schooling': 7232, 'highschool': 7233, 'crops': 7234, 'showers': 7235, 'aztecs': 7236, 'flightless': 7237, 'sawyer': 7238, 'aunt': 7239, 'micronauts': 7240, 'traveling': 7241, 'microverse': 7242, 'pugilist': 7243, 'cauliflower': 7244, 'mcpugg': 7245, 'seagull': 7246, 'ouzo': 7247, '137': 7248, 'uruguay': 7249, 'eliot': 7250, 'wordsworth': 7251, 'replica': 7252, 'disneyland': 7253, 'deserts': 7254, 'qatar': 7255, 'crisscrosses': 7256, 'urologist': 7257, 'jeroboams': 7258, 'rugby': 7259, 'warren': 7260, 'spahn': 7261, '20': 7262, 'skittles': 7263, 'qualifications': 7264, 'donating': 7265, 'olivia': 7266, 'havilland': 7267, 'pressured': 7268, 'appointing': 7269, 'conflicts': 7270, 'hairdryer': 7271, 'eats': 7272, 'sleeps': 7273, 'underground': 7274, 'portuguese': 7275, 'martial': 7276, 'sinning': 7277, 'edison': 7278, 'extends': 7279, 'alabama': 7280, 'lemurs': 7281, 'agencies': 7282, 'employment': 7283, 'verification': 7284, 'onetime': 7285, 'socialism': 7286, 'claws': 7287, 'bucks': 7288, 'condensed': 7289, 'spamming': 7290, 'scores': 7291, 'rockin': 7292, 'protects': 7293, 'realm': 7294, 'droppings': 7295, 'feat': 7296, 'homerian': 7297, 'trojan': 7298, 'vesuvius': 7299, 'prenatal': 7300, 'supercontinent': 7301, 'pangaea': 7302, 'break': 7303, 'lime': 7304, 'cherry': 7305, 'thirds': 7306, 'preston': 7307, 'snarly': 7308, 'shelleen': 7309, 'pens': 7310, 'englishmen': 7311, 'walks': 7312, '1919': 7313, 'occurrence': 7314, 'unarmed': 7315, 'protestors': 7316, 'moog': 7317, 'synthesizer': 7318, 'niigata': 7319, 'filenes': 7320, 'radiographer': 7321, 'disaccharide': 7322, 'faring': 7323, 'inhumans': 7324, 'appropriates': 7325, 'rarely': 7326, 'lavender': 7327, 'wwf': 7328, 'rude': 7329, 'porgy': 7330, 'bess': 7331, 'clone': 7332, 'larynx': 7333, 'luggage': 7334, 'flier': 7335, 'rearranged': 7336, 'lucelly': 7337, 'garcia': 7338, 'honduras': 7339, 'sneezing': 7340, 'quick': 7341, 'tbk': 7342, 'seafaring': 7343, 'swapped': 7344, 'families': 7345, 'plagues': 7346, 'wheels': 7347, 'rounded': 7348, 'matchbook': 7349, 'gregorian': 7350, 'corsica': 7351, 'hive': 7352, 'slotbacks': 7353, 'tailbacks': 7354, 'touchbacks': 7355, 'complemented': 7356, 'potatoes': 7357, 'peas': 7358, 'repeating': 7359, 'voter': 7360, 'dingoes': 7361, 'atlas': 7362, 'leoncavallo': 7363, 'prologue': 7364, 'stratton': 7365, 'southwestern': 7366, 'pomegranates': 7367, 'pharmacists': 7368, 'allies': 7369, 'avalanche': 7370, 'hernando': 7371, 'soto': 7372, 'epicenter': 7373, 'quality': 7374, 'charcter': 7375, 'chiefly': 7376, 'enormous': 7377, 'corbett': 7378, '1892': 7379, 'marino': 7380, 'tastebud': 7381, 'astroturf': 7382, 'hiemal': 7383, 'activity': 7384, 'normally': 7385, 
'beaches': 7386, "'m": 7387, 'jealous': 7388, 'prankster': 7389, 'waved': 7390, 'caboose': 7391, 'haven': 7392, 'hairless': 7393, 'volume': 7394, 'jewelry': 7395, 'pictured': 7396, 'entries': 7397, '1669': 7398, 'walden': 7399, 'puddle': 7400, 'socrates': 7401, 'obelisk': 7402, 'albee': 7403, 'regained': 7404, 'ted': 7405, 'predict': 7406, 'observing': 7407, 'cannon': 7408, 'divides': 7409, 'frenchman': 7410, 'necessary': 7411, 'stomach': 7412, 'directors': 7413, 'advisory': 7414, 'voices': 7415, 'hurdle': 7416, 'runner': 7417, 'steeplechase': 7418, 'owning': 7419, 'svhs': 7420, 'mackenzie': 7421, 'cultural': 7422, 'condemn': 7423, 'pushy': 7424, 'mtv': 7425, 'sap': 7426, 'atmosphere': 7427, 'feather': 7428, 'macaroni': 7429, 'particularly': 7430, 'photoshop': 7431, 'pitched': 7432, 'nevermind': 7433, 'steering': 7434, '1842': 7435, 'westview': 7436, 'funky': 7437, 'winkerbean': 7438, 'chick': 7439, 'breathe': 7440, 'mcgwire': 7441, 'maids': 7442, 'milking': 7443, 'celtic': 7444, 'transparent': 7445, 'limelight': 7446, 'tequila': 7447, 'galliano': 7448, 'geological': 7449, 'dunk': 7450, 'massive': 7451, 'complex': 7452, 'hohenzollerns': 7453, 'snowboarding': 7454, 'stallone': 7455, 'rhinestone': 7456, 'turnkey': 7457, 'extinction': 7458, '528': 7459, 'destroyers': 7460, 'maddox': 7461, 'turner': 7462, 'joy': 7463, 'kemper': 7464, 'genome': 7465, 'coordinate': 7466, 'mapping': 7467, 'abbreviate': 7468, 'chaplin': 7469, 'uncle': 7470, 'replied': 7471, 'begun': 7472, 'writ': 7473, 'categorized': 7474, 'bourgeoisie': 7475, 'leo': 7476, 'tolstoy': 7477, 'zapper': 7478, 'interlata': 7479, 'hanks': 7480, 'dimension': 7481, 'motown': 7482, 'anymore': 7483, 'skiing': 7484, 'calgary': 7485, 'dennison': 7486, 'railways': 7487, 'drain': 7488, 'bmw': 7489, 'biologist': 7490, 'revelation': 7491, 'mauis': 7492, 'extensively': 7493, 'grown': 7494, 'pythagoras': 7495, '1927': 7496, 'revival': 7497, 'aaa': 7498, 'liability': 7499, 'lmds': 7500, 'pointsettia': 7501, 'hiking': 7502, 'graveyard': 7503, 'writers': 7504, 'smothers': 7505, 'prewett': 7506, 'panther': 7507, 'louse': 7508, 'madeira': 7509, 'travelling': 7510, 'iberian': 7511, 'mines': 7512, 'properly': 7513, 'niagra': 7514, 'turns': 7515, '36893': 7516, 'adults': 7517, 'machinery': 7518, 'fickle': 7519, 'fate': 7520, 'sinclair': 7521, 'hide': 7522, 'seek': 7523, 'annie': 7524, 'neurotic': 7525, 'duane': 7526, 'thirst': 7527, 'quencher': 7528, 'prussia': 7529, 'node': 7530, 'tiles': 7531, 'teaspoons': 7532, 'tablespoon': 7533, 'lyricist': 7534, '3rd': 7535, 'langerhans': 7536, 'sql': 7537, 'queries': 7538, 'improved': 7539, 'radioactive': 7540, 'previous': 7541, 'commonwealth': 7542, 'taxed': 7543, '1789': 7544, 'scarlet': 7545, 'sara': 7546, 'linux': 7547, 'builders': 7548, 'mainly': 7549, 'offers': 7550, 'cad': 7551, 'doorstep': 7552, 'gasoline': 7553, 'bailey': 7554, 'stinger': 7555, 'tweezers': 7556, 'europeans': 7557, 'oceania': 7558, 'slavery': 7559, 'eldercare': 7560, 'decompose': 7561, 'contributed': 7562, 'plains': 7563, 'farmers': 7564, '1800s': 7565, '1960s': 7566, '1970s': 7567, "'50s": 7568, 'effective': 7569, 'protecting': 7570, 'comprises': 7571, 'highlands': 7572, 'lowlands': 7573, 'uplands': 7574, 'marshal': 7575, 'erwin': 7576, 'rommel': 7577, 'quiz': 7578, 'vera': 7579, 'lynn': 7580, 'meet': 7581, 'pitch': 7582, 'sweeter': 7583, 'mined': 7584, 'safety': 7585, 'constitute': 7586, 'leif': 7587, 'ericson': 7588, 'baskin': 7589, 'robbins': 7590, 'starship': 7591, 'crosstalk': 7592, 'relate': 7593, 'insb': 7594, 'thickness': 7595, 
'infrared': 7596, 'detectors': 7597, 'aftra': 7598, 'sexy': 7599, 'punchbowl': 7600, 'hill': 7601, 'ukulele': 7602, 'seccession': 7603, 'jackal': 7604, 'sweden': 7605, 'finland': 7606, 'ghana': 7607, 'denied': 7608, 'andorra': 7609, 'nestled': 7610, 'wade': 7611, 'decision': 7612, 'defreeze': 7613, 'radius': 7614, 'ellipse': 7615, 'heads': 7616, '1955': 7617, 'psychologically': 7618, 'fell': 7619, 'elizabeth': 7620, 'immigration': 7621, 'laws': 7622, 'cough': 7623, 'medication': 7624, 'tesla': 7625, 'jaco': 7626, 'pastorius': 7627, 'veronica': 7628, 'mig': 7629, 'khaki': 7630, 'chino': 7631, 'infinity': 7632, 'alloy': 7633, 'estuary': 7634, 'mevacor': 7635, 'achievement': 7636, '2001': 7637, 'odyssey': 7638, 'renaud': 7639, 'percival': 7640, 'lovell': 7641, 'rocket': 7642, 'surveyor': 7643, 'westernmost': 7644, 'abolitionists': 7645, 'tenderness': 7646, 'ruckus': 7647, 'insisted': 7648, 'clarabell': 7649, 'patrons': 7650, 'stonewall': 7651, 'greenwich': 7652, 'confucius': 7653, 'snack': 7654, 'ridges': 7655, 'tatiana': 7656, 'estonia': 7657, 'burroughs': 7658, 'chickadee': 7659, 'patients': 7660, 'senses': 7661, 'develops': 7662, 'kickoff': 7663, 'climbs': 7664, 'paleontologist': 7665, 'currently': 7666, 'captive': 7667, 'nautilus': 7668, 'rush': 7669, 'homeostasis': 7670, 'pies': 7671, 'wound': 7672, 'manufacturing': 7673, 'throwing': 7674, 'bandleader': 7675, 'cowrote': 7676, 'tisket': 7677, 'tasket': 7678, 'registers': 7679, 'trademarks': 7680, 'osmosis': 7681, 'joke': 7682, 'ancestral': 7683, 'overlooking': 7684, 'hyde': 7685, 'douglas': 7686, 'mcarthur': 7687, 'recalled': 7688, 'deadly': 7689, 'sins': 7690, 'formation': 7691, 'injuries': 7692, 'recreational': 7693, 'skating': 7694, 'lasts': 7695, 'sabres': 7696, 'impenetrable': 7697, 'fortifications': 7698, '95': 7699, 'polka': 7700, 'gran': 7701, 'bernardo': 7702, 'cuckquean': 7703, 'factors': 7704, 'teen': 7705, 'spartanburg': 7706, 'imitations': 7707, 'jellies': 7708, 'rca': 7709, 'dice': 7710, 'olive': 7711, 'oyl': 7712, 'dragonflies': 7713, 'boycotted': 7714, 'leslie': 7715, 'hornby': 7716, 'mahal': 7717, 'distinctive': 7718, 'palmiped': 7719, '139': 7720, 'papal': 7721, 'goulash': 7722, 'parachute': 7723, 'sub': 7724, 'saharan': 7725, 'spartacus': 7726, 'gladiator': 7727, 'supports': 7728, 'badaling': 7729, 'turret': 7730, 'drag': 7731, 'currents': 7732, 'shetland': 7733, 'orkney': 7734, 'ugly': 7735, 'duckling': 7736, 'tel': 7737, 'aviv': 7738, 'crossed': 7739, 'slits': 7740, 'castles': 7741, 'accommodate': 7742, 'aging': 7743, 'freckles': 7744, 'cos': 7745, 'cob': 7746, 'ct': 7747, 'psychology': 7748, 'values': 7749, 'motorcycle': 7750, 'bodies': 7751, 'visited': 7752, 'succeeded': 7753, 'nikita': 7754, 'chosen': 7755, 'chiefs': 7756, 'chef': 7757, 'coddle': 7758, 'bails': 7759, 'wicket': 7760, 'piles': 7761, 'bernini': 7762, 'bristol': 7763, 'dial': 7764, 'trainer': 7765, 'tungsten': 7766, 'quebec': 7767, 'buffett': 7768, 'concert': 7769, 'camden': 7770, 'stamps': 7771, '1st': 7772, 'sao': 7773, 'paulo': 7774, 'boc': 7775, 'boxcars': 7776, 'bestowed': 7777, 'figs': 7778, 'ripe': 7779, 'thee': 7780, 'sicilian': 7781, 'accused': 7782, 'janurary': 7783, 'billionth': 7784, 'crayon': 7785, 'crayola': 7786, 'hydroelectric': 7787, 'highways': 7788, 'binomial': 7789, 'coefficients': 7790, 'birthdate': 7791, 'suzy': 7792, 'montana': 7793, 'ussr': 7794, 'dissolved': 7795, 'edo': 7796, 'distilling': 7797, 'silence': 7798, 'lambs': 7799, 'napolean': 7800, 'jena': 7801, 'auerstadt': 7802, 'angelus': 7803, 'orbit': 7804, 'capture': 7805, 
'retirement': 7806, 'jerk': 7807, 'urgent': 7808, 'fury': 7809, 'robbers': 7810, 'nevil': 7811, 'shute': 7812, 'doomed': 7813, 'survivors': 7814, 'newsmen': 7815, 'warlock': 7816, 'forehead': 7817, 'softest': 7818, 'temperance': 7819, 'advocate': 7820, 'wielded': 7821, 'hatchet': 7822, 'saloons': 7823, 'fiji': 7824, 'cecum': 7825, 'volleyball': 7826, 'baryshnikov': 7827, 'normans': 7828, 'galloping': 7829, 'gourmet': 7830, 'nutrients': 7831, 'ninjitsu': 7832, 'kung': 7833, 'fu': 7834, 'prisoners': 7835, 'lobsters': 7836, 'wolverine': 7837, 'habits': 7838, 'fix': 7839, 'squeaky': 7840, 'thompson': 7841, 'flood': 7842, 'mosquitoes': 7843, 'bubblegum': 7844, 'carpet': 7845, 'wembley': 7846, 'sci': 7847, 'fi': 7848, 'peloponnesian': 7849, 'extremes': 7850, 'swims': 7851, 'tide': 7852, 'ebb': 7853, 'tannins': 7854, 'cheerios': 7855, 'durante': 7856, 'burst': 7857, 'commercials': 7858, 'delicacy': 7859, 'indelicately': 7860, 'pickled': 7861, '1916': 7862, 'jung': 7863, 'noodle': 7864, 'factory': 7865, 'hamilton': 7866, 'fahrenheit': 7867, 'centigrade': 7868, 'oyster': 7869, 'derived': 7870, 'biritch': 7871, 'whist': 7872, 'ado': 7873, 'collective': 7874, 'noun': 7875, 'traits': 7876, 'capricorns': 7877, 'concerning': 7878, 'custody': 7879, 'campaign': 7880, 'invention': 7881, 'conservation': 7882, 'impossible': 7883, 'ranking': 7884, 'roles': 7885, 'streetcar': 7886, 'physically': 7887, 'subject': 7888, 'mast': 7889, 'seafarers': 7890, 'kindergarten': 7891, 'mechanical': 7892, 'achieves': 7893, 'speeds': 7894, 'boilermaker': 7895, 'pilgrim': 7896, 'survivor': 7897, 'dresden': 7898, 'firestorm': 7899, 'indoor': 7900, 'inferno': 7901, '111': 7902, 'flu': 7903, 'bridges': 7904, 'upstairs': 7905, 'downstairs': 7906, 'clearer': 7907, 'monsters': 7908, 'rare': 7909, 'symptoms': 7910, 'involuntary': 7911, 'movements': 7912, 'tics': 7913, 'swearing': 7914, 'incoherent': 7915, 'vocalizations': 7916, 'grunts': 7917, 'otters': 7918, 'finn': 7919, 'heimlich': 7920, '287': 7921, 'vasco': 7922, 'gama': 7923, 'megan': 7924, 'listing': 7925, 'showtimes': 7926, 'montenegro': 7927, 'transistors': 7928, 'hazel': 7929, 'glasgow': 7930, 'ink': 7931, 'anteater': 7932, 'gleason': 7933, 'bendix': 7934, 'planned': 7935, 'berth': 7936, 'lane': 7937, 'converting': 7938, 'floating': 7939, 'pedometer': 7940, 'thousands': 7941, 'speaker': 7942, 'titans': 7943, 'suspect': 7944, 'clue': 7945, 'commentary': 7946, 'deconstructionism': 7947, 'lenny': 7948, 'bruce': 7949, 'arrested': 7950, 'returned': 7951, 'fraudulent': 7952, 'airliners': 7953, 'gliding': 7954, 'reflectors': 7955, 'sweetheart': 7956, 'darla': 7957, 'saline': 7958, 'cooling': 7959, 'justify': 7960, 'emergency': 7961, 'decrees': 7962, 'imprisoning': 7963, 'opponents': 7964, 'vesting': 7965, 'visiting': 7966, 'duvalier': 7967, 'attorney': 7968, 'ordered': 7969, 'alcatraz': 7970, 'congo': 7971, 'text': 7972, 'internet2': 7973, 'foreign': 7974, 'financial': 7975, 'button': 7976, 'gills': 7977, 'dubai': 7978, 'concrete': 7979, 'remembrance': 7980, 'kubla': 7981, 'khan': 7982, 'islam': 7983, 'maggio': 7984, 'zebras': 7985, 'considering': 7986, 'antonia': 7987, 'shimerda': 7988, 'farm': 7989, 'bucher': 7990, 'infatuation': 7991, 'genie': 7992, 'conjured': 7993, 'nancy': 7994, 'chuck': 7995, 'b12': 7996, 'owed': 7997, 'illegally': 7998, 'exact': 7999, 'sunset': 8000, 'particular': 8001, 'picasso': 8002, 'vocals': 8003, 'karnak': 8004, 'rowing': 8005, 'queensland': 8006, 'poing': 8007, 'carelessness': 8008, 'carefreeness': 8009, 'afflict': 8010, 'flash': 8011, 'tyvek': 
8012, 'zoonose': 8013, 'gunboat': 8014, 'pebbles': 8015, 'sinemet': 8016, 'selleck': 8017, 'tylo': 8018, 'volkswagen': 8019, 'natalie': 8020, 'audrey': 8021, 'sprouts': 8022, 'freeway': 8023, 'construction': 8024, 'louise': 8025, 'fletcher': 8026, 'stronger': 8027, 'vitreous': 8028, 'technology': 8029, 'cellulose': 8030, 'combatting': 8031, 'discontent': 8032, 'fang': 8033, 'tooth': 8034, 'pookie': 8035, 'burns': 8036, 'leftovers': 8037, 'imperial': 8038, 'initial': 8039, 'whiskers': 8040, 'saratoga': 8041, 'eliminates': 8042, 'germs': 8043, 'mildew': 8044, 'bullheads': 8045, 'feeding': 8046, 'pigeons': 8047, 'piazza': 8048, 'studio': 8049, 'bateau': 8050, 'lavoir': 8051, 'montmartre': 8052, 'cid': 8053, 'napalm': 8054, 'yohimbine': 8055, 'drafted': 8056, 'builder': 8057, 'ribbon': 8058, 'malls': 8059, 'decorations': 8060, 'burma': 8061, 'collector': 8062, 'johnsons': 8063, 'biographer': 8064, 'erotic': 8065, 'shortage': 8066, 'keeping': 8067, 'roads': 8068, 'hummingbird': 8069, 'ostrich': 8070, 'missionary': 8071, 'researches': 8072, '1857': 8073, 'pregnancies': 8074, 'methods': 8075, 'regulate': 8076, 'monopolies': 8077, 'denote': 8078, 'boomer': 8079, 'ferret': 8080, 'steepest': 8081, 'streets': 8082, 'giants': 8083, 'twelve': 8084, '90': 8085, 'entering': 8086, 'constantly': 8087, 'sweaty': 8088, 'sine': 8089, 'socioeconomic': 8090, 'ignores': 8091, 'friends': 8092, 'mccall': 8093, 'cruise': 8094, 'kathie': 8095, 'gifford': 8096, 'jenna': 8097, 'bras': 8098, 'cawdor': 8099, 'glamis': 8100, 'blair': 8101, 'horsemen': 8102, 'apocalypse': 8103, 'geckos': 8104, 'watts': 8105, 'kilowatt': 8106, 'jennifer': 8107, 'healer': 8108, 'inspirational': 8109, 'miracles': 8110, 'jinnah': 8111, 'esquire': 8112, 'hang': 8113, 'intranet': 8114, 'harold': 8115, 'stassen': 8116, 'caps': 8117, 'tramped': 8118, 'youth': 8119, 'noah': 8120, 'ark': 8121, 'mounted': 8122, 'guerrilla': 8123, 'coleman': 8124, 'younger': 8125, 'ridden': 8126, 'plagued': 8127, 'choice': 8128, 'height': 8129, '1925': 8130, 'pelt': 8131, 'psychiatric': 8132, 'sessions': 8133, 'thrillers': 8134, 'cortez': 8135, 'parthenon': 8136, '1895': 8137, 'wells': 8138, 'argonauts': 8139, 'dolphin': 8140, 'funded': 8141, 'elders': 8142, 'cricketer': 8143, '1898': 8144, 'dip': 8145, 'fries': 8146, 'adjacent': 8147, 'corridors': 8148, 'pentagon': 8149, 'juan': 8150, 'playwright': 8151, 'sucks': 8152, 'barbershop': 8153, 'beany': 8154, 'cecil': 8155, 'sailed': 8156, 'flora': 8157, 'pampas': 8158, 'needed': 8159, 'tailoring': 8160, 'bordering': 8161, 'due': 8162, 'morris': 8163, 'bishop': 8164, 'becomes': 8165, 'boarders': 8166, 'ekg': 8167, 'lends': 8168, 'surroundings': 8169, 'yearly': 8170, 'specimen': 8171, 'basidiomycetes': 8172, 'faults': 8173, 'asiento': 8174, 'appropriate': 8175, 'yom': 8176, 'kippur': 8177, 'mclean': 8178, 'laments': 8179, 'buddy': 8180, 'holly': 8181, 'srpska': 8182, 'krajina': 8183, 'supplier': 8184, 'cannabis': 8185, 'pergament': 8186, 'esa': 8187, 'pekka': 8188, 'crackle': 8189, 'locking': 8190, 'brakes': 8191, 'magenta': 8192, 'apache': 8193, 'hormone': 8194, 'isolationist': 8195, 'fellatio': 8196, 'characteristics': 8197, 'contribute': 8198, '1815': 8199, 'mitty': 8200, 'portraying': 8201, 'cartoonist': 8202, 'jets': 8203, 'vapor': 8204, 'childbirth': 8205, 'honda': 8206, 'ashen': 8207, 'eidologist': 8208, 'moderated': 8209, 'cohan': 8210, 'dandy': 8211, 'philebus': 8212, 'ingredients': 8213, 'proliferation': 8214, 'theresa': 8215, 'bless': 8216, 'sneeze': 8217, 'consumption': 8218, 'tire': 8219, 'spin': 8220, 
'slows': 8221, 'cartoondom': 8222, 'pluribus': 8223, 'unum': 8224, 'tft': 8225, 'dual': 8226, 'scan': 8227, 'oath': 8228, 'paradise': 8229, '47': 8230, 'cookie': 8231, 'ny': 8232, 'bang': 8233, 'koran': 8234, 'heptagon': 8235, 'wasn': 8236, 'anglicans': 8237, 'adventours': 8238, 'tours': 8239, 'becket': 8240, 'barbary': 8241, 'multicultural': 8242, 'multilingual': 8243, 'climbed': 8244, 'mt': 8245, 'photosynthesis': 8246, 'projects': 8247, '8th': 8248, 'links': 8249, 'piccadilly': 8250, 'pocket': 8251, 'billiards': 8252, 'satirized': 8253, 'countinghouse': 8254, 'counting': 8255, 'shifting': 8256, 'rom': 8257, 'headaches': 8258, 'locations': 8259, 'stained': 8260, 'window': 8261, 'version': 8262, 'commonplace': 8263, 'masons': 8264, 'enforce': 8265, 'daisy': 8266, 'moses': 8267, 'menstruation': 8268, 'makeup': 8269, 'capitalizes': 8270, 'pronoun': 8271, 'agra': 8272, 'mammoth': 8273, 'jfk': 8274, 'witness': 8275, 'hearings': 8276, 'stuck': 8277, 'friendly': 8278, 'basic': 8279, 'strokes': 8280, 'sen': 8281, 'everett': 8282, 'dirkson': 8283, "'70": 8284, 'erykah': 8285, 'badu': 8286, 'pony': 8287, 'gangsters': 8288, 'clyde': 8289, 'thalia': 8290, 'suffering': 8291, 'diphallic': 8292, 'terata': 8293, 'panties': 8294, 'pacer': 8295, 'compete': 8296, 'cured': 8297, 'cumin': 8298, 'ficus': 8299, 'aurora': 8300, 'blinking': 8301, 'aimed': 8302, 'audience': 8303, 'syringe': 8304, 'medicinal': 8305, 'barton': 8306, 'bith': 8307, 'sounded': 8308, 'chiffons': 8309, 'giza': 8310, 'historically': 8311, 'completed': 8312, 'corvette': 8313, 'lump': 8314, '191': 8315, 'mcdonald': 8316, 'horologist': 8317, 'rugs': 8318, 'attendance': 8319, 'supper': 8320, 'feed': 8321, 'purina': 8322, 'chow': 8323, 'operate': 8324, 'titus': 8325, 'sellers': 8326, 'creative': 8327, 'genius': 8328, 'hustles': 8329, 'waits': 8330, 'paraguay': 8331, 'vacations': 8332, 'conditioner': 8333, 'efficiency': 8334, 'bounded': 8335, 'tasman': 8336, 'foreman': 8337, 'victim': 8338, 'dsl': 8339, 'boss': 8340, 'multitalented': 8341, 'failed': 8342, 'ayer': 8343, 'craps': 8344, 'cult': 8345, 'marcus': 8346, 'garvey': 8347, 'cultivated': 8348, 'crazy': 8349, 'cruel': 8350, 'theatrical': 8351, 'roaring': 8352, 'forties': 8353, 'mack': 8354, 'sennett': 8355, 'lifetime': 8356, 'kilvington': 8357, 'compound': 8358, 'levine': 8359, 'hispaniola': 8360, 'powder': 8361, 'lotion': 8362, 'smell': 8363, 'janis': 8364, 'brandt': 8365, 'peruvian': 8366, 'mummified': 8367, 'pizarro': 8368, 'mackinaw': 8369, 'somebody': 8370, 'kyriakos': 8371, 'theotokopoulos': 8372, 'englishwoman': 8373, 'autry': 8374, 'regards': 8375, 'builds': 8376, 'odor': 8377, 'stated': 8378, 'rulebook': 8379, 'auh2o': 8380, 'debt': 8381, 'claiming': 8382, 'bankruptcy': 8383, 'shakespearean': 8384, 'shylock': 8385, 'entertainment': 8386, 'roosters': 8387, 'maldive': 8388, 'cullions': 8389, 'popularized': 8390, 'brillo': 8391, 'pad': 8392, 'mccain': 8393, 'rifleman': 8394, 'amazons': 8395, 'lai': 8396, 'hasidic': 8397, 'refrain': 8398, 'airforce': 8399, 'poetic': 8400, 'blank': 8401, 'verse': 8402, 'pibb': 8403, 'berry': 8404, 'blackberry': 8405, 'raspberry': 8406, 'strawberry': 8407, 'benjamin': 8408, 'ruby': 8409, 'platinum': 8410, 'hawkeye': 8411, 'seine': 8412, 'freshen': 8413, 'breath': 8414, 'toothpaste': 8415, 'hijacking': 8416, 'anita': 8417, 'bryant': 8418, 'compiled': 8419, 'propellers': 8420, 'helped': 8421, 'patents': 8422, 'malawi': 8423, 'bend': 8424, 'dimaggio': 8425, 'compile': 8426, '56': 8427, 'graffiti': 8428, 'quilting': 8429, 'stored': 8430, 'exceeded': 8431, 
'sonic': 8432, 'boom': 8433, 'iran': 8434, 'contra': 8435, 'indicate': 8436, '007': 8437, 'revolutionary': 8438, 'castro': 8439, 'botanical': 8440, 'nebuchadnezzar': 8441, 'aortic': 8442, 'abdominal': 8443, 'aneurysm': 8444, 'chloroplasts': 8445, 'deere': 8446, 'tractors': 8447, 'zebulon': 8448, 'pike': 8449, 'forward': 8450, 'thinking': 8451, 'insert': 8452, 'bagels': 8453, 'boost': 8454, 'purple': 8455, 'brew': 8456, 'cherubs': 8457, 'webpage': 8458, 'cleaveland': 8459, 'cavaliers': 8460, 'monarchy': 8461, 'isn': 8462, 'added': 8463, 'quisling': 8464, 'heineken': 8465, 'puerto': 8466, 'rico': 8467, 'repossession': 8468, 'butcher': 8469, 'spine': 8470, 'aspen': 8471, 'modesto': 8472, 'galileo': 8473, 'sears': 8474, 'autism': 8475, '1900': 8476, 'labrador': 8477, 'idaho': 8478, 'vaccination': 8479, 'epilepsy': 8480, 'biosphere': 8481, 'muddy': 8482, 'bipolar': 8483, 'cholesterol': 8484, 'macintosh': 8485, 'halfway': 8486, 'poles': 8487, 'invertebrates': 8488, 'linen': 8489, 'amitriptyline': 8490, 'shaman': 8491, 'walrus': 8492, 'turkeys': 8493, 'rip': 8494, 'winkle': 8495, 'triglycerides': 8496, 'liters': 8497, 'rays': 8498, 'fibromyalgia': 8499, 'outdated': 8500, 'yugoslavia': 8501, 'milan': 8502, 'hummingbirds': 8503, 'fargo': 8504, 'moorhead': 8505, 'bats': 8506, 'bighorn': 8507, 'newborn': 8508, 'lennon': 8509, 'ladybug': 8510, 'helpful': 8511, 'amoxicillin': 8512, 'xerophytes': 8513, 'ponce': 8514, 'desktop': 8515, 'publishing': 8516, 'cryogenics': 8517, 'reefs': 8518, 'neurology': 8519, 'ellington': 8520, 'az': 8521, 'micron': 8522, 'core': 8523, 'acupuncture': 8524, 'hindenberg': 8525, 'cubs': 8526, 'perth': 8527, 'eclipse': 8528, 'unmarried': 8529, 'thunderstorms': 8530, 'abolitionist': 8531, '1859': 8532, 'fault': 8533, 'platelets': 8534, 'severance': 8535, 'archives': 8536, 'poliomyelitis': 8537, 'philosopher': 8538, 'phi': 8539, 'beta': 8540, 'nicotine': 8541, 'b1': 8542, 'radium': 8543, 'sunspots': 8544, 'colonized': 8545, 'mongolia': 8546, 'nanotechnology': 8547, '1700': 8548, 'convicts': 8549, 'populate': 8550, 'lower': 8551, 'obtuse': 8552, 'angle': 8553, 'polymers': 8554, 'mauna': 8555, 'loa': 8556, 'astronomic': 8557, 'northern': 8558, 'acetaminophen': 8559, 'milwaukee': 8560, 'atlanta': 8561, 'absorbed': 8562, 'solstice': 8563, 'supernova': 8564, 'shawnee': 8565, 'lourve': 8566, 'pluto': 8567, 'neuropathy': 8568, 'euphrates': 8569, 'cryptography': 8570, 'composed': 8571, 'ruler': 8572, 'defeated': 8573, 'waterloo': 8574, 'wal': 8575, 'mart': 8576, '35824': 8577, 'hula': 8578, 'hoop': 8579, 'pastrami': 8580, 'enquirer': 8581, 'backbones': 8582, 'olympus': 8583, 'mons': 8584, '23rd': 8585, 'defibrillator': 8586, 'abolish': 8587, 'montreal': 8588, 'towers': 8589, 'fungus': 8590, 'frequently': 8591, 'chloride': 8592, 'spots': 8593, 'influenza': 8594, 'depletion': 8595, 'sitting': 8596, 'shiva': 8597, 'stretches': 8598, 'nigeria': 8599, 'spleen': 8600, 'phenylalanine': 8601, 'legislative': 8602, 'branch': 8603, 'sonar': 8604, 'phosphorus': 8605, 'tranquility': 8606, 'bandwidth': 8607, 'parasite': 8608, 'meteorologists': 8609, 'criterion': 8610, 'binney': 8611, '1903': 8612, 'pilates': 8613, 'depth': 8614, 'dress': 8615, 'mardi': 8616, 'gras': 8617, 'pesos': 8618, 'dodgers': 8619, 'admirals': 8620, 'glenn': 8621, 'arc': 8622, 'fortnight': 8623, 'dianetics': 8624, 'ethiopia': 8625, 'janice': 8626, 'fm': 8627, 'peyote': 8628, 'esophagus': 8629, 'mortarboard': 8630, 'chunnel': 8631, 'antacids': 8632, 'pulmonary': 8633, 'quaaludes': 8634, 'naproxen': 8635, 'strep': 8636, 'drawer': 
8637, 'hybridization': 8638, 'indigo': 8639, 'barometer': 8640, 'usps': 8641, 'strike': 8642, 'hiroshima': 8643, 'bombed': 8644, 'savannah': 8645, 'strongest': 8646, 'planets': 8647, 'mussolini': 8648, 'seize': 8649, 'persia': 8650, 'cell': 8651, 'tmj': 8652, 'yak': 8653, 'isdn': 8654, 'mozart': 8655, 'semolina': 8656, 'melba': 8657, 'ursa': 8658, 'content': 8659, 'reform': 8660, 'ontario': 8661, 'ceiling': 8662, 'stimulant': 8663, 'griffith': 8664, 'champlain': 8665, 'quicksilver': 8666, 'divine': 8667, 'width': 8668, 'toto': 8669, 'thyroid': 8670, 'ciao': 8671, 'artery': 8672, 'lungs': 8673, 'faithful': 8674, 'acetic': 8675, 'moulin': 8676, 'rouge': 8677, 'atomic': 8678, 'pathogens': 8679, 'zinc': 8680, 'snails': 8681, 'ethics': 8682, 'annuity': 8683, 'turquoise': 8684, 'muscular': 8685, 'dystrophy': 8686, 'neuschwanstein': 8687, 'propylene': 8688, 'glycol': 8689, 'instant': 8690, 'polaroid': 8691, 'carcinogen': 8692, 'nepotism': 8693, 'myopia': 8694, 'comprise': 8695, 'naturally': 8696, 'occurring': 8697, 'mason': 8698, 'dixon': 8699, 'metabolism': 8700, 'cigarettes': 8701, 'semiconductors': 8702, 'tsunami': 8703, 'kidney': 8704, 'genocide': 8705, 'monastery': 8706, 'raided': 8707, 'vikings': 8708, 'coaster': 8709, 'bangers': 8710, 'mash': 8711, 'jewels': 8712, 'ulcer': 8713, 'vertigo': 8714, 'spirometer': 8715, 'sos': 8716, 'gasses': 8717, 'troposphere': 8718, 'gypsy': 8719, 'rainiest': 8720, 'patrick': 8721, 'mixed': 8722, 'refrigerator': 8723, 'schizophrenia': 8724, 'angiotensin': 8725, 'organize': 8726, 'susan': 8727, 'catskill': 8728, 'backwards': 8729, 'forwards': 8730, 'pediatricians': 8731, 'bentonville': 8732, 'compounded': 8733, 'capers': 8734, 'antigen': 8735, 'luxembourg': 8736, 'venezuela': 8737, 'polymer': 8738, 'bulletproof': 8739, 'vests': 8740, 'thermometer': 8741, 'precious': 8742, 'pure': 8743, 'fluorescent': 8744, 'bulb': 8745, 'rheumatoid': 8746, 'arthritis': 8747, 'rowe': 8748, 'cerebral': 8749, 'palsy': 8750, 'shepard': 8751, 'historic': 8752, 'pectin': 8753, 'bio': 8754, 'diversity': 8755, '22nd': 8756, 'zambia': 8757, 'october': 8758, 'coli': 8759}
###Markdown
Data Preprocessing Define `clean_doc` function
###Code
# Imports needed by clean_doc (they may already be loaded earlier in the notebook)
import re
from string import punctuation
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# remove remaining tokens that are not alphabetic
# tokens = [word for word in tokens if word.isalpha()]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
###Output
_____no_output_____
###Markdown
Develop VocabularyA part of preparing text for text classification involves defining and tailoring the vocabulary of words supported by the model. **We can do this by loading all of the documents in the dataset and building a set of words.**The larger the vocabulary, the more sparse the representation of each word or document. So, we may decide to support all of these words, or perhaps discard some. The final chosen vocabulary can then be saved to a file for later use, such as filtering words in new documents in the future. We can use `Counter` class and create an instance called `vocab` as follows:
###Code
from collections import Counter
vocab = Counter()
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
# Example
add_doc_to_vocab(train_x, vocab)
print(len(vocab))
vocab
vocab.items()
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
# Separate the sentences and the labels for training and testing
train_x = list(corpus[corpus.split=='train'].sentence)
train_y = np.array(corpus[corpus.split=='train'].label)
print(len(train_x))
print(len(train_y))
test_x = list(corpus[corpus.split=='test'].sentence)
test_y = np.array(corpus[corpus.split=='test'].label)
print(len(test_x))
print(len(test_y))
# Instantiate a vocab object
vocab = Counter()
vocab = add_doc_to_vocab(train_x, vocab)
print(len(train_x), len(test_x))
print(len(vocab))
###Output
5452
5452
500
500
5452 500
6840
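###Markdown
As noted above, we may decide to discard some of the rarer words rather than keep the full vocabulary, and save the final vocabulary to a file for later use. A minimal sketch of both steps (not part of the original notebook; the minimum-occurrence threshold and the file name are illustrative choices):
###Code
# Keep only tokens that occur at least min_occurrence times, then persist the
# trimmed vocabulary so it can be reused to filter new documents later.
min_occurrence = 2
trimmed_vocab = [token for token, count in vocab.items() if count >= min_occurrence]
print('Tokens kept after trimming:', len(trimmed_vocab))
with open('vocab.txt', 'w') as f:
    f.write('\n'.join(trimmed_vocab))
###Output
_____no_output_____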
###Markdown
Bag-of-Words RepresentationOnce we have defined our vocab from the training data, we need to **convert each document into a representation that we can feed to a neural network model.**As a reminder, here is a summary of what we will do:- extract features from the text so the text input can be used with ML algorithms like neural networks- we do this by converting the text into a vector representation. The larger the vocab, the longer the representation.- we will score the words in a document inside the vector. These scores are placed in the corresponding location in the vector representation.
###Code
def doc_to_line(doc):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join([token for token in tokens])
return line
def clean_docs(docs):
lines = []
for doc in docs:
line = doc_to_line(doc)
lines.append(line)
return lines
print(train_x[:5])
clean_sentences = clean_docs(train_x[:5])
print()
print( clean_sentences)
###Output
['how did serfdom develop in and then leave russia ?', 'what films featured the character popeye doyle ?', "how can i find a list of celebrities ' real names ?", 'what fowl grabs the spotlight after the chinese year of the monkey ?', 'what is the full form of .com ?']
['serfdom develop leav russia', 'film featur charact popey doyl', 'find list celebr real name', 'fowl grab spotlight chines year monkey', 'full form com']
###Markdown
Bag-of-Words VectorsWe will use the **Keras API** to **convert sentences to encoded document vectors**. Although the `Tokenizer` class from TF Keras provides cleaning and vocab definition, it's better to do this ourselves so that we know exactly what we are doing.
###Code
def create_tokenizer(sentences):
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(sentences)
    return tokenizer
###Output
_____no_output_____
###Markdown
This process determines a consistent way to **convert each document to a fixed-length vector** whose length is the total number of words in the vocabulary `vocab`. Documents can then be encoded using the Tokenizer by calling `texts_to_matrix()`. The function takes both a list of documents to encode and an encoding mode, which is the method used to score words in the document. Here we specify **freq** to score words based on their frequency in the document. This can be used to encode the loaded training and test data, for example:`Xtrain = tokenizer.texts_to_matrix(train_docs, mode='freq')``Xtest = tokenizer.texts_to_matrix(test_docs, mode='freq')`
###Code
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
def create_tokenizer(sentences):
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
return tokenizer
# Separate the sentences and the labels for training and testing
train_x = list(corpus[corpus.split=='train'].sentence)
train_y = np.array(corpus[corpus.split=='train'].label)
print('train_x size: ', len(train_x))
print('train_y size: ', len(train_y))
test_x = list(corpus[corpus.split=='test'].sentence)
test_y = np.array(corpus[corpus.split=='test'].label)
print('test_x size: ', len(test_x))
print('test_y size: ', len(test_y))
# Instantiate a vocab object
vocab = Counter()
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
print('The number of vocab: ', len(vocab))
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# Define the tokenizer
tokenizer = create_tokenizer(train_x)
# encode data using freq mode
Xtrain = tokenizer.texts_to_matrix(train_x, mode='freq')
Xtest = tokenizer.texts_to_matrix(test_x, mode='freq')
###Output
train_x size: 5452
train_y size: 5452
test_x size: 500
test_y size: 500
The number of vocab: 6840
###Markdown
Training and Testing the Model CNN ModelNow, we will build a Convolutional Neural Network (CNN) model to classify the encoded questions into one of six categories.The model takes inspiration from `Convolutional Neural Networks for Sentence Classification` by *Yoon Kim*.We will define our CNN model as follows:- One Conv1D layer with 100 filters, kernel size 5, and relu activation function;- One MaxPool layer with pool size = 2;- One Dropout layer after the flattened output;- Optimizer: Adam (a strong default choice for most problems)- Loss function: sparse categorical cross-entropy (suited for multi-class classification with integer labels)**Note**: - The whole purpose of dropout layers is to tackle the problem of over-fitting and to introduce generalization to the model. Hence it is advisable to keep the dropout parameter near 0.5 in hidden layers. - https://missinglink.ai/guides/keras/keras-conv1d-working-1d-convolutional-neural-networks-keras/
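The labels in `train_y` are assumed to already be integer class ids (0 to 5), which is what sparse categorical cross-entropy expects. If the label column instead held raw category strings, they would first need to be integer-encoded; a minimal sketch using scikit-learn (illustrative only, not part of the original notebook):
###Code
# Hypothetical example: map string category labels to integer ids 0..5.
from sklearn.preprocessing import LabelEncoder

example_labels = ['LOC', 'NUM', 'HUM', 'LOC', 'DESC', 'ENTY', 'ABBR']
encoder = LabelEncoder()
encoded = encoder.fit_transform(example_labels)
print(encoded)           # [4 5 3 4 1 2 0] -- classes are sorted alphabetically
print(encoder.classes_)  # mapping from integer id back to category name
###Output
_____no_output_____
###Markdown
With integer labels in hand, we can define the training function below.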
###Code
def train_cnn(train_x, train_y, batch_size = 50, epochs = 10, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=100, kernel_size=5, activation='relu', input_shape=(n_words,1)),
tf.keras.layers.MaxPool1D(2),
tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=6, activation='softmax')
])
model.compile( loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose)
return model
###Output
_____no_output_____
###Markdown
Train and Test the Model
###Code
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
def create_tokenizer(sentences):
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
return tokenizer
def train_cnn(train_x, train_y, batch_size = 50, epochs = 10, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=100, kernel_size=5, activation='relu', input_shape=(n_words,1)),
tf.keras.layers.MaxPool1D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=6, activation='softmax')
])
model.compile( loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose)
return model
# Separate the sentences and the labels for training and testing
train_x = list(corpus[corpus.split=='train'].sentence)
train_y = np.array(corpus[corpus.split=='train'].label)
print('train_x size: ', len(train_x))
print('train_y size: ', len(train_y))
test_x = list(corpus[corpus.split=='test'].sentence)
test_y = np.array(corpus[corpus.split=='test'].label)
print('test_x size: ', len(test_x))
print('test_y size: ', len(test_y))
# Instantiate a vocab object
vocab = Counter()
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
print('The number of vocab: ', len(vocab))
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# Define the tokenizer
tokenizer = create_tokenizer(train_x)
# encode data using freq mode
Xtrain = tokenizer.texts_to_matrix(train_x, mode='freq')
Xtest = tokenizer.texts_to_matrix(test_x, mode='freq')
Xtrain = np.reshape(Xtrain, (Xtrain.shape[0], Xtrain.shape[1], 1))
Xtest = np.reshape(Xtest, (Xtest.shape[0], Xtest.shape[1], 1))
# train the model
model = train_cnn(Xtrain, train_y)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
model.summary()
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d_2 (Conv1D) (None, 6837, 100) 600
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 3418, 100) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 341800) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 341800) 0
_________________________________________________________________
dense_2 (Dense) (None, 6) 2050806
=================================================================
Total params: 2,051,406
Trainable params: 2,051,406
Non-trainable params: 0
_________________________________________________________________
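###Markdown
A quick check of the parameter counts above: the Conv1D layer has (5 × 1 + 1) × 100 = 600 parameters (kernel size 5, one input channel, 100 filters, plus one bias per filter), and its output length is 6841 - 5 + 1 = 6837 because the bag-of-words input has 6841 positions (the 6840-word vocabulary plus the Tokenizer's reserved index 0). After flattening (3418 × 100 = 341800 values), the final Dense layer has 341800 × 6 + 6 = 2050806 parameters.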
###Markdown
Comparing the Word Scoring Methods When we use the `texts_to_matrix()` function, we are given 4 different methods for scoring words:- `binary`: words are marked as 1 (present) or 0 (absent)- `count`: words are counted based on their number of occurrences (integer)- `tfidf`: words are scored based on their frequency of occurrence in their own document, but are penalized if they are common across all documents- `freq`: words are scored based on their frequency of occurrence in their own document
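Below is a minimal sketch (added for illustration, not part of the original experiment) showing how the four modes score a tiny two-document toy corpus; the variable names are made up:
###Code
# Compare the four word-scoring modes on a toy corpus.
from tensorflow.keras.preprocessing.text import Tokenizer
toy_docs = ['the cat sat', 'the cat sat on the mat']
toy_tokenizer = Tokenizer()
toy_tokenizer.fit_on_texts(toy_docs)
for toy_mode in ['binary', 'count', 'tfidf', 'freq']:
    print(toy_mode)
    print(toy_tokenizer.texts_to_matrix(toy_docs, mode=toy_mode))
###Output
_____no_output_____
###Markdown
The experiment below runs the same comparison on the full question dataset.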
###Code
# prepare bag-of-words encoding of docs
def prepare_data(train_docs, test_docs, mode):
# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)
# encode training data set
Xtrain = tokenizer.texts_to_matrix(train_docs, mode=mode)
# encode test data set
Xtest = tokenizer.texts_to_matrix(test_docs, mode=mode)
return Xtrain, Xtest
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
# prepare bag-of-words encoding of docs
def prepare_data(train_docs, test_docs, mode):
# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)
# encode training data set
Xtrain = tokenizer.texts_to_matrix(train_docs, mode=mode)
# encode test data set
Xtest = tokenizer.texts_to_matrix(test_docs, mode=mode)
return Xtrain, Xtest
def train_cnn(train_x, train_y, batch_size = 50, epochs = 10, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=100, kernel_size=5, activation='relu', input_shape=(n_words,1)),
tf.keras.layers.MaxPool1D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=6, activation='softmax')
])
model.compile( loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose)
return model
# Separate the sentences and the labels for training and testing
train_x = list(corpus[corpus.split=='train'].sentence)
train_y = np.array(corpus[corpus.split=='train'].label)
print('train_x size: ', len(train_x))
print('train_y size: ', len(train_y))
test_x = list(corpus[corpus.split=='test'].sentence)
test_y = np.array(corpus[corpus.split=='test'].label)
print('test_x size: ', len(test_x))
print('test_y size: ', len(test_y))
# Run Experiment of 4 different modes
modes = ['binary', 'count', 'tfidf', 'freq']
results = pd.DataFrame()
for mode in modes:
print('mode: ', mode)
# Instantiate a vocab object
vocab = Counter()
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# encode data using freq mode
Xtrain, Xtest = prepare_data(train_x, test_x, mode)
Xtrain = np.reshape(Xtrain, (Xtrain.shape[0], Xtrain.shape[1], 1))
Xtest = np.reshape(Xtest, (Xtest.shape[0], Xtest.shape[1], 1))
# train the model
model = train_cnn(Xtrain, train_y)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
results[mode] = [acc*100]
print()
print(results)
results
###Output
_____no_output_____ |
how-to-use-azureml/track-and-monitor-experiments/manage-runs/manage-runs.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Manage runs Table of contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Start, monitor and complete a run](Start,-monitor-and-complete-a-run)1. [Add properties and tags](Add-properties-and-tags)1. [Query properties and tags](Query-properties-and-tags)1. [Start and query child runs](Start-and-query-child-runs)1. [Cancel or fail runs](Cancel-or-fail-runs)1. [Reproduce a run](Reproduce-a-run)1. [Next steps](Next-steps) IntroductionWhen you're building enterprise-grade machine learning models, it is important to track, organize, monitor and reproduce your training runs. For example, you might want to trace the lineage behind a model deployed to production, and re-run the training experiment to troubleshoot issues. This notebook shows examples of how to use Azure Machine Learning services to manage your training runs. SetupIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't already to establish your connection to the AzureML Workspace. Also, if you're new to Azure ML, we recommend that you go through [the tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-train-models-with-aml) first to learn the basic concepts.Let's first import required packages, check Azure ML SDK version, connect to your workspace and create an Experiment to hold the runs.
###Code
import azureml.core
from azureml.core import Workspace, Experiment, Run
from azureml.core import ScriptRunConfig
print(azureml.core.VERSION)
ws = Workspace.from_config()
exp = Experiment(workspace=ws, name="explore-runs")
###Output
_____no_output_____
###Markdown
Start, monitor and complete a runA run is a unit of execution, typically used to train a model, but it can serve other purposes as well, such as loading or transforming data. Runs are tracked by the Azure ML service and can be instrumented with metrics and artifact logging.The simplest way to start a run in your interactive Python session is to call the *Experiment.start_logging* method. You can then log metrics from within the run.
###Code
notebook_run = exp.start_logging()
notebook_run.log(name="message", value="Hello from run!")
print(notebook_run.get_status())
###Output
_____no_output_____
###Markdown
Use the *get_status* method to get the status of the run.
###Code
print(notebook_run.get_status())
###Output
_____no_output_____
###Markdown
Also, you can simply display the run object to get a link to its details in the Azure Portal.
###Code
notebook_run
###Output
_____no_output_____
###Markdown
Method *get_details* gives you more details on the run.
###Code
notebook_run.get_details()
###Output
_____no_output_____
###Markdown
Use the *complete* method to end the run.
###Code
notebook_run.complete()
print(notebook_run.get_status())
###Output
_____no_output_____
###Markdown
You can also use Python's *with...as* pattern. The run will automatically complete when it moves out of scope, so you don't need to complete it manually.
###Code
with exp.start_logging() as notebook_run:
notebook_run.log(name="message", value="Hello from run!")
print("Is it still running?",notebook_run.get_status())
print("Has it completed?",notebook_run.get_status())
###Output
_____no_output_____
###Markdown
Next, let's look at submitting a run as a separate Python process. To keep the example simple, we submit the run on the local computer. Other targets could include remote VMs and Machine Learning Compute clusters in your Azure ML Workspace.We use the *hello.py* script as an example. To perform logging, we need to get a reference to the Run instance from within the scope of the script. We do this using the *Run.get_context* method.
###Code
!more hello.py
###Output
_____no_output_____
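###Markdown
The contents of *hello.py* are not reproduced in this notebook output. A minimal sketch of what such a script could look like, using only the *Run.get_context* and *log* calls described above (the actual sample script may differ):
###Code
# Hypothetical sketch of hello.py: get the run context and log a message.
from azureml.core import Run
run = Run.get_context()
run.log(name="message", value="Hello from the submitted script!")
print("Logged a message from hello.py")
###Output
_____no_output_____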
###Markdown
Submitted runs take a snapshot of the *source_directory* to use when executing. You can control which files are available to the run by using an *.amlignore* file.
###Code
%%writefile .amlignore
# Exclude the outputs directory automatically created by our earlier runs.
/outputs
###Output
_____no_output_____
###Markdown
Let's submit the run on the local computer. A standard pattern in the Azure ML SDK is to create a run configuration, and then use the *Experiment.submit* method.
###Code
run_config = ScriptRunConfig(source_directory='.', script='hello.py')
local_script_run = exp.submit(run_config)
###Output
_____no_output_____
###Markdown
You can view the status of the run as before
###Code
print(local_script_run.get_status())
local_script_run
###Output
_____no_output_____
###Markdown
Submitted runs have additional log files you can inspect using *get_details_with_logs*.
###Code
local_script_run.get_details_with_logs()
###Output
_____no_output_____
###Markdown
Use the *wait_for_completion* method to block local execution until the remote run is complete.
###Code
local_script_run.wait_for_completion(show_output=True)
print(local_script_run.get_status())
###Output
_____no_output_____
###Markdown
Add properties and tagsProperties and tags help you organize your runs. You can use them to describe, for example, who authored the run, what the results were, and what machine learning approach was used. And as you'll later learn, properties and tags can be used to query the history of your runs to find the important ones.For example, let's add an "author" property to the run:
###Code
local_script_run.add_properties({"author":"azureml-user"})
print(local_script_run.get_properties())
###Output
_____no_output_____
###Markdown
Properties are immutable. Once you assign a value it cannot be changed, making them useful as a permanent record for auditing purposes.
###Code
try:
local_script_run.add_properties({"author":"different-user"})
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Tags on the other hand can be changed:
###Code
local_script_run.tag("quality", "great run")
print(local_script_run.get_tags())
local_script_run.tag("quality", "fantastic run")
print(local_script_run.get_tags())
###Output
_____no_output_____
###Markdown
You can also add a simple string tag. It appears in the tag dictionary with a value of None.
###Code
local_script_run.tag("worth another look")
print(local_script_run.get_tags())
###Output
_____no_output_____
###Markdown
Query properties and tagsYou can query runs within an experiment that match specific properties and tags.
###Code
list(exp.get_runs(properties={"author":"azureml-user"},tags={"quality":"fantastic run"}))
list(exp.get_runs(properties={"author":"azureml-user"},tags="worth another look"))
###Output
_____no_output_____
###Markdown
Start and query child runs You can use child runs to group together related runs, for example different hyperparameter tuning iterations.Let's use the *hello_with_children* script to create a batch of 5 child runs from within a submitted run.
###Code
!more hello_with_children.py
run_config = ScriptRunConfig(source_directory='.', script='hello_with_children.py')
local_script_run = exp.submit(run_config)
local_script_run.wait_for_completion(show_output=True)
print(local_script_run.get_status())
###Output
_____no_output_____
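###Markdown
The *hello_with_children.py* script itself is not shown in this output. A plausible sketch, using only the *Run.get_context* and *child_run* APIs demonstrated in this notebook (the actual sample script may create the batch differently):
###Code
# Hypothetical sketch of hello_with_children.py: create 5 child runs and log to each.
from azureml.core import Run
run = Run.get_context()
for i in range(5):
    with run.child_run() as child:
        child.log(name="Hello from child run", value=i)
###Output
_____no_output_____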
###Markdown
You can start child runs one by one. Note that this is less efficient than submitting a batch of runs, because each creation results in a network call.Child runs too complete automatically as they move out of scope.
###Code
with exp.start_logging() as parent_run:
for c,count in enumerate(range(5)):
with parent_run.child_run() as child:
child.log(name="Hello from child run", value=c)
###Output
_____no_output_____
###Markdown
To query the child runs belonging to a specific parent, use the *get_children* method.
###Code
list(parent_run.get_children())
###Output
_____no_output_____
###Markdown
Cancel or fail runsSometimes, you realize that the run is not performing as intended, and you want to cancel it instead of waiting for it to complete.As an example, let's create a Python script with a delay in the middle.
###Code
!more hello_with_delay.py
###Output
_____no_output_____
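###Markdown
Again, the script contents are not reproduced here. A minimal sketch of a script with a delay in the middle (illustrative only; the actual sample script may differ):
###Code
# Hypothetical sketch of hello_with_delay.py: do some work, pause, then finish.
import time
print("Starting work...")
time.sleep(60)  # long enough that the run can be cancelled before it completes
print("Finished work.")
###Output
_____no_output_____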
###Markdown
You can use the *cancel* method to cancel a run.
###Code
run_config = ScriptRunConfig(source_directory='.', script='hello_with_delay.py')
local_script_run = exp.submit(run_config)
print("Did the run start?",local_script_run.get_status())
local_script_run.cancel()
print("Did the run cancel?",local_script_run.get_status())
###Output
_____no_output_____
###Markdown
You can also mark an unsuccessful run as failed.
###Code
local_script_run = exp.submit(run_config)
local_script_run.fail()
print(local_script_run.get_status())
###Output
_____no_output_____
###Markdown
Reproduce a runWhen updating or troubleshooting a model deployed to production, you sometimes need to revisit the original training run that produced the model. To help you with this, the Azure ML service by default creates snapshots of your scripts at the time of run submission. You can use *restore_snapshot* to obtain a zip package of the latest snapshot of the script folder.
###Code
local_script_run.restore_snapshot(path="snapshots")
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Manage runs Table of contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Start, monitor and complete a run](Start,-monitor-and-complete-a-run)1. [Add properties and tags](Add-properties-and-tags)1. [Query properties and tags](Query-properties-and-tags)1. [Start and query child runs](Start-and-query-child-runs)1. [Cancel or fail runs](Cancel-or-fail-runs)1. [Reproduce a run](Reproduce-a-run)1. [Next steps](Next-steps) IntroductionWhen you're building enterprise-grade machine learning models, it is important to track, organize, monitor and reproduce your training runs. For example, you might want to trace the lineage behind a model deployed to production, and re-run the training experiment to troubleshoot issues. This notebook shows examples of how to use Azure Machine Learning services to manage your training runs. SetupIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't already to establish your connection to the AzureML Workspace. Also, if you're new to Azure ML, we recommend that you go through [the tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-train-models-with-aml) first to learn the basic concepts.Let's first import required packages, check Azure ML SDK version, connect to your workspace and create an Experiment to hold the runs.
###Code
import azureml.core
from azureml.core import Workspace, Experiment, Run
from azureml.core import ScriptRunConfig
print(azureml.core.VERSION)
ws = Workspace.from_config()
exp = Experiment(workspace=ws, name="explore-runs")
###Output
_____no_output_____
###Markdown
Start, monitor and complete a runA run is a unit of execution, typically to train a model, but it can serve other purposes as well, such as loading or transforming data. Runs are tracked by the Azure ML service, and can be instrumented with metrics and artifact logging. The simplest way to start a run in your interactive Python session is to call the *Experiment.start_logging* method. You can then log metrics from within the run.
###Code
notebook_run = exp.start_logging()
notebook_run.log(name="message", value="Hello from run!")
print(notebook_run.get_status())
###Output
_____no_output_____
###Markdown
Use the *get_status* method to get the status of the run.
###Code
print(notebook_run.get_status())
###Output
_____no_output_____
###Markdown
Also, you can simply display the run object to get a link to the run details in the Azure Portal
###Code
notebook_run
###Output
_____no_output_____
###Markdown
Method *get_details* gives you more details on the run.
###Code
notebook_run.get_details()
###Output
_____no_output_____
###Markdown
Use the *complete* method to end the run.
###Code
notebook_run.complete()
print(notebook_run.get_status())
###Output
_____no_output_____
###Markdown
You can also use Python's *with...as* pattern. The run will automatically complete when moving out of scope. This way you don't need to manually complete the run.
###Code
with exp.start_logging() as notebook_run:
notebook_run.log(name="message", value="Hello from run!")
print("Is it still running?",notebook_run.get_status())
print("Has it completed?",notebook_run.get_status())
###Output
_____no_output_____
###Markdown
Next, let's look at submitting a run as a separate Python process. To keep the example simple, we submit the run on the local computer. Other targets could include remote VMs and Machine Learning Compute clusters in your Azure ML Workspace. We use the *hello.py* script as an example. To perform logging, we need to get a reference to the Run instance from within the scope of the script. We do this using the *Run.get_context* method.
###Code
!more hello.py
###Output
_____no_output_____
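###Markdown
The contents of *hello.py* are not shown in this output. A minimal sketch of what such a script might contain (hypothetical, not the actual file): get a reference to the current run with *Run.get_context* and log to it.
###Code
# hypothetical sketch of a minimal script like hello.py (not the actual file)
from azureml.core import Run

run = Run.get_context()  # reference to the run this script executes in
run.log(name="message", value="Hello from the script!")  # log a simple value
###Output
_____no_output_____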
###Markdown
Let's submit the run on the local computer. A standard pattern in the Azure ML SDK is to create a run configuration, and then use the *Experiment.submit* method.
###Code
run_config = ScriptRunConfig(source_directory='.', script='hello.py')
local_script_run = exp.submit(run_config)
###Output
_____no_output_____
###Markdown
You can view the status of the run as before
###Code
print(local_script_run.get_status())
local_script_run
###Output
_____no_output_____
###Markdown
Submitted runs have additional log files you can inspect using *get_details_with_logs*.
###Code
local_script_run.get_details_with_logs()
###Output
_____no_output_____
###Markdown
Use the *wait_for_completion* method to block the local execution until the remote run is complete.
###Code
local_script_run.wait_for_completion(show_output=True)
print(local_script_run.get_status())
###Output
_____no_output_____
###Markdown
Add properties and tagsProperties and tags help you organize your runs. You can use them to describe, for example, who authored the run, what the results were, and what machine learning approach was used. And as you'll later learn, properties and tags can be used to query the history of your runs to find the important ones. For example, let's add an "author" property to the run:
###Code
local_script_run.add_properties({"author":"azureml-user"})
print(local_script_run.get_properties())
###Output
_____no_output_____
###Markdown
Properties are immutable. Once you assign a value it cannot be changed, making them useful as a permanent record for auditing purposes.
###Code
try:
local_script_run.add_properties({"author":"different-user"})
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Tags on the other hand can be changed:
###Code
local_script_run.tag("quality", "great run")
print(local_script_run.get_tags())
local_script_run.tag("quality", "fantastic run")
print(local_script_run.get_tags())
###Output
_____no_output_____
###Markdown
You can also add a simple string tag. It appears in the tag dictionary with a value of None
###Code
local_script_run.tag("worth another look")
print(local_script_run.get_tags())
###Output
_____no_output_____
###Markdown
Query properties and tagsYou can query runs within an experiment that match specific properties and tags.
###Code
list(exp.get_runs(properties={"author":"azureml-user"},tags={"quality":"fantastic run"}))
list(exp.get_runs(properties={"author":"azureml-user"},tags="worth another look"))
###Output
_____no_output_____
###Markdown
Start and query child runs You can use child runs to group together related runs, for example different hyperparameter tuning iterations. Let's use the *hello_with_children* script to create a batch of 5 child runs from within a submitted run.
###Code
!more hello_with_children.py
run_config = ScriptRunConfig(source_directory='.', script='hello_with_children.py')
local_script_run = exp.submit(run_config)
local_script_run.wait_for_completion(show_output=True)
print(local_script_run.get_status())
###Output
_____no_output_____
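###Markdown
The *hello_with_children.py* file itself is not reproduced here. A rough sketch of what it might do, assuming the batch *create_children* API (the exact file contents are an assumption):
###Code
# hypothetical sketch of a script like hello_with_children.py (not the actual file)
from azureml.core import Run

run = Run.get_context()
children = run.create_children(count=5)  # create a batch of 5 child runs at once
for i, child in enumerate(children):
    child.log(name="Hello from child run", value=i)  # log into each child
    child.complete()  # mark the child run as completed
###Output
_____no_output_____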
###Markdown
You can start child runs one by one. Note that this is less efficient than submitting a batch of runs, because each creation results in a network call. Child runs, too, complete automatically as they move out of scope.
###Code
with exp.start_logging() as parent_run:
for c,count in enumerate(range(5)):
with parent_run.child_run() as child:
child.log(name="Hello from child run", value=c)
###Output
_____no_output_____
###Markdown
To query the child runs belonging to a specific parent, use the *get_children* method.
###Code
list(parent_run.get_children())
###Output
_____no_output_____
###Markdown
Cancel or fail runsSometimes, you realize that the run is not performing as intended, and you want to cancel it instead of waiting for it to complete. As an example, let's create a Python script with a delay in the middle.
###Code
!more hello_with_delay.py
###Output
_____no_output_____
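###Markdown
The *hello_with_delay.py* script is not reproduced here either. A minimal sketch, assuming the delay is simply a sleep between two log calls (the file contents are an assumption):
###Code
# hypothetical sketch of a script like hello_with_delay.py (not the actual file)
import time
from azureml.core import Run

run = Run.get_context()
run.log(name="message", value="Starting...")
time.sleep(120)  # artificial delay so the run stays cancellable for a while
run.log(name="message", value="...finished")
###Output
_____no_output_____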
###Markdown
You can use the *cancel* method to cancel a run.
###Code
run_config = ScriptRunConfig(source_directory='.', script='hello_with_delay.py')
local_script_run = exp.submit(run_config)
print("Did the run start?",local_script_run.get_status())
local_script_run.cancel()
print("Did the run cancel?",local_script_run.get_status())
###Output
_____no_output_____
###Markdown
You can also mark an unsuccessful run as failed.
###Code
local_script_run = exp.submit(run_config)
local_script_run.fail()
print(local_script_run.get_status())
###Output
_____no_output_____
###Markdown
Reproduce a runWhen updating or troubleshooting a model deployed to production, you sometimes need to revisit the original training run that produced the model. To help you with this, the Azure ML service by default creates snapshots of your scripts at the time of run submission. You can use *restore_snapshot* to obtain a zip package of the latest snapshot of the script folder.
###Code
local_script_run.restore_snapshot(path="snapshots")
###Output
_____no_output_____ |
_notebooks/2022-03-24-Mighty-Graph.ipynb | ###Markdown
💪 Mighty Graph> A graph data structure is a collection of nodes that have data and are connected to other nodes.- toc: true - badges: true- comments: true- categories: [algorithms,graph]- hide: false Build Graph
###Code
from collections import defaultdict
class Graph:
def __init__(self,vertices):
self.vertices = vertices
self.graph = defaultdict(list)
def add_edge(self,src,dest,weight):
self.graph[src].append([src,dest,weight])
self.graph[dest].append([dest,src,weight])
def build_graph(vertices,edges):
g = Graph(vertices)
for i in edges:
g.add_edge(i[0],i[1],i[2])
return g.graph
# Building Graph with 7 Vertices and 8 Edges
vertices = 7
edges = [
[0,1,10],
[1,2,20],
[2,3,30],
[0,3,40],
[3,4,50],
[4,5,60],
[5,6,70],
[4,6,80]
]
graph = build_graph(vertices,edges)
graph
###Output
_____no_output_____
###Markdown
Has Path
###Code
def haspath(graph,src,dest,visited):
    if src == dest:
        return True
    visited[src] = True
    for conn in graph[src]:
        nbr = conn[1]
        # recurse into unvisited neighbours; any route that reaches dest is enough
        if not visited[nbr] and haspath(graph,nbr,dest,visited):
            return True
    return False
src = 0
dest = 6
visited = [False] * vertices
haspath(graph,src,dest,visited)
###Output
_____no_output_____
###Markdown
All Path
###Code
def allpath(graph,src,dest,visited,psf):
if src == dest:
print(psf)
return
visited[src] = True
for path in graph[src]:
nbr = path[1]
if not visited[nbr]:
allpath(graph,nbr,dest,visited,psf+" "+str(nbr))
visited[src] = False
src = 0
dest = 6
visited = [False] * vertices
allpath(graph,src,dest,visited,"0")
###Output
0 1 2 3 4 5 6
0 1 2 3 4 6
0 3 4 5 6
0 3 4 6
###Markdown
Weight Solver
###Code
def weightsolver(graph,src,dest,visited,psf,wsf):
if src == dest:
print(psf," @ ",str(wsf))
return
visited[src] = True
for path in graph[src]:
nbr = path[1]
wgt = path[2]
if not visited[nbr]:
weightsolver(graph,nbr,dest,visited,psf+" "+str(nbr),wsf+wgt)
visited[src] = False
src = 0
dest = 6
visited = [False] * vertices
weightsolver(graph,src,dest,visited,"0",0)
###Output
0 1 2 3 4 5 6 @ 240
0 1 2 3 4 6 @ 190
0 3 4 5 6 @ 220
0 3 4 6 @ 170
###Markdown
Multisolver
###Code
import math
solvebox = {
"smallest": ["",math.inf],
"longest": ["",-math.inf],
"floor": ["",-math.inf],
"ceil": ["",math.inf],
}
criteria = 200
def multisolver(graph,src,dest,visited,psf,wsf):
if src == dest:
if wsf < solvebox["smallest"][1]:
solvebox["smallest"] = [psf,wsf]
if wsf > solvebox["longest"][1]:
solvebox["longest"] = [psf,wsf]
if wsf < criteria and wsf > solvebox["floor"][1]:
solvebox["floor"] = [psf,wsf]
if wsf > criteria and wsf < solvebox["ceil"][1]:
solvebox["ceil"] = [psf,wsf]
return
visited[src] = True
for path in graph[src]:
nbr = path[1]
wgt = path[2]
if not visited[nbr]:
multisolver(graph,nbr,dest,visited,psf+" "+str(nbr),wsf+wgt)
visited[src] = False
multisolver(graph,src,dest,visited,"0",0)
solvebox
###Output
_____no_output_____
###Markdown
Connected Components
###Code
def gen_comp(graph,src,comp,visited):
visited[src] = True
comp.append(src)
for path in graph[src]:
nbr = path[1]
if not visited[nbr]:
gen_comp(graph,nbr,comp,visited)
def traverse_vert(vertices,graph):
comps = []
visited = [False]*vertices
for vert in range(vertices):
if not visited[vert]:
conn_comp = []
gen_comp(graph,vert,conn_comp,visited)
comps.append(conn_comp)
return comps
v = 7
input = [
[0,1,10],
[2,3,10],
[4,5,10],
[5,6,10],
[4,6,10],
]
comp_graph = build_graph(v,input)
traverse_vert(vertices,comp_graph)
###Output
_____no_output_____
###Markdown
Count Number of Islands
###Code
def isconn(area,x,y,visited):
if(x<0 or x > len(area)-1 or y <0 or y > len(area[0])-1 or visited[x][y] or area[x][y] == 1):
return
visited[x][y] = True
isconn(area,x-1,y,visited)
isconn(area,x+1,y,visited)
isconn(area,x,y-1,visited)
isconn(area,x,y+1,visited)
def island(area):
count = 0
height = len(area)
width = len(area[0])
    visited = [[False] * width for _ in range(height)]  # independent rows, not aliased copies
for x in range(height):
for y in range(width):
if area[x][y] == 0 and not visited[x][y]:
isconn(area,x,y,visited)
count += 1
return count
area = [
[0,0,1,1,1,1,1,1],
[0,0,1,1,1,1,1,1],
[1,1,1,1,1,1,1,0],
[1,1,0,0,0,1,1,0],
[1,1,1,1,0,1,1,0],
[1,1,1,1,0,1,1,0],
[1,1,1,1,0,1,1,0],
[1,1,1,1,1,1,1,0],
[1,1,1,1,1,1,1,0],
]
island(area)
###Output
_____no_output_____
###Markdown
Perfect Friends
###Code
from collections import defaultdict
class Graph:
def __init__(self,vertices):
self.vertices = vertices
self.graph = defaultdict(list)
def add_edge(self,src,dest):
self.graph[src].append([src,dest])
self.graph[dest].append([dest,src])
def build_graph(vertices,edges):
g = Graph(vertices)
for i in edges:
g.add_edge(i[0],i[1])
return g.graph
persons = 7
pairs = [
[0,1],
[2,3],
[4,5],
[5,6],
[4,6],
]
perfect_graph = build_graph(persons,pairs)
# find connected components to find number of clubs
clubs = traverse_vert(persons,perfect_graph)
no_of_clubs = len(clubs)
perfect_pair = 0
for i in range(no_of_clubs):
for j in range(i+1,no_of_clubs):
perfect_pair += len(clubs[i]) * len(clubs[j])
perfect_pair
###Output
_____no_output_____
###Markdown
Hamilton Path (+) or Cycle (*)
###Code
from collections import defaultdict
class Graph:
def __init__(self,vertices):
self.vertices = vertices
self.graph = defaultdict(list)
def add_edge(self,src,dest):
self.graph[src].append([src,dest])
self.graph[dest].append([dest,src])
def build_graph(vertices,edges):
g = Graph(vertices)
for i in edges:
g.add_edge(i[0],i[1])
return g.graph
vertices = 7
hamilton_in = [
[0,1],
[0,3],
[1,2],
[2,3],
[2,5],
[5,6],
[5,4],
[6,4],
[4,3],
]
hamilton_graph = build_graph(vertices,hamilton_in)
def hamilton(graph,src,dest,visited,psf,osrc):
visited[src] = True
if all(visited):
for path in graph[src]:
nbr = path[1]
if nbr == osrc:
print(psf," ","*")
return
print(psf," ","+")
for path in graph[src]:
nbr = path[1]
if not visited[nbr]:
hamilton(graph,nbr,dest,visited,psf+" "+str(nbr),osrc)
visited[src] = False
visited = [False] * vertices
hamilton(hamilton_graph,0,6,visited,"0",0)
###Output
0 1 2 3 4 5 6 +
0 1 2 3 4 6 5 +
0 1 2 5 6 4 3 *
0 1 2 5 4 6 +
###Markdown
Breadth First Traversal
###Code
sample_graph = {
'P' : ['S','R','Q'],
'Q' : ['P','R'],
'R' : ['P','Q','T'],
'S' : ['P'],
'T' : ['R']
}
def bfs(graph,visited,queue,src):
visited.append(src)
queue.append(src)
while queue:
node = queue.pop(0)
print(node,end=" ")
for nbr in graph[node]:
if nbr not in visited:
queue.append(nbr)
visited.append(nbr)
visited = []
queue = []
src = 'P'
bfs(sample_graph,visited,queue,src)
###Output
P S R Q T
###Markdown
Has Cyclic
###Code
sample_graph = {
'P' : ['S','R','Q'],
'Q' : ['P','R'],
'R' : ['P','Q','T'],
'S' : ['P'],
'T' : ['R']
}
def hascycle(graph,visited,queue,src):
visited.append(src)
queue.append(src)
while queue:
node = queue.pop(0)
for nbr in graph[node]:
if nbr not in visited:
queue.append(nbr)
visited.append(nbr)
# if neighbour is visited, then it's forming a cycle.
else:
return True
return False
visited = []
queue = []
src = 'P'
hascycle(sample_graph,visited,queue,src)
# this works for a connected graph; to make it work for a non-connected graph, we need to loop over every vertex as src for the hascycle function (see the sketch below).
###Output
_____no_output_____
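###Markdown
Following the comment above, a sketch of how the check could cover a non-connected graph: start the BFS-based *hascycle* from every vertex that has not been visited yet, sharing the visited list across components (the wrapper name is illustrative).
###Code
# sketch: run hascycle from every unvisited vertex so that disconnected
# components are also checked for a cycle
def hascycle_all(graph):
    visited = []
    for src in graph:
        if src not in visited:
            # reuse the hascycle function defined above on this component
            if hascycle(graph, visited, [], src):
                return True
    return False

hascycle_all(sample_graph)
###Output
_____no_output_____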
###Markdown
Spread Of Infection
###Code
infection_graph = {
'P' : ['S','R','Q'],
'Q' : ['P','R'],
'R' : ['P','Q','T'],
'S' : ['P'],
'T' : ['R']
}
def spread_of_infection(graph,visited,queue,src,time):
visited.append(src)
queue.append([src,0])
count=0
while queue:
node = queue.pop(0)
if node[1] > time:
print(count)
return
count+=1
for nbr in graph[node[0]]:
if nbr not in visited:
queue.append([nbr,node[1]+1])
visited.append(nbr)
visited = []
queue = []
src = 'T'
time = 2
spread_of_infection(infection_graph,visited,queue,src,time)
###Output
4
###Markdown
Dijkstra Algorithm
###Code
import heapq
dijk_graph = {
0:[[0,1,10],[0,3,40]],
1:[[1,0,10],[1,2,10]],
2:[[2,1,10],[2,3,10]],
3:[[3,0,40],[3,4,2]],
4:[[4,3,2],[4,5,3],[4,6,8]],
5:[[5,4,3],[5,6,3]],
6:[[6,4,8],[6,5,3]]
}
def dijk_algo(graph,src):
pq = []
visited = [False] * len(graph)
heapq.heappush(pq,(0,src,""))
while pq:
rem = heapq.heappop(pq)
if not visited[rem[1]]:
visited[rem[1]] = True
print(f'{str(rem[1])} via {rem[2]} @ {str(rem[0])}')
for edge in graph[rem[1]]:
if not visited[edge[1]]:
heapq.heappush(pq,(rem[0]+edge[2],edge[1],rem[2]+str(edge[0])))
dijk_algo(dijk_graph,0)
###Output
0 via @ 0
1 via 0 @ 10
2 via 01 @ 20
3 via 012 @ 30
4 via 0123 @ 32
5 via 01234 @ 35
6 via 012345 @ 38
###Markdown
Prims Algorithm
###Code
import heapq
prims_graph = {
0:[[10,0,1],[40,0,3]],
1:[[10,1,0],[10,1,2]],
2:[[10,2,1],[10,2,3]],
3:[[40,3,0],[2,3,4]],
4:[[2,4,3],[3,4,5],[8,4,6]],
5:[[3,5,4],[3,5,6]],
6:[[8,6,4],[3,6,5]]
}
def prims_algo(graph,src):
pq = []
visited = [False] * len(graph)
heapq.heappush(pq,(0,src,-1))
while pq:
rem = heapq.heappop(pq)
if not visited[rem[1]]:
visited[rem[1]] = True
print(f'{str(rem[2])}-{str(rem[1])}@{str(rem[0])}')
for edge in graph[rem[1]]:
if not visited[edge[2]]:
heapq.heappush(pq,(rem[0]+edge[0],edge[2],rem[1]))
prims_algo(prims_graph,0)
###Output
-1-0@0
0-1@10
1-2@20
2-3@30
3-4@32
4-5@35
5-6@38
###Markdown
Topological Sort
###Code
topo_graph = {
0:[[0,1],[0,3]],
1:[[1,2]],
2:[[2,3]],
3:[],
4:[[4,3],[4,5],[4,6]],
5:[[5,6]],
6:[],
}
def helper(graph,src,visited,stack):
visited[src] = True
for edge in graph[src]:
if not visited[edge[1]]:
helper(graph,edge[1],visited,stack)
stack.insert(0,src)
def topo_sort(graph):
stack = []
visited = [False] * len(graph)
for i in range(len(graph)):
if not visited[i]:
helper(graph,i,visited,stack)
return stack
topo_sort(topo_graph)
###Output
_____no_output_____
###Markdown
Iterative Depth First Search
###Code
sample_graph = {
'P' : ['S','R','Q'],
'Q' : ['P','R'],
'R' : ['P','Q','T'],
'S' : ['P'],
'T' : ['R']
}
def dfs(graph,visited,stack,src):
stack.insert(0,(src,src))
while stack:
node = stack.pop(0)
if node[0] not in visited:
visited.append(node[0])
print(node[0],'@',node[1])
for nbr in graph[node[0]]:
if nbr not in visited:
stack.insert(0,(nbr,node[1]+nbr))
visited = []
stack = []
src = 'S'
dfs(sample_graph,visited,stack,src)
###Output
S @ S
P @ SP
Q @ SPQ
R @ SPQR
T @ SPQRT
|
tf.version.1/02.regression/03.2.mnist.softmax.with.tf.data.ipynb | ###Markdown
MNIST softmax with `tf.data`* Let's build a softmax classifier with the MNIST data. * [Reference source: mnist_softmax.py in version 1.4](https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/examples/tutorials/mnist/mnist_softmax.py) * Let's switch the input pipeline to `tf.data` Import modules
###Code
"""A very simple MNIST classifier.
See extensive documentation at
https://www.tensorflow.org/get_started/mnist/beginners in version 1.4
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import time
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import clear_output
import tensorflow as tf
sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
os.environ["CUDA_VISIBLE_DEVICES"]="0"
###Output
/home/lab4all/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Import data
###Code
# Load training and eval data from tf.keras
(train_data, train_labels), (test_data, test_labels) = \
tf.keras.datasets.mnist.load_data()
train_data = train_data / 255.
train_data = train_data.reshape(-1, 784)
train_labels = np.asarray(train_labels, dtype=np.int32)
test_data = test_data / 255.
test_data = test_data.reshape(-1, 784)
test_labels = np.asarray(test_labels, dtype=np.int32)
###Output
_____no_output_____
###Markdown
Show the MNIST
###Code
index = 200
print("label = {}".format(train_labels[index]))
plt.imshow(train_data[index].reshape(28, 28))
plt.colorbar()
#plt.gca().grid(False)
plt.show()
###Output
label = 1
###Markdown
Set up dataset with `tf.data` input pipeline `tf.data.Dataset` and Transformation
###Code
tf.set_random_seed(219)
batch_size = 32
max_epochs = 10
# for train
train_dataset = tf.data.Dataset.from_tensor_slices((train_data, train_labels))
train_dataset = train_dataset.shuffle(buffer_size = 10000)
train_dataset = train_dataset.repeat(count = max_epochs)
train_dataset = train_dataset.batch(batch_size = batch_size)
print(train_dataset)
# for test
test_dataset = tf.data.Dataset.from_tensor_slices((test_data, test_labels))
test_dataset = test_dataset.batch(batch_size = len(test_data))
print(test_dataset)
###Output
<BatchDataset shapes: ((?, 784), (?,)), types: (tf.float64, tf.int32)>
<BatchDataset shapes: ((?, 784), (?,)), types: (tf.float64, tf.int32)>
###Markdown
Define Iterator
###Code
# output_shapes of tf.data.Iterator.from_string_handle defaults to None, but it is better to pass a value explicitly
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(handle,
train_dataset.output_types,
train_dataset.output_shapes)
x, y = iterator.get_next()
x = tf.cast(x, dtype = tf.float32)
y = tf.cast(y, dtype = tf.int32)
###Output
_____no_output_____
###Markdown
Create the model
###Code
#x = tf.placeholder(tf.float32, [None, 784])
W = tf.get_variable(name='weights', shape=[784, 10], initializer=tf.zeros_initializer)
b = tf.get_variable(name='bias', shape=[10], initializer=tf.zeros_initializer)
y_pred = tf.matmul(x, W) + b
###Output
_____no_output_____
###Markdown
Define loss and optimizer* [`tf.nn.softmax_cross_entropy_with_logits_v2`](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits_v2)* [`tf.losses.softmax_cross_entropy`](https://www.tensorflow.org/api_docs/python/tf/losses/softmax_cross_entropy)
###Code
#y = tf.placeholder(tf.int32, [None])
y_one_hot = tf.one_hot(y, depth=10)
#cross_entropy = tf.reduce_mean(
# tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_one_hot, logits=y_pred))
cross_entropy = tf.losses.softmax_cross_entropy(onehot_labels=y_one_hot,
logits=y_pred)
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
###Output
_____no_output_____
###Markdown
`tf.Session()` and train
###Code
sess = tf.Session(config=sess_config)
sess.run(tf.global_variables_initializer())
# train_iterator
train_iterator = train_dataset.make_one_shot_iterator()
train_handle = sess.run(train_iterator.string_handle())
# Train
step = 1
loss_history = []
start_time = time.time()
while True:
try:
_, loss = sess.run([train_step, cross_entropy],
feed_dict={handle: train_handle})
loss_history.append(loss)
if step % 100 == 0:
clear_output(wait=True)
epochs = batch_size * step / float(len(train_data))
print("epochs: {:.2f}, step: {}, loss: {}".format(epochs, step, loss))
step += 1
except tf.errors.OutOfRangeError:
print("End of dataset") # ==> "End of dataset"
break
print("training done!")
print("Elapsed time: {}".format(time.time() - start_time))
###Output
epochs: 9.97, step: 18700, loss: 0.1535784751176834
End of dataset
training done!
Elapsed time: 21.528079748153687
###Markdown
Plot the loss function
###Code
plt.plot(loss_history, label='loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Test trained model* test accuracy: 0.9165 for 1 epoch (2.35 sec)* test accuracy: 0.9134 for 10 epochs (21.29 sec)
###Code
# test_iterator
test_iterator = test_dataset.make_one_shot_iterator()
test_handle = sess.run(test_iterator.string_handle())
correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y, tf.int64))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("test accuracy:", sess.run(accuracy, feed_dict={handle: test_handle}))
###Output
test accuracy: 0.9134
###Markdown
Plot test set
###Code
np.random.seed(219)
test_batch_size = 16
batch_index = np.random.choice(len(test_data), size=test_batch_size, replace=False)
batch_xs = test_data[batch_index]
batch_ys = test_labels[batch_index]
y_pred_ = sess.run(y_pred, feed_dict={x: batch_xs})
fig = plt.figure(figsize=(16, 10))
for i, (px, py) in enumerate(zip(batch_xs, y_pred_)):
p = fig.add_subplot(4, 8, i+1)
if np.argmax(py) == batch_ys[i]:
p.set_title("y_pred: {}".format(np.argmax(py)), color='blue')
else:
p.set_title("y_pred: {}".format(np.argmax(py)), color='red')
p.imshow(px.reshape(28, 28))
p.axis('off')
###Output
_____no_output_____ |
03_Machine_Learning/sol/[HW14]_Multiple_Logistic_Regression.ipynb | ###Markdown
[HW14] Multiple Logistic Regression Last time we practiced logistic regression on data that we generated ourselves. This time we will use real data and run logistic regression with several input variables.
###Code
# visualization을 위한 helper code입니다.
if 'google.colab' in str(get_ipython()):
print('Downloading plot_helpers.py to util/ (only neded for colab')
!mkdir util; wget https://raw.githubusercontent.com/minireference/noBSLAnotebooks/master/util/plot_helpers.py -P util
###Output
Downloading plot_helpers.py to util/ (only neded for colab
--2022-01-23 12:02:42-- https://raw.githubusercontent.com/minireference/noBSLAnotebooks/master/util/plot_helpers.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8787 (8.6K) [text/plain]
Saving to: ‘util/plot_helpers.py’
plot_helpers.py 100%[===================>] 8.58K --.-KB/s in 0s
2022-01-23 12:02:43 (67.2 MB/s) - ‘util/plot_helpers.py’ saved [8787/8787]
###Markdown
1.1 Images of metal-casting parts Computer vision is widely used in manufacturing to judge the condition of parts: given a photo of a part, a trained model decides whether it is defective or not. We will tackle this with multiple logistic regression. To keep the experiment simple, the images are converted to grayscale and we use only a small number of them. Let's import the required packages and load the data from the attached data file.
###Code
from autograd import numpy
from autograd import grad
from matplotlib import pyplot
from urllib.request import urlretrieve
URL = 'https://github.com/engineersCode/EngComp6_deeplearning/raw/master/data/casting_images.npz'
urlretrieve(URL, 'casting_images.npz')
# read in images and labels
with numpy.load("/content/casting_images.npz", allow_pickle=True) as data:
ok_images = data["ok_images"]
def_images = data["def_images"]
type(ok_images)
ok_images.shape
###Output
_____no_output_____
###Markdown
519 is the number of images in this array. The original data are 128 * 128 images, but because we flatten each image into a single vector, the second dimension is 16384. Now let's take a closer look at how our dataset is organized.
###Code
n_ok_total = ok_images.shape[0]
res = int(numpy.sqrt(def_images.shape[1]))
print("Number of images without defects:", n_ok_total)
print("Image resolution: {} by {}".format(res, res))
n_def_total = def_images.shape[0]
print("Number of images with defects:", n_def_total)
###Output
Number of images with defects: 781
###Markdown
There are 519 images without defects and 781 images with defects. Now let's look at some of the images with the pyplot package.
###Code
fig, axes = pyplot.subplots(2, 3, figsize=(8, 6), tight_layout=True)
axes[0, 0].imshow(ok_images[0].reshape((res, res)), cmap="gray")
axes[0, 1].imshow(ok_images[50].reshape((res, res)), cmap="gray")
axes[0, 2].imshow(ok_images[100].reshape((res, res)), cmap="gray")
axes[1, 0].imshow(ok_images[150].reshape((res, res)), cmap="gray")
axes[1, 1].imshow(ok_images[200].reshape((res, res)), cmap="gray")
axes[1, 2].imshow(ok_images[250].reshape((res, res)), cmap="gray")
fig.suptitle("Casting parts without defects", fontsize=20);
fig, axes = pyplot.subplots(2, 3, figsize=(8, 6), tight_layout=True)
axes[0, 0].imshow(def_images[0].reshape((res, res)), cmap="gray")
axes[0, 1].imshow(def_images[50].reshape((res, res)), cmap="gray")
axes[0, 2].imshow(def_images[100].reshape((res, res)), cmap="gray")
axes[1, 0].imshow(def_images[150].reshape((res, res)), cmap="gray")
axes[1, 1].imshow(def_images[200].reshape((res, res)), cmap="gray")
axes[1, 2].imshow(def_images[250].reshape((res, res)), cmap="gray")
fig.suptitle("Casting parts with defects", fontsize=20);
###Output
_____no_output_____
###Markdown
1.2 Multiple logistic regression When we learned logistic regression last time, we also learned the logistic function. The logistic function transforms the output into a probability between 0 and 1, so it is widely used when, as here, there are two classes to separate. The difference from last time is that we then had a single input variable, whereas now we have several. Let's write this out as equations. $$\hat{y}^{(1)} = \text{logistic}(b + w_1x_1^{(1)}+ w_2x_2^{(1)} + ... + w_nx_n^{(1)})$$$$\hat{y}^{(2)} = \text{logistic}(b + w_1x_1^{(2)}+ w_2x_2^{(2)} + ... + w_nx_n^{(2)})$$$$\vdots$$$$\hat{y}^{(N)} = \text{logistic}(b + w_1x_1^{(N)}+ w_2x_2^{(N)} + ... + w_nx_n^{(N)})$$In the equations above, $(1), (2), ... (N)$ indicate that there are $N$ images, and $\hat{y}$ is the predicted probability. Rewriting the equations in matrix form gives $$\begin{bmatrix}\hat{y}^{(1)} \\\vdots \\\hat{y}^{(N)}\end{bmatrix} = \text{logistic} \left(\begin{bmatrix}b \\\vdots \\b\end{bmatrix}+\begin{bmatrix}x_1^{(1)} & \cdots & x_n^{(1)} \\\vdots & \ddots & \vdots \\x_1^{(N)} & \cdots & x_n^{(N)} \end{bmatrix}\begin{bmatrix}w_1 \\\vdots \\w_n\end{bmatrix}\right)$$$$\hat{\mathbf{y}} = \text{logistic}(\mathbf{b} + \mathbf{X} \mathbf{w})$$Now let's see this in code.
###Code
def logistic(x):
"""Logistic/sigmoid function.
Arguments
---------
x : numpy.ndarray
The input to the logistic function.
Returns
-------
numpy.ndarray
The output.
Notes
-----
The function does not restrict the shape of the input array. The output
has the same shape as the input.
"""
out = 1. / (1. + numpy.exp(-x))
return out
def logistic_model(x, params):
"""A logistic regression model.
    A logistic regression is y = sigmoid(x * w + b), where the operator *
denotes a mat-vec multiplication.
Arguments
---------
x : numpy.ndarray
The input of the model. The shape should be (n_images, n_total_pixels).
    params : a tuple/list of two elements
The first element is a 1D array with shape (n_total_pixels). The
second element is a scalar (the intercept)
Returns
-------
probabilities : numpy.ndarray
The output is a 1D array with length n_samples.
"""
out = logistic(numpy.dot(x, params[0]) + params[1])
return out
###Output
_____no_output_____
###Markdown
Now let's build the cost function. We use the cost function from the logistic regression lab, and because we have as many as 16384 features, we also add the regularization term we learned last time. $$\text{cost function} = -\sum_{i=1}^N \left[ y_{\text{true}}^{(i)} \log\left(\hat{y}^{(i)}\right) + \left( 1- y_{\text{true}}^{(i)}\right) \log\left(1-\hat{y}^{(i)}\right) \right] + \lambda \sum_{i=1}^n w_i^2 $$Written in vector form this becomes $$\text{cost function} = - [\mathbf{y}_{\text{true}}\log\left(\mathbf{\hat{y}}\right) + \left( \mathbf{1}- \mathbf{y}_{\text{true}}\right) \log\left(\mathbf{1}-\mathbf{\hat{y}}\right)] + \lambda \sum_{i=1}^n w_i^2 $$where $\mathbf{1}$ is a vector of ones. Let's put this into code.
###Code
def model_loss(x, true_labels, params, _lambda=1.0):
"""Calculate the predictions and the loss w.r.t. the true values.
Arguments
---------
x : numpy.ndarray
The input of the model. The shape should be (n_images, n_total_pixels).
true_labels : numpy.ndarray
The true labels of the input images. Should be 1D and have length of
n_images.
params : a tuple/list of two elements
The first element is a 1D array with shape (n_total_pixels). The
        second element is a scalar.
_lambda : float
The weight of the regularization term. Default: 1.0
Returns
-------
loss : a scalar
The summed loss.
"""
pred = logistic_model(x, params)
loss = - (
numpy.dot(true_labels, numpy.log(pred+1e-15)) +
numpy.dot(1.-true_labels, numpy.log(1.-pred+1e-15))
) + _lambda * numpy.sum(params[0]**2)
return loss
###Output
_____no_output_____
###Markdown
1.3 Training, validation, and test datasets The goal of training a model is not simply to explain the given dataset well. It is to explain the given data well and, through that, to predict well on new data that was not seen during training. So performance must be measured on data the model did not see while training, because that is the real objective. This is similar to studying for a national entrance exam: we take classes and work through many workbooks in order to score well on the exam. The workbooks play the role of training data, and the goal is to perform well on the exam, which is new data never seen in the workbooks. For this reason we split the data into three parts: **training, validation, and test**. We use training and validation during learning, and measure the final performance on the test set. Why is validation needed in addition to training and test? We have to predict well on test data we have never seen, so while learning from the training data we need a way to check whether the current way of training also works on new data, and to adjust it if it does not. That is what the validation set is for: while training on the training data, we make sure the model also behaves well on the unseen validation data. This is like taking mock exams while studying: a mock exam reveals weak points and guides how to study next. Now let's split the whole dataset into three parts: 60% for training, 20% for validation, and the remaining 20% for test. Let's do the split in code.
###Code
# numbers of images for validation (~ 20%)
n_ok_val = int(n_ok_total * 0.2)
n_def_val = int(n_def_total * 0.2)
print("Number of images without defects in validation dataset:", n_ok_val)
print("Number of images with defects in validation dataset:", n_def_val)
# numbers of images for test (~ 20%)
n_ok_test = int(n_ok_total * 0.2)
n_def_test = int(n_def_total * 0.2)
print("Number of images without defects in test dataset:", n_ok_test)
print("Number of images with defects in test dataset:", n_def_test)
# remaining images for training (~ 60%)
n_ok_train = n_ok_total - n_ok_val - n_ok_test
n_def_train = n_def_total - n_def_val - n_def_test
print("Number of images without defects in training dataset:", n_ok_train)
print("Number of images with defects in training dataset:", n_def_train)
###Output
Number of images without defects in validation dataset: 103
Number of images with defects in validation dataset: 156
Number of images without defects in test dataset: 103
Number of images with defects in test dataset: 156
Number of images without defects in training dataset: 313
Number of images with defects in training dataset: 469
###Markdown
Now we split the arrays with the split function from the numpy package.
###Code
ok_images = numpy.split(ok_images, [n_ok_val, n_ok_val+n_ok_test], 0)
def_images = numpy.split(def_images, [n_def_val, n_def_val+n_def_test], 0)
###Output
_____no_output_____
###Markdown
Now we use the concatenate function from the numpy package to combine the defective and non-defective images within each of the train, val, and test splits.
###Code
images_val = numpy.concatenate([ok_images[0], def_images[0]], 0)
images_test = numpy.concatenate([ok_images[1], def_images[1]], 0)
images_train = numpy.concatenate([ok_images[2], def_images[2]], 0)
###Output
_____no_output_____
###Markdown
1.4 Data normalization: z-score normalization As we did last time, when there are many features we need to normalize them. This time we use z-score normalization. $$z = \frac{x - \mu_\text{train}}{\sigma_\text{train}}$$We apply it to the training, validation, and test sets.
###Code
# calculate mu and sigma
mu = numpy.mean(images_train, axis=0)
sigma = numpy.std(images_train, axis=0)
# normalize the training, validation, and test datasets
images_train = (images_train - mu) / sigma
images_val = (images_val - mu) / sigma
images_test = (images_test - mu) / sigma
###Output
_____no_output_____
###Markdown
1.5 Creating labels/classes Now we need to assign class labels to the dataset, explicitly indicating whether each image has a defect or not. We label images with a defect as 1 and images without a defect as 0.
###Code
# labels for training data
labels_train = numpy.zeros(n_ok_train+n_def_train)
labels_train[n_ok_train:] = 1.
# labels for validation data
labels_val = numpy.zeros(n_ok_val+n_def_val)
labels_val[n_ok_val:] = 1.
# labels for test data
labels_test = numpy.zeros(n_ok_test+n_def_test)
labels_test[n_ok_test:] = 1.
###Output
_____no_output_____
###Markdown
Now we use the logistic model to decide whether an input image has a defect. As we did last time, if the output probability is at least 0.5 we say the part is defective, and otherwise we say it is not.
###Code
def classify(x, params):
"""Use a logistic model to label data with 0 or/and 1.
Arguments
---------
x : numpy.ndarray
The input of the model. The shape should be (n_images, n_total_pixels).
params : a tuple/list of two elements
The first element is a 1D array with shape (n_total_pixels). The
second element is a scalar.
Returns
-------
labels : numpy.ndarray
The shape of the label is the same with `probability`.
Notes
-----
This function only works with multiple images, i.e., x has a shape of
(n_images, n_total_pixels).
"""
probabilities = logistic_model(x, params)
labels = (probabilities >= 0.5).astype(float)
return labels
###Output
_____no_output_____
###Markdown
1.6 Evaluating model performance: F-score, Accuracy Now let's see how well our trained model predicts. Every prediction falls into one of the following four categories: 1. True Positive (TP): predicted defective and actually defective 2. False Positive (FP): predicted defective but actually not defective 3. True Negative (TN): predicted not defective and actually not defective 4. False Negative (FN): predicted not defective but actually defective | | predicted defective | predicted not defective ||--- |--- |--- ||actually defective | $$N_{TP}$$ | $$N_{FN}$$ ||actually not defective | $$N_{FP}$$ | $$N_{TN}$$ | Here $N$ denotes a count. Using these, let's look at the three most commonly used metrics. $$\text{accuracy} = \frac{\text{number of correct predictions}}{\text{total number of predictions}} = \frac{N_{TP} + N_{TN}}{N_{TP}+N_{FN}+N_{FP}+N_{TN}}$$$$\text{precision} = \frac{\text{number correctly predicted as defective}}{\text{total number predicted as defective}} = \frac{N_{TP}}{N_{TP}+N_{FP}}$$$$\text{recall} = \frac{\text{number correctly predicted as defective}}{\text{number actually defective}} =\frac{N_{TP}}{N_{TP}+N_{FN}}$$From precision and recall we can compute the F-score: $$\text{F-score} = \frac{(1+\beta^2) \text{precision} \times \text{recall}}{\beta^2 \text{precision} + \text{recall}}$$$\beta$ is a constant we choose depending on whether we care more about precision or recall. Now let's implement accuracy and the F-score in code.
###Code
def performance(predictions, answers, beta=1.0):
"""Calculate precision, recall, and F-score.
Arguments
---------
predictions : numpy.ndarray of integers
The predicted labels.
answers : numpy.ndarray of integers
The true labels.
beta : float
A coefficient representing the weight of recall.
Returns
-------
precision, recall, score, accuracy : float
Precision, recall, and F-score, accuracy respectively.
"""
true_idx = (answers == 1) # the location where the answers are 1
false_idx = (answers == 0) # the location where the answers are 0
# true positive: answers are 1 and predictions are also 1
n_tp = numpy.count_nonzero(predictions[true_idx] == 1)
# false positive: answers are 0 but predictions are 1
n_fp = numpy.count_nonzero(predictions[false_idx] == 1)
# true negative: answers are 0 and predictions are also 0
n_tn = numpy.count_nonzero(predictions[false_idx] == 0)
# false negative: answers are 1 but predictions are 0
n_fn = numpy.count_nonzero(predictions[true_idx] == 0)
# precision, recall, and f-score
precision = n_tp / (n_tp + n_fp)
recall = n_tp / (n_tp + n_fn)
score = (
(1.0 + beta**2) * precision * recall /
(beta**2 * precision + recall)
)
accuracy = (n_tp + n_tn) / (n_tp + n_fn + n_fp + n_tn)
return precision, recall, score, accuracy
###Output
_____no_output_____
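###Markdown
As a quick sanity check, the function can be tried on a few hand-made labels. This toy example is not part of the original notebook; the numbers are made up.
###Code
# toy example: 4 predictions vs. 4 true labels
toy_pred = numpy.array([1, 0, 1, 1])
toy_true = numpy.array([1, 1, 0, 1])
# TP=2, FN=1, FP=1, TN=0 -> precision=2/3, recall=2/3, F-score=2/3, accuracy=0.5
performance(toy_pred, toy_true)
###Output
_____no_output_____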
###Markdown
The higher both values are, the better. 1.7 Initialization Now let's initialize the parameters we are going to train. First we initialize them to zero and measure the performance.
###Code
# a function to get the gradients of a logistic model
gradients = grad(model_loss, argnum=2)
# initialize parameters
w = numpy.zeros(images_train.shape[1], dtype=float)
b = 0.
###Output
_____no_output_____
###Markdown
To compare performance before and after training, let's measure the performance on the test dataset before training.
###Code
# initial accuracy
pred_labels_test = classify(images_test, (w, b))
perf = performance(pred_labels_test, labels_test)
print("Initial precision: {:.1f}%".format(perf[0]*100))
print("Initial recall: {:.1f}%".format(perf[1]*100))
print("Initial F-score: {:.1f}%".format(perf[2]*100))
print("Initial Accuracy: {:.1f}%".format(perf[3]*100))
###Output
Initial precision: 60.2%
Initial recall: 100.0%
Initial F-score: 75.2%
Initial Accuracy: 60.2%
###Markdown
Even though we initialized with zeros, the performance does not look bad. Why is that? If all parameters are zero, the model simply predicts that every part is defective. Because we have 103 non-defective parts and 156 defective parts, the metrics look decent. It is like a student who answers F to every true/false question and still gets a high score because most answers happen to be F. The test and validation sets should resemble the data we expect in reality; a real manufacturing line would not produce this many defective parts, so this is not a good dataset. Building a dataset requires knowledge of the domain in question, but since this is just an exercise we will proceed as is. 1.8 Training / optimization Now let's actually train the model. While training, we check how well the training is going with the validation data, and we stop when the validation loss no longer decreases.
###Code
# learning rate
lr = 1e-5
# a variable for the change in validation loss
change = numpy.inf
# a counter for optimization iterations
i = 0
# a variable to store the validation loss from the previous iteration
old_val_loss = 1e-15
# keep running if:
# 1. we still see significant changes in validation loss
# 2. iteration counter < 10000
while change >= 1e-5 and i < 10000:
# calculate gradients and use gradient descents
grads = gradients(images_train, labels_train, (w, b))
w -= (grads[0] * lr)
b -= (grads[1] * lr)
# validation loss
val_loss = model_loss(images_val, labels_val, (w, b))
# calculate f-scores against the validation dataset
pred_labels_val = classify(images_val, (w, b))
score = performance(pred_labels_val, labels_val)
# calculate the chage in validation loss
change = numpy.abs((val_loss-old_val_loss)/old_val_loss)
# update the counter and old_val_loss
i += 1
old_val_loss = val_loss
# print the progress every 10 steps
if i % 10 == 0:
print("{}...".format(i), end="")
print("")
print("")
print("Upon optimization stopped:")
print(" Iterations:", i)
print(" Validation loss:", val_loss)
print(" Validation precision:", score[0])
print(" Validation recall:", score[1])
print(" Validation F-score:", score[2])
print(" Validation Accuracy:", score[3])
print(" Change in validation loss:", change)
###Output
10...20...30...40...50...60...70...80...90...100...110...120...130...140...150...160...170...180...190...200...210...220...230...240...250...260...270...280...290...300...310...320...330...340...350...360...370...380...390...400...410...420...430...440...450...460...470...480...490...500...510...520...530...540...
Upon optimization stopped:
Iterations: 541
Validation loss: 126.77255420521098
Validation precision: 0.900709219858156
Validation recall: 0.8141025641025641
Validation F-score: 0.8552188552188552
Validation Accuracy: 0.833976833976834
Change in validation loss: 1.691256849174652e-06
###Markdown
The final performance must be measured on the test dataset! Now let's compute our final performance.
###Code
# final accuracy
pred_labels_test = classify(images_test, (w, b))
perf = performance(pred_labels_test, labels_test)
print("Final precision: {:.1f}%".format(perf[0]*100))
print("Final recall: {:.1f}%".format(perf[1]*100))
print("Final F-score: {:.1f}%".format(perf[2]*100))
print("Final Accuracy: {:.1f}%".format(perf[3]*100))
###Output
Final precision: 88.0%
Final recall: 84.6%
Final F-score: 86.3%
Final Accuracy: 83.8%
###Markdown
The F-score rose from 75.2% to 86.3% and the accuracy from 60.2% to 83.8%, a big improvement! We can see that the model trained well. From [HW10] to [HW14] we learned how to train models based on linear models. Next time is a competition: using the data I provide, whoever achieves the best performance on the test set wins. If you have any questions, please ask. Well done, everyone!
###Code
###Output
_____no_output_____ |
Assignments/Assignment_2.ipynb | ###Markdown
Second Assignment 1) Create a function called **"even_squared"** that receives an integer value **N**, and returns a list containing, in ascending order, the square of each of the even values, from 1 to N, including N if applicable.
###Code
x = 10
def even_squared(x):
my_list = []
x = x + 1
x = range(x)
for y in x:
if y % 2 == 0:
my_list.append(y**2)
return my_list
even_squared(x)
###Output
_____no_output_____
###Markdown
2) Using a while loop and the **input()** function, read an indefinite amount of **integers** until the number read is **-1**. After this process, print two lists on the screen: The first containing the even integers, and the second containing the odd integers. Both must be in ascending order.
###Code
even_list = []
odd_list = []
while True :
x = int(input())
if x % 2 == 0:
even_list.append(x)
if x % 2 != 0:
odd_list.append(x)
if x == -1:
print(even_list)
print(odd_list)
break
###Output
2
99
5
7
8
9
-3
-1
[2, 8]
[99, 5, 7, 9, -3, -1]
###Markdown
3) Create a function called **"even_account"** that receives a list of integers, counts the number of existing even elements, and returns this count.
###Code
def even_account():
list1 =(list(map(int,input().split())))
even_list = []
for x in list1:
if x % 2 == 0:
even_list.append(x)
return len(even_list)
even_account()
###Output
1 2 3 4 5 6 -3 -6 -8
###Markdown
4) Create a function called **"squared_list"** that receives a list of integers and returns another list whose elements are the squares of the elements of the first.
###Code
def squared_list():
list1 =(list(map(int,input().split())))
squared = []
for x in list1:
x = x**2
squared.append(x)
return squared
squared_list()
###Output
1 2 3 4 5 6 7 8 9
###Markdown
5) Create a function called **"descending"** that receives two lists of integers and returns a single list, which contains all the elements in descending order, and may include repeated elements.
###Code
def descending():
list1 =(list(map(int,input().split())))
list2 =(list(map(int,input().split())))
list3 = list1 + list2
return sorted(list3, reverse = True)
descending()
###Output
1 2 3 4 5
8 7 6 5 4
###Markdown
6) Create a function called **"adding"** that receives a list **A**, and an arbitrary number of integers as input. Return a new list containing the elements of **A** plus the integers passed as input, in the order in which they were given. Here is an example: >```python>>>> A = [10,20,30]>>>> adding(A, 4, 10, 50, 1)> [10, 20, 30, 4, 10, 50, 1]```
###Code
def adding():
list1 =(list(map(int,input().split())))
A = [10, 20 , 30]
B = A + list1
return B
adding()
###Output
4 10 50 1
###Markdown
7) Create a function called **"intersection"** that receives two input lists and returns another list with the values that belong to the two lists simultaneously (intersection) without repetition of values and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2, 3]>>>> B = [-1, 2, 3, 6, 8]>>>> intersection(A,B)> [2, 3]```
###Code
def intersection():
list1 =(list(map(int,input().split())))
list2 =(list(map(int,input().split())))
list3 = [value for value in list1 if value in list2]
return list3
intersection()
###Output
-2 0 1 2 3
-1 2 3 6 8
###Markdown
8) Create a function called **"union"** that receives two input lists and returns another list with the union of the elements of the two received, without repetition of elements and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2]>>>> B = [-1, 1, 2, 10]>>>> union(A,B)> [-2, -1, 0, 1, 2, 10]```
###Code
def union():
list1 = (list(map(int,input().split())))
list2 = (list(map(int,input().split())))
union_list = []
for x in list1:
if x not in union_list:
union_list.append(x)
for x in list2:
if x not in union_list:
union_list.append(x)
return sorted(union_list)
union()
###Output
-2 0 1 2
-1 1 2 10
###Markdown
9) Generalize the **"intersection"** function so that it receives an indefinite number of lists and returns the intersection of all of them. Call the new function **intersection2**.
###Code
def intersection2():
list1 =(list(map(int,input().split())))
list2 =(list(map(int,input().split())))
list3 =(list(map(int,input().split())))
temp_list =[]
final_list = []
for x in list1:
if x not in temp_list:
temp_list.append(x)
else:
final_list.append(x)
for x in list2:
if x not in temp_list:
temp_list.append(x)
else:
final_list.append(x)
for x in list3:
if x not in temp_list:
temp_list.append(x)
else:
final_list.append(x)
return final_list
intersection2()
###Output
1 2 3
7 4 1
3 5 7
###Markdown
Second Assignment 1) Create a function called **"even_squared"** that receives an integer value **N**, and returns a list containing, in ascending order, the square of each of the even values, from 1 to N, including N if applicable.
###Code
def even_squared():
N = int(input("Number please: "))
square = []
for num in range(1,N+1):
if num % 2 == 0:
square.append(num**2)
print("The square of all even values from 1 to " + str(N) + " is " + str(sorted(square)))
even_squared()
###Output
Number please: 96
The square of all even values from 1 to 96 is [4, 16, 36, 64, 100, 144, 196, 256, 324, 400, 484, 576, 676, 784, 900, 1024, 1156, 1296, 1444, 1600, 1764, 1936, 2116, 2304, 2500, 2704, 2916, 3136, 3364, 3600, 3844, 4096, 4356, 4624, 4900, 5184, 5476, 5776, 6084, 6400, 6724, 7056, 7396, 7744, 8100, 8464, 8836, 9216]
###Markdown
2) Using a while loop and the **input()** function, read an indefinite amount of **integers** until the number read is **-1**. After this process, print two lists on the screen: The first containing the even integers, and the second containing the odd integers. Both must be in ascending order.
###Code
even = []
odd = []
num = int(input())
while num != -1:
if num % 2 == 0:
even.append(num)
elif num % 2 != 0:
odd.append(num)
num = int(input())
if num == -1:
print("Even numbers: " + str(sorted(even)) + "\nOdd numbers: " + str(sorted(odd)))
else: print("oops")
###Output
3
4
5
8
-2
-1
Even numbers: [-2, 4, 8]
Odd numbers: [3, 5]
###Markdown
3) Create a function called **"even_account"** that receives a list of integers, counts the number of existing even elements, and returns this count.
###Code
def even_account(list):
even = 0
for element in list:
if element % 2 == 0:
even += 1
print("There are " + str(even) + " even numbers in the list.")
even_account([1,34,6,17,28,19,20])
###Output
There are 4 even numbers in the list.
###Markdown
4) Create a function called **"squared_list"** that receives a list of integers and returns another list whose elements are the squares of the elements of the first.
###Code
def squared_list(list):
squares = []
for num in list:
if num % list[0] == 0:
squares.append(num)
print("The numbers " + str(squares) + " of the given list are squares of the first given number " + str(list[0]) + ".")
squared_list([3,24,12,55,21,38,33,52])
###Output
The numbers [3, 24, 12, 21, 33] of the given list are squares of the first given number 3.
###Markdown
5) Create a function called **"descending"** that receives two lists of integers and returns a single list, which contains all the elements in descending order, and may include repeated elements.
###Code
def descending(list1, list2):
return sorted(list1 + list2, reverse = True)
descending([1,4,77,23], [40,887,5,2,76])
###Output
_____no_output_____
###Markdown
6) Create a function called **"adding"** that receives a list **A**, and an arbitrary number of integers as input. Return a new list containing the elements of **A** plus the integers passed as input, in the order in which they were given. Here is an example: >```python>>>> A = [10,20,30]>>>> adding(A, 4, 10, 50, 1)> [10, 20, 30, 4, 10, 50, 1]```
###Code
def adding(A,*args):
for arg in args:
A.append(arg)
return A
adding([10,20,30],4,10,50,1)
###Output
_____no_output_____
###Markdown
7) Create a function called **"intersection"** that receives two input lists and returns another list with the values that belong to the two lists simultaneously (intersection) without repetition of values and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2, 3]>>>> B = [-1, 2, 3, 6, 8]>>>> intersection(A,B)> [2, 3]```
###Code
def intersection(A,B):
int = []
for elem in A:
if elem == elem in B:
if elem not in int:
int.append(elem)
return sorted(int)
intersection([3,2,3,1,50,5],[1,50,3,4])
###Output
_____no_output_____
###Markdown
8) Create a function called **"union"** that receives two input lists and returns another list with the union of the elements of the two received, without repetition of elements and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2]>>>> B = [-1, 1, 2, 10]>>>> union(A,B)> [-2, -1, 0, 1, 2, 10]```
###Code
def union(A,B):
return sorted(dict.fromkeys(A+B))
union([-2,0,1,3,2],[-1,1,2,3,10])
###Output
_____no_output_____
###Markdown
9) Generalize the **"intersection"** function so that it receives an indefinite number of lists and returns the intersection of all of them. Call the new function **intersection2**.
###Code
def intersection2(*lists):
import itertools
int = list(itertools.chain(*lists))
return sorted(dict.fromkeys(int))
intersection2([3,2,3,1,50,5],[1,50,3,4],[3,5,30,5,18])
###Output
_____no_output_____
###Markdown
Challenge 10) Create a function named **"matrix"** that implements matrix multiplication: Given the matrices: $A_{m\times n}=\left[\begin{matrix}a_{11}&a_{12}&...&a_{1n}\\a_{21}&a_{22}&...&a_{2n}\\\vdots &\vdots &&\vdots\\a_{m1}&a_{m2}&...&a_{mn}\\\end{matrix}\right]$ We will represent then as a list of lists.$A = [[a_{11},a_{12},...,a_{1n}],[a_{21},a_{22},...,a_{2n}], . . . ,[a_{m1},a_{m2},...,a_{mn}]]$The **"matrix"** funtion must receive two matrices $A$ e $B$ in the specified format and return $A\times B$
###Code
def matrix():
import numpy as np
A = np.array(input("Matrice A: "))
B = np.array(input("Matrice B: "))
C = np.matmul(A,B)
print(C)
matrix()
###Output
Matrice A: [[1,2,3],[2,3,4],[4,5,6]]
Matrice B: [[2,3,4],[3,4,5],[5,6,7]]
[[23 29 35]
[33 42 51]
[53 68 83]]
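###Markdown
For reference, a sketch of the same multiplication done with plain nested lists and loops, without numpy. The function name and the example matrices below are illustrative, not part of the submitted answer.
###Code
def matrix_lists(A, B):
    # A is m x n, B is n x p; the result C is m x p
    m, n, p = len(A), len(A[0]), len(B[0])
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]  # dot product of row i and column j
    return C

matrix_lists([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # expected: [[19, 22], [43, 50]]
###Output
_____no_output_____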
###Markdown
Second Assignment 1) Create a function called **"even_squared"** that receives an integer value **N**, and returns a list containing, in ascending order, the square of each of the even values, from 1 to N, including N if applicable.
###Code
def even_squared():
l = list()
for i in range(1,21):
l.append(i**2)
print(l)
even_squared()
###Output
_____no_output_____
###Markdown
2) Using a while loop and the **input()** function, read an indefinite amount of **integers** until the number read is **-1**. After this process, print two lists on the screen: The first containing the even integers, and the second containing the odd integers. Both must be in ascending order.
###Code
###Output
_____no_output_____
###Markdown
3) Create a function called **"even_account"** that receives a list of integers, counts the number of existing even elements, and returns this count.
###Code
###Output
_____no_output_____
###Markdown
4) Create a function called **"squared_list"** that receives a list of integers and returns another list whose elements are the squares of the elements of the first.
###Code
###Output
_____no_output_____
###Markdown
5) Create a function called **"descending"** that receives two lists of integers and returns a single list, which contains all the elements in descending order, and may include repeated elements.
###Code
###Output
_____no_output_____
###Markdown
6) Create a function called **"adding"** that receives a list **A**, and an arbitrary number of integers as input. Return a new list containing the elements of **A** plus the integers passed as input, in the order in which they were given. Here is an example: >```python>>>> A = [10,20,30]>>>> adding(A, 4, 10, 50, 1)> [10, 20, 30, 4, 10, 50, 1]```
###Code
###Output
_____no_output_____
###Markdown
7) Create a function called **"intersection"** that receives two input lists and returns another list with the values that belong to the two lists simultaneously (intersection) without repetition of values and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2, 3]>>>> B = [-1, 2, 3, 6, 8]>>>> intersection(A,B)> [2, 3]```
###Code
###Output
_____no_output_____
###Markdown
8) Create a function called **"union"** that receives two input lists and returns another list with the union of the elements of the two received, without repetition of elements and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2]>>>> B = [-1, 1, 2, 10]>>>> union(A,B)> [-2, -1, 0, 1, 2, 10]```
###Code
###Output
_____no_output_____
###Markdown
9) Generalize the **"intersection"** function so that it receives an indefinite number of lists and returns the intersection of all of them. Call the new function **intersection2**.
###Code
###Output
_____no_output_____
###Markdown
Challenge 10) Create a function named **"matrix"** that implements matrix multiplication: Given the matrices: $A_{m\times n}=\left[\begin{matrix}a_{11}&a_{12}&...&a_{1n}\\a_{21}&a_{22}&...&a_{2n}\\\vdots &\vdots &&\vdots\\a_{m1}&a_{m2}&...&a_{mn}\\\end{matrix}\right]$ We will represent then as a list of lists.$A = [[a_{11},a_{12},...,a_{1n}],[a_{21},a_{22},...,a_{2n}], . . . ,[a_{m1},a_{m2},...,a_{mn}]]$The **"matrix"** funtion must receive two matrices $A$ e $B$ in the specified format and return $A\times B$
###Code
###Output
_____no_output_____
###Markdown
Second Assignment 1) Create a function called **"even_squared"** that receives an integer value **N**, and returns a list containing, in ascending order, the square of each of the even values, from 1 to N, including N if applicable.
###Code
def even_squared(N):
les = [i ** 2 for i in range(1, N+1) if i % 2 == 0]
return les
even_squared(10)
###Output
_____no_output_____
###Markdown
2) Using a while loop and the **input()** function, read an indefinite amount of **integers** until the number read is **-1**. After this process, print two lists on the screen: The first containing the even integers, and the second containing the odd integers. Both must be in ascending order.
###Code
eveni = []
oddi = []
while True:
inp = int(input('Enter a number'))
if inp == -1:
break
elif inp % 2 == 0:
eveni.append(inp)
else:
oddi.append(inp)
print(sorted(eveni), sorted(oddi), sep='\n')
###Output
Enter a number 2
Enter a number 3
Enter a number 4
Enter a number 5
Enter a number 1
Enter a number 0
Enter a number -1
###Markdown
3) Create a function called **"even_account"** that receives a list of integers, counts the number of existing even elements, and returns this count.
###Code
def even_account(LoI):
count = len([v for v in LoI if v % 2 == 0])
return count
even_account([2, 3, 4, 5, 6, 1, 2, 4, 7])
###Output
_____no_output_____
###Markdown
4) Create a function called **"squared_list"** that receives a list of integers and returns another list whose elements are the squares of the elements of the first.
###Code
def squared_list(LOI):
LOI1 = [v ** 2 for v in LOI]
return LOI1
squared_list([2, 3, 4, 1, 10])
###Output
_____no_output_____
###Markdown
5) Create a function called **"descending"** that receives two lists of integers and returns a single list, which contains all the elements in descending order, and may include repeated elements.
###Code
def descending(Loi, Loi1):
Loi.extend(Loi1)
return sorted(Loi)[::-1]
print(descending([2, 4, 5], [6, 2, 3]))
###Output
[6, 5, 4, 3, 2, 2]
###Markdown
6) Create a function called **"adding"** that receives a list **A**, and an arbitrary number of integers as input. Return a new list containing the elements of **A** plus the integers passed as input, in the order in which they were given. Here is an example: >```python>>>> A = [10,20,30]>>>> adding(A, 4, 10, 50, 1)> [10, 20, 30, 4, 10, 50, 1]```
###Code
def adding(A, *args):
A1 = [a for a in args]
return A + A1
A = [3,4,5,6]
adding(A, 10, 100, 200)
###Output
_____no_output_____
###Markdown
7) Create a function called **"intersection"** that receives two input lists and returns another list with the values that belong to the two lists simultaneously (intersection) without repetition of values and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2, 3]>>>> B = [-1, 2, 3, 6, 8]>>>> intersection(A,B)> [2, 3]```
###Code
def intersection(L1, L2):
L3 = [x for x in L1 if x in L2]
L4 = []
for y in L3:
if y not in L4:
L4.append(y)
return sorted(L4)
A = [3, 5, 6,]
B = [5, 5, 6, 6, 2]
intersection(A, B)
###Output
_____no_output_____
###Markdown
8) Create a function called **"union"** that receives two input lists and returns another list with the union of the elements of the two received, without repetition of elements and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2]>>>> B = [-1, 1, 2, 10]>>>> union(A,B)> [-2, -1, 0, 1, 2, 10]```
###Code
def union(LI1, LI2):
LI1.extend(LI2)
LI3 = []
for v in LI1:
if v not in LI3:
LI3.append(v)
return sorted(LI3)
A = [-2, 0, 1, 2, 10, 11]
B = [-4, -2, -1, 1, 2, 10, 11]
union(A, B)
###Output
_____no_output_____
###Markdown
9) Generalize the **"intersection"** function so that it receives an indefinite number of lists and returns the intersection of all of them. Call the new function **intersection2**.
###Code
def intersection2(*args):
LIx = []
    for a in args[0]:
        # count how many of the other lists also contain this value
        ct = 0
        for b in range(1, len(args)):
            if a in args[b]:
                ct += 1
        # keep the value only when it appears in every other list
        if ct == (len(args) - 1):
            LIx.append(a)
LIy = []
for c in LIx:
if c not in LIy:
LIy.append(c)
return sorted(LIy)
A = [-3, 4, 5, 6, 7]
B = [-3, 3, 4, 5, 6, 7]
C = [-3, 9, 3, 5, 6, 7]
D = [-3, -9, 3, 4, 5, 6, 7]
intersection2(A, B, C, D)
###Output
_____no_output_____
###Markdown
Challenge 10) Create a function named **"matrix"** that implements matrix multiplication: Given the matrices: $A_{m\times n}=\left[\begin{matrix}a_{11}&a_{12}&...&a_{1n}\\a_{21}&a_{22}&...&a_{2n}\\\vdots &\vdots &&\vdots\\a_{m1}&a_{m2}&...&a_{mn}\\\end{matrix}\right]$ We will represent them as a list of lists. $A = [[a_{11},a_{12},...,a_{1n}],[a_{21},a_{22},...,a_{2n}], . . . ,[a_{m1},a_{m2},...,a_{mn}]]$ The **"matrix"** function must receive two matrices $A$ and $B$ in the specified format and return $A\times B$.
###Code
def matrix(M1, M2):
Am = len(M1)
An = len(M1[0])
    Bm = len(M2)
    Bn = len(M2[0])
C = [[] for x in range(Am)]
if An != Bm:
print('The multiplication of the two matrices is not possible')
else:
c = 0
d = 0
for a in range(Am):
for b in range(Bn):
for a1 in range(An):
c = M1[a][a1] * M2[a1][b]
d += c
C[a].append(d)
c = 0
d = 0
return C
A = [[3, 2, 1], [1, 0, 2]]
B = [[1, 2], [0, 1], [4, 0]]
matrix(A, B)
###Output
_____no_output_____
###Markdown
Second Assignment 1) Create a function called **"even_squared"** that receives an integer value **N**, and returns a list containing, in ascending order, the square of each of the even values, from 1 to N, including N if applicable.
###Code
def even_squared(n):
list = []
x = range(1, n + 1)
for num in x:
if (num % 2 == 0):
list.append(num * num)
return (print(list))
even_squared(8)
###Output
_____no_output_____
###Markdown
2) Using a while loop and the **input()** function, read an indefinite amount of **integers** until the number read is **-1**. After this process, print two lists on the screen: The first containing the even integers, and the second containing the odd integers. Both must be in ascending order.
###Code
inputs = int(input("Enter Number: "))
even = []
odd = []
while inputs != -1:
if (inputs % 2 == 0):
even.append(inputs)
else:
odd.append(inputs)
inputs = int(input("Enter Num: "))
even.sort()
odd.sort()
print(even, odd)
###Output
_____no_output_____
###Markdown
3) Create a function called **"even_account"** that receives a list of integers, counts the number of existing even elements, and returns this count.
###Code
def even_account(int_list):
even_numbers = []
for x in int_list:
if(x % 2 == 0):
even_numbers.append(x)
return(print(len(even_numbers)))
integers = [1, 2, 6, 9]
even_account(integers)
###Output
_____no_output_____
###Markdown
4) Create a function called **"squared_list"** that receives a list of integers and returns another list whose elements are the squares of the elements of the first.
###Code
def squared_list(integers):
squares = []
for x in integers:
x = x * x
squares.append(x)
return(print(squares))
integers = [1, 2, 6, 9]
squared_list(integers)
###Output
_____no_output_____
###Markdown
5) Create a function called **"descending"** that receives two lists of integers and returns a single list, which contains all the elements in descending order, and may include repeated elements.
###Code
def descending(list1, list2):
list3 = list1 + list2
list3.sort(reverse = True)
return(print(list3))
int_list1 = [4, 29, 2, 19]
int_list2 = [3, 7, 66, 3]
descending(int_list1, int_list2)
###Output
_____no_output_____
###Markdown
6) Create a function called **"adding"** that receives a list **A**, and an arbitrary number of integers as input. Return a new list containing the elements of **A** plus the integers passed as input, in the order in which they were given. Here is an example: >```python>>>> A = [10,20,30]>>>> adding(A, 4, 10, 50, 1)> [10, 20, 30, 4, 10, 50, 1]```
###Code
def adding(A, *int):
list = []
for x in int:
list.append(x)
return(print(A + list))
A = [10, 20, 30]
adding(A, 4, 10, 50, 1)
###Output
_____no_output_____
###Markdown
7) Create a function called **"intersection"** that receives two input lists and returns another list with the values that belong to the two lists simultaneously (intersection) without repetition of values and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2, 3]>>>> B = [-1, 2, 3, 6, 8]>>>> intersection(A,B)> [2, 3]```
###Code
def intersection(list1, list2):
    list3 = []
    for x in list1:
        # keep values found in both lists, without repetition
        if x in list2 and x not in list3:
            list3.append(x)
    return(print(sorted(list3)))
A = [-2, 0, 1, 2, 3]
B = [-1, 2, 3, 6, 8]
intersection(A, B)
###Output
_____no_output_____
###Markdown
8) Create a function called **"union"** that receives two input lists and returns another list with the union of the elements of the two received, without repetition of elements and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2]>>>> B = [-1, 1, 2, 10]>>>> union(A,B)> [-2, -1, 0, 1, 2, 10]```
###Code
def union(list1, list2):
list3 = []
for x in list1:
list3.append(x)
for x in list2:
if x not in list3:
list3.append(x)
list3.sort()
return(print(list3))
A = [-2, 0, 1, 2]
B = [-1, 1, 2, 10]
union(A, B)
###Output
_____no_output_____
###Markdown
9) Generalize the **"intersection"** function so that it receives an indefinite number of lists and returns the intersection of all of them. Call the new function **intersection2**.
###Code
def intersection2(*lists):
    # keep only the values from the first list that appear in every other list
    last = []
    for x in lists[0]:
        if x not in last and all(x in other for other in lists[1:]):
            last.append(x)
    return(print(sorted(last)))
A = [-2, 0, 1, 2, 3]
B = [-1, 2, 3, 6, 8]
intersection2(A, B)
###Output
[2, 3, 2]
###Markdown
Challenge 10) Create a function named **"matrix"** that implements matrix multiplication: Given the matrices: $A_{m\times n}=\left[\begin{matrix}a_{11}&a_{12}&...&a_{1n}\\a_{21}&a_{22}&...&a_{2n}\\\vdots &\vdots &&\vdots\\a_{m1}&a_{m2}&...&a_{mn}\\\end{matrix}\right]$ We will represent them as a list of lists. $A = [[a_{11},a_{12},...,a_{1n}],[a_{21},a_{22},...,a_{2n}], . . . ,[a_{m1},a_{m2},...,a_{mn}]]$ The **"matrix"** function must receive two matrices $A$ and $B$ in the specified format and return $A\times B$.
###Code
def matrix(A, B):
    # the product has len(A) rows and len(B[0]) columns
    C = [[0 for x in range(len(B[0]))] for y in range(len(A))]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                C[i][j] += A[i][k] * B[k][j]
    print(C)
###Output
[[114], [160], [60], [27]]
[[74], [97], [73], [14]]
[[119], [157], [112], [23]]
###Markdown
Second Assignment 1) Create a function called **"even_squared"** that receives an integer value **N**, and returns a list containing, in ascending order, the square of each of the even values, from 1 to N, including N if applicable.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
def even_squared(N):
my_list=[]
for i in range(1,N+1):
if (i%2)==0:
i = i**2
my_list.append(i)
else:
continue
return my_list
even_squared(8)
###Output
_____no_output_____
###Markdown
2) Using a while loop and the **input()** function, read an indefinite amount of **integers** until the number read is **-1**. After this process, print two lists on the screen: The first containing the even integers, and the second containing the odd integers. Both must be in ascending order.
###Code
def read_int_and_even_uneven():
number = 0
list1 = []
list2=[]
while number != -1:
number = int(input('Write an integer: '))
if (number%2)==0:
list1.append(number)
elif (number%2)!= 0:
list2.append(number)
return list1,list2
def bubble(liste):
count=1
while count>0:
count=0
for i in range(len(liste)-1):
a=liste[i]
b=liste[i+1]
if a>b:
liste[i]=b
liste[i+1]=a
count+=1
return liste
list1,list2=read_int_and_even_uneven()
list1=bubble(list1)
list2=bubble(list2)
print(list1,list2)
###Output
Write an integer: 34
Write an integer: 25
Write an integer: 78
Write an integer: 2
Write an integer: 1
Write an integer: 90
Write an integer: 135
Write an integer: 4
Write an integer: -2
Write an integer: -1
[-2, 2, 4, 34, 78, 90] [-1, 1, 25, 135]
###Markdown
3) Create a function called **"even_account"** that receives a list of integers, counts the number of existing even elements, and returns this count.
###Code
def even_account(my_list):
even=0
for num in my_list:
if num%2==0:
even += 1
print('Even numbers: ',even)
my_list=[3,7,23,4,5,19,206,87]
even_account(my_list)
###Output
Even numbers: 2
###Markdown
4) Create a function called **"squared_list"** that receives a list of integers and returns another list whose elements are the squares of the elements of the first.
###Code
def squared_list(square):
liste=[]
for i in range(len(square)):
liste.append(square[i]**2)
return liste
square=[2,3,4,56,7,88,9,32,12,-14]
squared_list(square)
###Output
_____no_output_____
###Markdown
5) Create a function called **"descending"** that receives two lists of integers and returns a single list, which contains all the elements in descending order, and may include repeated elements.
###Code
def bubble_inverse(liste):
x=1
while x>0:
x=0
for i in range(len(liste)-1):
a=liste[i]
b=liste[i+1]
if a<b:
liste[i]=b
liste[i+1]=a
x+=1
return liste
def descending(list3,list4):
list5= list3 + list4
list5_sort=bubble_inverse(list5)
print(list5_sort)
list3=[23,14,566,75,43,45,687,201]
list4=[345,302,546,654,659,90,24]
descending(list3,list4)
###Output
[687, 659, 654, 566, 546, 345, 302, 201, 90, 75, 45, 43, 24, 23, 14]
###Markdown
6) Create a function called **"adding"** that receives a list **A**, and an arbitrary number of integers as input. Return a new list containing the elements of **A** plus the integers passed as input, in the order in which they were given. Here is an example: >```python>>>> A = [10,20,30]>>>> adding(A, 4, 10, 50, 1)> [10, 20, 30, 4, 10, 50, 1]```
###Code
A=[10,20,30]
def adding(A,*arg):
C=A+list(arg)
return C
adding(A,1,23,5,78,90)
###Output
_____no_output_____
###Markdown
7) Create a function called **"intersection"** that receives two input lists and returns another list with the values that belong to the two lists simultaneously (intersection) without repetition of values and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2, 3]>>>> B = [-1, 2, 3, 6, 8]>>>> intersection(A,B)> [2, 3]```
###Code
def intersection(A,B):
C=[]
for i in range(len(A)):
for j in range(len(B)):
if A[i]==B[j]:
C.append(A[i])
D=[]
for c in C:
if c not in D:
D.append(c)
D_sorted=bubble(D)
return list(D_sorted)
A = [-2, 2, 1, 2, 3]
B = [-1, 2, 3, 6, 8]
intersection(A,B)
###Output
_____no_output_____
###Markdown
8) Create a function called **"union"** that receives two input lists and returns another list with the union of the elements of the two received, without repetition of elements and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2]>>>> B = [-1, 1, 2, 10]>>>> union(A,B)> [-2, -1, 0, 1, 2, 10]```
###Code
def union(A,B):
C=A+B
D=[]
for c in C:
if c not in D:
D.append(c)
D_sorted=bubble(D)
return D_sorted
A=[-2,0,1,2]
B=[-1,1,2,10]
union(A,B)
###Output
_____no_output_____
###Markdown
9) Generalize the **"intersection"** function so that it receives an indefinite number of lists and returns the intersection of all of them. Call the new function **intersection2**.
###Code
A=[-2,0,1,2]
B=[-1,1,2,10]
C=[4,2,1,10]
def intersection2(*arg):
C=arg[0]
for i in range(1,len(arg)):
C=intersection(C,arg[i])
return C
intersection2(A,B,C)
###Output
_____no_output_____
###Markdown
Challenge 10) Create a function named **"matrix"** that implements matrix multiplication: Given the matrices: $A_{m\times n}=\left[\begin{matrix}a_{11}&a_{12}&...&a_{1n}\\a_{21}&a_{22}&...&a_{2n}\\\vdots &\vdots &&\vdots\\a_{m1}&a_{m2}&...&a_{mn}\\\end{matrix}\right]$ We will represent them as a list of lists. $A = [[a_{11},a_{12},...,a_{1n}],[a_{21},a_{22},...,a_{2n}], . . . ,[a_{m1},a_{m2},...,a_{mn}]]$ The **"matrix"** function must receive two matrices $A$ and $B$ in the specified format and return $A\times B$.
###Code
A=[[1,2],[3,4],[1,2]]
B=[[5,6,1],[7,8,1]]
def matrix(A,B):
    if len(A[0])!=len(B):
        print("Error: Number of columns of first matrix not equal to number of rows of second matrix")
else:
C=[]
#initialize C
for m in range(len(A)):
C.append([])
for n in range(len(B[0])):
C[m].append(0)
        #Calculate values
for m in range(len(C)):
for n in range(len(C[m])):
for i in range(len(B)):
C[m][n]+=A[m][i]*B[i][n]
return C
matrix(A,B)
###Output
_____no_output_____
###Markdown
Second Assignment 1) Create a function called **"even_squared"** that receives an integer value **N**, and returns a list containing, in ascending order, the square of each of the even values, from 1 to N, including N if applicable.
###Code
import numpy as np
def even_squared(x):
z = np.array(range(1, (x+1)))
y = np.array(z % 2 == 0)
return z[y]**2
even_squared(10)
###Output
_____no_output_____
###Markdown
2) Using a while loop and the **input()** function, read an indefinite amount of **integers** until the number read is **-1**. After this process, print two lists on the screen: The first containing the even integers, and the second containing the odd integers. Both must be in ascending order.
###Code
def number(*x):
my_list = []
even_numbers = []
odd_numbers = []
for num in x:
my_list.append(num)
if num == -1:
break
print(my_list)
for i in my_list:
if i % 2 == 0:
even_numbers.append(i)
even_numbers.sort()
else:
odd_numbers.append(i)
odd_numbers.sort()
print('Your even numbers: ', even_numbers)
print('Your odd numbers: ', odd_numbers)
number(2,4,5,-1, 6, 7)
###Output
[2, 4, 5, -1]
Your even numbers: [2, 4]
Your odd numbers: [-1, 5]
###Markdown
3) Create a function called **"even_account"** that receives a list of integers, counts the number of existing even elements, and returns this count.
###Code
def even_account(x):
even_num = [num for num in x if num % 2 == 0]
return len(even_num)
even_account([1,2,3,4])
###Output
_____no_output_____
###Markdown
The same function with input from user:
###Code
def even_account2(x=1):
x = list(map(int, input("Enter integers: ").split()))
even_num = [num for num in x if num % 2 == 0]
print('Number of even numbers:', len(even_num))
even_account2()
###Output
Enter integers: 1 3
###Markdown
4) Create a function called **"squared_list"** that receives a list of integers and returns another list whose elements are the squares of the elements of the first.
###Code
def squared_list(*x):
squared_x = np.array(x) ** 2
return squared_x
squared_list(1, 2, 5, 6)
###Output
_____no_output_____
###Markdown
Without numpy but with loop:
###Code
def squared_list2(*x):
new_list = []
for i in x:
new_list.append(i**2)
return new_list
squared_list2(3,4,7,23,45)
###Output
_____no_output_____
###Markdown
5) Create a function called **"descending"** that receives two lists of integers and returns a single list, which contains all the elements in descending order, and may include repeated elements.
###Code
def descending(a,b):
my_list = a + b
my_list.sort(reverse=True)
return my_list
descending([1,2,3], [9,34,5])
def descending1():
list3 = (list(map(int, input("Enter 1st list: ").split())) + list(map(int, input("Enter 2nd list: ").split())))
list3.sort(reverse=True)
print(list3)
###Output
_____no_output_____
###Markdown
6) Create a function called **"adding"** that receives a list **A**, and an arbitrary number of integers as input. Return a new list containing the elements of **A** plus the integers passed as input, in the order in which they were given. Here is an example: >```python>>>> A = [10,20,30]>>>> adding(A, 4, 10, 50, 1)> [10, 20, 30, 4, 10, 50, 1]```
###Code
def adding(a, *args):
my_list = [args[0]]
other_num = args[1:]
for num in other_num:
my_list.append(num)
all = a + my_list
return all
adding([26,15],6,7,37,5453)
###Output
_____no_output_____
###Markdown
or even **simpler**:
###Code
def adding(a, *args):
my_list = a + list(args)
return my_list
adding(["a", 1, "sdf"], 3,4,6)
###Output
_____no_output_____
###Markdown
7) Create a function called **"intersection"** that receives two input lists and returns another list with the values that belong to the two lists simultaneously (intersection) without repetition of values and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2, 3]>>>> B = [-1, 2, 3, 6, 8]>>>> intersection(A,B)> [2, 3]```
###Code
def intersection(a,b):
common_n = []
for i in a:
if i in b:
common_n.append(i)
return sorted(common_n)
intersection([3,2,1], [3,1,5])
###Output
_____no_output_____
###Markdown
**OR**
###Code
def intersection1(a, b):
c = [num for num in a if num in b]
print('Common numbers in both lists:', sorted(c))
intersection1([1,2,3], [3,1,5])
###Output
Common numbers in both lists: [1, 3]
###Markdown
8) Create a function called **"union"** that receives two input lists and returns another list with the union of the elements of the two received, without repetition of elements and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2]>>>> B = [-1, 1, 2, 10]>>>> union(A,B)> [-2, -1, 0, 1, 2, 10]```
###Code
def union(x, y):
y = [num for num in y if not num in x]
return sorted(x+y)
union([1, 4, 5], [378,4,5])
###Output
_____no_output_____
###Markdown
9) Generalize the **"intersection"** function so that it receives an indefinite number of lists and returns the intersection of all of them. Call the new function **intersection2**.
###Code
def intersection2(*args):
lists = len(args)
common_n = args[0]
for i in range(lists):
common_n = intersection(common_n,args[i])
return common_n
intersection2([1,2,3], [2,3,6], [6,2,7])
###Output
_____no_output_____ |
examples/MNIST-example.ipynb | ###Markdown
Introduction to MyVision This is a library I made to combine everything I <3 about PyTorch. My goal is "Do more with less code". `MyVision` is a wrapper over PyTorch, so you must know PyTorch before working with it, and if you know PyTorch you can make any customizations yourself. Just have a look at the source code on GitHub. With that aside, let's start our example. It's the MNIST example, as you might have guessed already :P
###Code
# torch imports
import torch
import torchvision
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
# standard "Every ML/DL problem" imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
So, let me give you a brief overview of what `MyVision` offers: there are two important things at the heart of it. 1. Dataset 2. Trainer The former we will go through in another example. Here, we go through `Trainer`. So what is `Trainer`? Simply put, Trainer provides training and validation methods. Normally in PyTorch you have to write your own custom loop, which, let me tell you, gives you ultimate customization. But I wanted something like what Keras `.fit()` offers, so I decided to build it up. Trainer offers you this Keras-like `.fit()` magic. With the proper parameters you can simply `.fit()` and *boom!* training begins. So, let's import the specifics. Our `Trainer` is present in `MyVision.engine.Engine`
###Code
import MyVision
from MyVision.engine.Engine import Trainer
from MyVision.utils.ModelUtils import freeze_layers
from MyVision.utils.PlotUtils import show_batch
###Output
_____no_output_____
###Markdown
Below we just make the two DataLoaders because, as you know, in PyTorch the `DataLoader` is where the heavy lifting takes place. Our trainer expects these DataLoaders.
###Code
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=True, download=True, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=512, shuffle=True
)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=False, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=512, shuffle=True
)
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to data\MNIST\raw\train-images-idx3-ubyte.gz
###Markdown
Let's have a look at our batch using our `show_batch` function!
###Code
show_batch(
datasets.MNIST('data', train=True, download=True, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
classes=[0,1,2,3,4,5,6,7,8,9]
)
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
Next we do the usual stuff, i.e. define our `model`, `optimizer` & `loss`.
###Code
model = torchvision.models.resnet18(pretrained=True)
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
model.fc = torch.nn.Linear(in_features=model.fc.in_features, out_features=10)
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adadelta(model.parameters(), lr=0.01)
###Output
Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to C:\Users\Abhishek Swain/.cache\torch\hub\checkpoints\resnet18-5c106cde.pth
###Markdown
Below is the part we have been building up to, aka `Trainer`. Let's have a look at what functions the `Trainer` has. Run the cell below to see:
###Code
?Trainer.fit
###Output
_____no_output_____
###Markdown
You will see that our `Trainer` just takes in the usual stuff: 1. Training, Validation & Test (if specified) DataLoaders 2. device (either `cpu` or `cuda`) 3. loss 4. optimizer 5. model 6. learning rate scheduler (if you want) Whatever you don't want, just specify it as `None`. Finally, for the magic to begin, specify the number of epochs and the scheduler metric in `.fit()`. Now just run the cell below and we are off!
###Code
Trainer.fit(
train_loader=train_loader,
val_loader=test_loader,
device='cuda',
criterion=loss,
optimizer=optimizer,
model=model.to('cuda'),
lr_scheduler=None,
metric_name='accuracy',
epochs=5
)
###Output
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [01:21<00:00, 1.45it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:06<00:00, 3.26it/s]
[SAVING] to models\model-[13012021-225143].pt
+-------+------------+-----------------+----------+
| Epoch | Train loss | Validation loss | accuracy |
+-------+------------+-----------------+----------+
| 1 | 1.189 | 0.505 | 0.854 |
+-------+------------+-----------------+----------+
Epoch completed in: 1.468216605981191 mins
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [01:17<00:00, 1.53it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:06<00:00, 3.29it/s]
[SAVING] to models\model-[13012021-225306].pt
+-------+------------+-----------------+----------+
| Epoch | Train loss | Validation loss | accuracy |
+-------+------------+-----------------+----------+
| 1 | 1.189 | 0.505 | 0.854 |
| 2 | 0.36 | 0.263 | 0.925 |
+-------+------------+-----------------+----------+
Epoch completed in: 1.3956120530764262 mins
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [01:17<00:00, 1.53it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:06<00:00, 3.24it/s]
[SAVING] to models\model-[13012021-225430].pt
+-------+------------+-----------------+----------+
| Epoch | Train loss | Validation loss | accuracy |
+-------+------------+-----------------+----------+
| 1 | 1.189 | 0.505 | 0.854 |
| 2 | 0.36 | 0.263 | 0.925 |
| 3 | 0.205 | 0.194 | 0.942 |
+-------+------------+-----------------+----------+
Epoch completed in: 1.3945689757664999 mins
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [01:17<00:00, 1.52it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00, 3.34it/s]
[SAVING] to models\model-[13012021-225554].pt
+-------+------------+-----------------+----------+
| Epoch | Train loss | Validation loss | accuracy |
+-------+------------+-----------------+----------+
| 1 | 1.189 | 0.505 | 0.854 |
| 2 | 0.36 | 0.263 | 0.925 |
| 3 | 0.205 | 0.194 | 0.942 |
| 4 | 0.14 | 0.161 | 0.952 |
+-------+------------+-----------------+----------+
Epoch completed in: 1.3981768409411113 mins
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [01:17<00:00, 1.52it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00, 3.34it/s]
[SAVING] to models\model-[13012021-225718].pt
+-------+------------+-----------------+----------+
| Epoch | Train loss | Validation loss | accuracy |
+-------+------------+-----------------+----------+
| 1 | 1.189 | 0.505 | 0.854 |
| 2 | 0.36 | 0.263 | 0.925 |
| 3 | 0.205 | 0.194 | 0.942 |
| 4 | 0.14 | 0.161 | 0.952 |
| 5 | 0.104 | 0.138 | 0.958 |
+-------+------------+-----------------+----------+
Epoch completed in: 1.4024710655212402 mins
Training completed in 7.0590953707695006 mins
|
Proj2/.ipynb_checkpoints/first_goal-checkpoint.ipynb | ###Markdown
Unfortunately the dataset was too big, and our system could not allocate enough memory for the Support Vector Machine Classifier, so we did not take this classifier into account.
###Code
# imports for the models and metrics used below (presumably defined in earlier cells of the
# original notebook; added here so the cell can run on its own)
from sklearn import svm
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score, confusion_matrix, classification_report

def checkClf(name,clf):
clf.fit(X_train,y_train)
y_pred = clf.predict(X_test)
print("Model name: " + name)
#print("R2 score: {:.4f}".format(r2_score(y_test, y_pred)))
#print("Mean square error: {:.4f}".format(mean_squared_error(y_test, y_pred)))
#print('Precision: {:.4f}'.format(precision_score(y_test, y_pred, average='micro')))
#print('Recall: {:.4f}'.format(recall_score(y_test, y_pred, average='micro')))
#print('Accuracy: {:.4f}'.format(accuracy_score(y_test, y_pred)))
print('F-measure: {:.4f}'.format(f1_score(y_test, y_pred, average='micro')))
print('The accuracy of classifying is {:.3f} %'.format(clf.score(X_test, y_test)*100))
print('Confusion Matrix:')
print(confusion_matrix(y_test,y_pred))
print('Report:')
print(classification_report(y_test,y_pred))
    # best_params_ only exists on fitted search objects such as GridSearchCV
    if hasattr(clf, 'best_params_'):
        print(clf.best_params_)
print("\n")
checkClf('KNN',KNeighborsClassifier())
checkClf('SVC' , svm.SVC())
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
#definition of useful methods
def plot_learning_curve(estimator, title, X, y, axes=None, ylim=None, cv=None,
n_jobs=None, train_sizes=np.linspace(.1, 1.0, 5)):
if axes is None:
_, axes = plt.subplots(1, 3, figsize=(20, 5))
axes[0].set_title(title)
if ylim is not None:
axes[0].set_ylim(*ylim)
axes[0].set_xlabel("Training examples")
axes[0].set_ylabel("Score")
train_sizes, train_scores, test_scores, fit_times, _ = \
learning_curve(estimator, X, y, cv=cv, n_jobs=n_jobs,
train_sizes=train_sizes,
return_times=True)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
fit_times_mean = np.mean(fit_times, axis=1)
fit_times_std = np.std(fit_times, axis=1)
# Plot learning curve
axes[0].grid()
axes[0].fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
axes[0].fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1,
color="g")
axes[0].plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
axes[0].plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
axes[0].legend(loc="best")
# Plot n_samples vs fit_times
axes[1].grid()
axes[1].plot(train_sizes, fit_times_mean, 'o-')
axes[1].fill_between(train_sizes, fit_times_mean - fit_times_std,
fit_times_mean + fit_times_std, alpha=0.1)
axes[1].set_xlabel("Training examples")
axes[1].set_ylabel("fit_times")
axes[1].set_title("Scalability of the model")
# Plot fit_time vs score
axes[2].grid()
axes[2].plot(fit_times_mean, test_scores_mean, 'o-')
axes[2].fill_between(fit_times_mean, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1)
axes[2].set_xlabel("fit_times")
axes[2].set_ylabel("Score")
axes[2].set_title("Performance of the model")
return plt
fig, axes = plt.subplots(3, 2, figsize=(10, 15))
title = "Learning Curves (GaussianProcess)"
# Cross validation with 100 iterations to get smoother mean test and train
# score curves, each time with 20% data randomly selected as a validation set.
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
estimator = GaussianProcessClassifier(max_iter_predict = 50, n_restarts_optimizer = 0, warm_start = True)
plot_learning_curve(estimator, title, X, y, axes=axes[:, 0], ylim=(0.3, 0.8),
cv=cv, n_jobs=4)
title = r"Learning Curves (SVM, linear kernel)"
# SVC is more expensive so we do a lower number of CV iterations:
cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
estimator = svm.SVC(gamma = 'scale', kernel = 'rbf')
plot_learning_curve(estimator, title, X, y, axes=axes[:, 1], ylim=(0.3, 0.8),
cv=cv, n_jobs=4)
plt.show()
###Output
_____no_output_____ |
labs24_notebooks/bunches_and_gaps/B_And_G_With_GeoJson.ipynb | ###Markdown
Run Once
###Code
# Used in many places
import psycopg2 as pg
import pandas as pd
# Used to enter database credentials without saving them to the notebook file
import getpass
# Used to easily read in bus location data
import pandas.io.sql as sqlio
# Only used in the schedule class definition
import numpy as np
from scipy import stats
# Used in the fcc_projection function to find distances
from math import sqrt, cos
# Enter database credentials. Requires you to paste in the user and
# password so it isn't saved in the notebook file
print("Enter database username:")
user = getpass.getpass()
print("Enter database password:")
password = getpass.getpass()
creds = {
'user': user,
'password': password,
'host': "lambdalabs24sfmta.cykkiwxbfvpg.us-east-1.rds.amazonaws.com",
'dbname': "historicalTransitData"
}
# Set up connection to database
cnx = pg.connect(**creds)
cursor = cnx.cursor()
print('\nDatabase connection successful')
# Schedule class definition
# Copied from previous work, has extra methods that are not all used in this notebook
class Schedule:
def __init__(self, route_id, date, connection):
"""
The Schedule class loads the schedule for a particular route and day,
and makes several accessor methods available for it.
Parameters:
route_id (str or int)
- The route id to load
date (str or pandas.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
"""
self.route_id = str(route_id)
self.date = pd.to_datetime(date)
# load the schedule for that date and route
self.route_data = load_schedule(self.route_id, self.date, connection)
# process data into a table
self.inbound_table, self.outbound_table = extract_schedule_tables(self.route_data)
# calculate the common interval values
self.mean_interval, self.common_interval = get_common_intervals(
[self.inbound_table, self.outbound_table])
def list_stops(self):
"""
returns the list of all stops used by this schedule
"""
# get stops for both inbound and outbound routes
inbound = list(self.inbound_table.columns)
outbound = list(self.outbound_table.columns)
# convert to set to ensure no duplicates,
# then back to list for the correct output type
return list(set(inbound + outbound))
def get_specific_interval(self, stop, time, inbound=True):
"""
Returns the expected interval, in minutes, for a given stop and
time of day.
Parameters:
stop (str or int)
- the stop tag/id of the bus stop to check
time (str or pandas.Timestamp)
- the time of day to check, uses pandas.to_datetime to convert
- examples that work: "6:00", "3:30pm", "15:30"
inbound (bool, optional)
- whether to check the inbound or outbound schedule
- ignored unless the given stop is in both inbound and outbound
"""
# ensure correct parameter types
stop = str(stop)
time = pd.to_datetime(time)
# check which route to use, and extract the column for the given stop
if (stop in self.inbound_table.columns and
stop in self.outbound_table.columns):
# stop exists in both, use inbound parameter to decide
if inbound:
sched = self.inbound_table[stop]
else:
sched = self.outbound_table[stop]
elif (stop in self.inbound_table.columns):
# stop is in the inbound schedule, use that
sched = self.inbound_table[stop]
elif (stop in self.outbound_table.columns):
# stop is in the outbound schedule, use that
sched = self.outbound_table[stop]
else:
# stop doesn't exist in either, throw an error
raise ValueError(f"Stop id '{stop}' doesn't exist in either inbound or outbound schedules")
# 1: convert schedule to datetime for comparison statements
# 2: drop any NaN values
# 3: convert to list since pd.Series threw errors on i indexing
sched = list(pd.to_datetime(sched).dropna())
# reset the date portion of the time parameter to
# ensure we are checking the schedule correctly
time = time.replace(year=self.date.year, month=self.date.month,
day=self.date.day)
# iterate through that list to find where the time parameter fits
for i in range(1, len(sched)):
# start at 1 and move forward,
# is the time parameter before this schedule entry?
if(time < sched[i]):
# return the difference between this entry and the previous one
return (sched[i] - sched[i-1]).seconds / 60
# can only reach this point if the time parameter is after all entries
# in the schedule, return the last available interval
return (sched[len(sched)-1] - sched[len(sched)-2]).seconds / 60
def load_schedule(route, date, connection):
"""
loads schedule data from the database and returns it
Parameters:
route (str)
- The route id to load
date (str or pd.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
"""
# ensure correct parameter types
route = str(route)
date = pd.to_datetime(date)
# DB connection
cursor = connection.cursor()
# build selection query
query = """
SELECT content
FROM schedules
WHERE rid = %s AND
begin_date <= %s::TIMESTAMP AND
(end_date IS NULL OR end_date >= %s::TIMESTAMP);
"""
# execute query and save the route data to a local variable
cursor.execute(query, (route, str(date), str(date)))
data = cursor.fetchone()[0]['route']
# pd.Timestamp.dayofweek returns 0 for monday and 6 for Sunday
# the actual serviceClass strings are defined by Nextbus
# these are the only 3 service classes we can currently observe,
# if others are published later then this will need to change
if(date.dayofweek <= 4):
serviceClass = 'wkd'
elif(date.dayofweek == 5):
serviceClass = 'sat'
else:
serviceClass = 'sun'
# the schedule format has two entries for each serviceClass,
# one each for inbound and outbound.
# return each entry in the data list with the correct serviceClass
return [sched for sched in data if (sched['serviceClass'] == serviceClass)]
def extract_schedule_tables(route_data):
"""
converts raw schedule data to two pandas dataframes
columns are stops, and rows are individual trips
returns inbound_df, outbound_df
"""
# assuming 2 entries, but not assuming order
if(route_data[0]['direction'] == 'Inbound'):
inbound = 0
else:
inbound = 1
# extract a list of stops to act as columns
inbound_stops = [s['tag'] for s in route_data[inbound]['header']['stop']]
# initialize dataframe
inbound_df = pd.DataFrame(columns=inbound_stops)
# extract each row from the data
if type(route_data[inbound]['tr']) == list:
# if there are multiple trips in a day, structure will be a list
i = 0
for trip in route_data[inbound]['tr']:
for stop in trip['stop']:
# '--' indicates the bus is not going to that stop on this trip
if stop['content'] != '--':
inbound_df.at[i, stop['tag']] = stop['content']
# increment for the next row
i += 1
else:
# if there is only 1 trip in a day, the object is a dict and
# must be handled slightly differently
for stop in route_data[inbound]['tr']['stop']:
if stop['content'] != '--':
inbound_df.at[0, stop['tag']] = stop['content']
# flip between 0 and 1
outbound = int(not inbound)
# repeat steps for the outbound schedule
outbound_stops = [s['tag'] for s in route_data[outbound]['header']['stop']]
outbound_df = pd.DataFrame(columns=outbound_stops)
if type(route_data[outbound]['tr']) == list:
i = 0
for trip in route_data[outbound]['tr']:
for stop in trip['stop']:
if stop['content'] != '--':
outbound_df.at[i, stop['tag']] = stop['content']
i += 1
else:
for stop in route_data[outbound]['tr']['stop']:
if stop['content'] != '--':
outbound_df.at[0, stop['tag']] = stop['content']
# return both dataframes
return inbound_df, outbound_df
def get_common_intervals(df_list):
"""
takes route schedule tables and returns both the average interval (mean)
and the most common interval (mode), measured in number of minutes
takes a list of dataframes and combines them before calculating statistics
intended to combine inbound and outbound schedules for a single route
"""
# ensure we have at least one dataframe
if len(df_list) == 0:
raise ValueError("Function requires at least one dataframe")
# append all dataframes in the array together
df = df_list[0].copy()
for i in range(1, len(df_list)):
df.append(df_list[i].copy())
# convert all values to datetime so we can get an interval easily
for col in df.columns:
df[col] = pd.to_datetime(df[col])
# initialize a table to hold each individual interval
intervals = pd.DataFrame(columns=df.columns)
intervals['temp'] = range(len(df))
# take each column and find the intervals in it
for col in df.columns:
prev_time = np.nan
for i in range(len(df)):
# find the first non-null value and save it to prev_time
if pd.isnull(prev_time):
prev_time = df.at[i, col]
# if the current time is not null, save the interval
elif ~pd.isnull(df.at[i, col]):
intervals.at[i, col] = (df.at[i, col] - prev_time).seconds / 60
prev_time = df.at[i, col]
# this runs without adding a temp column, but the above loop runs 3x as
# fast if the rows already exist
intervals = intervals.drop('temp', axis=1)
# calculate the mean of the entire table
mean = intervals.mean().mean()
# calculate the mode of the entire table, the [0][0] at the end is
# because scipy.stats returns an entire ModeResult class
mode = stats.mode(intervals.values.flatten())[0][0]
return mean, mode
# Route class definition
# Copied from previous work, has extra methods that are not all used in this notebook
class Route:
def __init__(self, route_id, date, connection):
"""
The Route class loads the route configuration data for a particular
route, and makes several accessor methods available for it.
Parameters:
route_id (str or int)
- The route id to load
date (str or pandas.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
"""
self.route_id = str(route_id)
self.date = pd.to_datetime(date)
# load the route data
self.route_data, self.route_type, self.route_name = load_route(self.route_id, self.date, connection)
# extract stops info and rearrange columns to be more human readable
# note: the stop tag is what was used in the schedule data, not stopId
self.stops_table = pd.DataFrame(self.route_data['stop'])
self.stops_table = self.stops_table[['stopId', 'tag', 'title', 'lat', 'lon']]
# extract route path, list of (lat, lon) pairs
self.path_coords = extract_path(self.route_data)
# extract stops table
self.stops_table, self.inbound, self.outbound = extract_stops(self.route_data)
def load_route(route, date, connection):
"""
loads raw route data from the database
Parameters:
route (str or int)
- The route id to load
date (str or pd.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
Returns route_data (dict), route_type (str), route_name (str)
"""
# ensure correct parameter types
route = str(route)
date = pd.to_datetime(date)
# DB connection
cursor = connection.cursor()
# build selection query
query = """
SELECT route_name, route_type, content
FROM routes
WHERE rid = %s AND
begin_date <= %s::TIMESTAMP AND
(end_date IS NULL OR end_date > %s::TIMESTAMP);
"""
# execute query and return the route data
cursor.execute(query, (route, str(date), str(date)))
result = cursor.fetchone()
return result[2]['route'], result[1], result[0]
def extract_path(route_data):
"""
Extracts the list of path coordinates for a route.
The raw data stores this as an unordered list of sub-routes, so this
function deciphers the order they should go in and returns a single list.
"""
# KNOWN BUG
# this approach assumed all routes were either a line or a loop.
# routes that have multiple sub-paths meeting at a point break this,
# route 24 is a current example.
# I'm committing this now to get the rest of the code out there
# extract the list of subpaths as just (lat,lon) coordinates
# also converts from string to float (raw data has strings)
path = []
for sub_path in route_data['path']:
path.append([(float(p['lat']), float(p['lon']))
for p in sub_path['point']])
# start with the first element, remove it from path
final = path[0]
path.pop(0)
# loop until the first and last coordinates in final match
counter = len(path)
done = True
while final[0] != final[-1]:
# loop through the sub-paths that we haven't yet moved to final
for i in range(len(path)):
# check if the last coordinate in final matches the first
# coordinate of another sub-path
if final[-1] == path[i][0]:
# match found, move it to final
# leave out the first coordinate to avoid duplicates
final = final + path[i][1:]
path.pop(i)
break # break the for loop
# protection against infinite loops, if the path never closes
counter -= 1
if counter < 0:
done = False
break
if not done:
# route did not connect in a loop, perform same steps backwards
# to get the rest of the line
for _ in range(len(path)):
# loop through the sub-paths that we haven't yet moved to final
for i in range(len(path)):
# check if the first coordinate in final matches the last
# coordinate of another sub-path
if final[0] == path[i][-1]:
# match found, move it to final
# leave out the last coordinate to avoid duplicates
final = path[i][:-1] + final
path.pop(i)
break # break the for loop
# some routes may have un-used sub-paths
# Route 1 for example has two sub-paths that are almost identical, with the
# same start and end points
# if len(path) > 0:
# print(f"WARNING: {len(path)} unused sub-paths")
# return the final result
return final
def extract_stops(route_data):
"""
Extracts a dataframe of stops info
Returns the main stops dataframe, and a list of inbound and outbound stops
in the order they are intended to be on the route
"""
stops = pd.DataFrame(route_data['stop'])
directions = pd.DataFrame(route_data['direction'])
# Change stop arrays to just the list of numbers
for i in range(len(directions)):
directions.at[i, 'stop'] = [s['tag'] for s in directions.at[i, 'stop']]
# Find which stops are inbound or outbound
inbound = []
for stop_list in directions[directions['name'] == "Inbound"]['stop']:
for stop in stop_list:
if stop not in inbound:
inbound.append(stop)
outbound = []
for stop_list in directions[directions['name'] == "Outbound"]['stop']:
for stop in stop_list:
            if stop not in outbound:
outbound.append(stop)
# Label each stop as inbound or outbound
stops['direction'] = ['none'] * len(stops)
for i in range(len(stops)):
if stops.at[i, 'tag'] in inbound:
stops.at[i, 'direction'] = 'inbound'
elif stops.at[i, 'tag'] in outbound:
stops.at[i, 'direction'] = 'outbound'
# Convert from string to float
stops['lat'] = stops['lat'].astype(float)
stops['lon'] = stops['lon'].astype(float)
return stops, inbound, outbound
def get_location_data(rid, begin, end, connection):
# Build query to select location data
query = f"""
SELECT *
FROM locations
WHERE rid = '{rid}' AND
timestamp > '{begin}'::TIMESTAMP AND
timestamp < '{end}'::TIMESTAMP
ORDER BY id;
"""
# read the query directly into pandas
locations = sqlio.read_sql_query(query, connection)
    # Convert those UTC timestamps to local Pacific time (UTC-7, PDT) by subtracting 7 hours
locations['timestamp'] = locations['timestamp'] - pd.Timedelta(hours=7)
# return the result
return locations
# Written by Austie
def fcc_projection(loc1, loc2):
"""
function to apply FCC recommended formulae
for calculating distances on earth projected to a plane
significantly faster computationally, negligible loss in accuracy
Args:
loc1 - a tuple of lat/lon
loc2 - a tuple of lat/lon
"""
lat1, lat2 = loc1[0], loc2[0]
lon1, lon2 = loc1[1], loc2[1]
mean_lat = (lat1+lat2)/2
delta_lat = lat2 - lat1
delta_lon = lon2 - lon1
k1 = 111.13209 - 0.56605*cos(2*mean_lat) + .0012*cos(4*mean_lat)
k2 = 111.41513*cos(mean_lat) - 0.09455*cos(3*mean_lat) + 0.00012*cos(5*mean_lat)
distance = sqrt((k1*delta_lat)**2 + (k2*delta_lon)**2)
return distance
def clean_locations(locations, stops):
"""
takes a dataframe of bus locations and a dataframe of
returns the locations dataframe with nearest stop added
"""
# remove old location reports that would be duplicates
df = locations[locations['age'] < 60].copy()
# remove rows with no direction value
df = df[~pd.isna(df['direction'])]
# shift timestamps according to the age column
df['timestamp'] = df.apply(shift_timestamp, axis=1)
# Make lists of all inbound or outbound stops
inbound_stops = stops[stops['direction'] == 'inbound'].reset_index(drop=True)
outbound_stops = stops[stops['direction'] == 'outbound'].reset_index(drop=True)
# initialize new columns for efficiency
df['closestStop'] = [0] * len(df)
df['distance'] = [0.0] * len(df)
for i in df.index:
if '_I_' in df.at[i, 'direction']:
candidates = inbound_stops
elif '_O_' in df.at[i, 'direction']:
candidates = outbound_stops
else:
# Skip row if bus is not found to be either inbound or outbound
continue
bus_coord = (df.at[i, 'latitude'], df.at[i, 'longitude'])
# Find closest stop within candidates
# Assume the first stop
closest = candidates.iloc[0]
distance = fcc_projection(bus_coord, (closest['lat'], closest['lon']))
# Check each stop after that
for _, row in candidates[1:].iterrows():
# find distance to this stop
dist = fcc_projection(bus_coord, (row['lat'], row['lon']))
if dist < distance:
# closer stop found, save it
closest = row
distance = dist
# Save the tag of the closest stop and the distance to it
df.at[i, 'closestStop'] = closest['tag']
df.at[i, 'distance'] = distance
return df
def shift_timestamp(row):
""" subtracts row['age'] from row['timestamp'] """
return row['timestamp'] - pd.Timedelta(seconds=row['age'])
def get_stop_times(locations, route):
"""
returns a dict, keys are stop tags and values are lists of timestamps
that describe every time a bus was seen at that stop
"""
# Initialize the data structure I will store results in
stop_times = {}
vids = {}
for stop in route.inbound + route.outbound:
stop_times[str(stop)] = []
for vid in locations['vid'].unique():
# Process the route one vehicle at a time
df = locations[locations['vid'] == vid]
# process 1st row on its own
prev_row = df.loc[df.index[0]]
stop_times[str(prev_row['closestStop'])].append(prev_row['timestamp'])
# loop through the rest of the rows, comparing each to the previous one
for i, row in df[1:].iterrows():
if row['direction'] != prev_row['direction']:
# changed directions, don't compare to previous row
stop_times[str(row['closestStop'])].append(row['timestamp'])
else:
# same direction, compare to previous row
if '_I_' in row['direction']: # get correct stop list
stoplist = route.inbound
else:
stoplist = route.outbound
current = stoplist.index(str(row['closestStop']))
previous = stoplist.index(str(prev_row['closestStop']))
gap = current - previous
if gap > 1: # need to interpolate
diff = (row['timestamp'] - prev_row['timestamp'])/gap
counter = 1
for stop in stoplist[previous+1:current]:
# save interpolated time
stop_times[str(stop)].append(prev_row['timestamp'] + (counter * diff))
# increase counter for the next stop
# example: with 2 interpolated stops, gap would be 3
# 1st diff is 1/3, next is 2/3
counter += 1
if row['closestStop'] != prev_row['closestStop']:
# only save time if the stop has changed,
# otherwise the bus hasn't moved since last time
stop_times[str(row['closestStop'])].append(row['timestamp'])
# advance for next row
prev_row = row
# Sort each list before returning
for stop in stop_times.keys():
stop_times[stop].sort()
return stop_times
def get_bunches_gaps(stop_times, schedule, bunch_threshold=.2, gap_threshold=1.5):
"""
returns a dataframe of all bunches and gaps found
default thresholds define a bunch as 20% and a gap as 150% of scheduled headway
"""
# Initialize dataframe for the bunces and gaps
problems = pd.DataFrame(columns=['type', 'time', 'duration', 'stop'])
counter = 0
# Set the bunch/gap thresholds (in seconds)
bunch_threshold = (schedule.common_interval * 60) * bunch_threshold
gap_threshold = (schedule.common_interval * 60) * gap_threshold
for stop in stop_times.keys():
# ensure we have any times at all for this stop
if len(stop_times[stop]) == 0:
#print(f"Stop {stop} had no recorded times")
continue # go to next stop in the loop
# save initial time
prev_time = stop_times[stop][0]
# loop through all others, comparing to the previous one
for time in stop_times[stop][1:]:
diff = (time - prev_time).seconds
if diff <= bunch_threshold:
# bunch found, save it
problems.at[counter] = ['bunch', prev_time, diff, stop]
counter += 1
elif diff >= gap_threshold:
problems.at[counter] = ['gap', prev_time, diff, stop]
counter += 1
prev_time = time
return problems
# this uses sequential search, could speed up with binary search if needed,
# but it currently uses hardly any time in comparison to other steps
def helper_count(expected_times, observed_times):
""" Returns the number of on-time stops found """
# set up early/late thresholds (in seconds)
early_threshold = pd.Timedelta(seconds=1*60) # 1 minute early
late_threshold = pd.Timedelta(seconds=4*60) # 4 minutes late
count = 0
for stop in expected_times.columns:
for expected in expected_times[stop]:
if pd.isna(expected):
continue # skip NaN values in the expected schedule
# for each expected time...
# find first observed time after the early threshold
found_time = None
early = expected - early_threshold
# BUG: some schedule data may have stop tags that are not in the inbound
# or outbound definitions for a route. That would throw a key error here.
# Example: stop 14148 on route 24
# current solution is to ignore those stops with the try/except statement
try:
for observed in observed_times[stop]:
if observed >= early:
found_time = observed
break
except:
continue
# if found time is still None, then all observed times were too early
# if found_time is before the late threshold then we were on time
if (not pd.isna(found_time)) and found_time <= (expected + late_threshold):
# found_time is within the on-time window
count += 1
return count
def calculate_ontime(stop_times, schedule):
""" Returns the on-time percentage and total scheduled stops for this route """
# Save schedules with timestamp data types, set date to match
inbound_times = schedule.inbound_table
for col in inbound_times.columns:
inbound_times[col] = pd.to_datetime(inbound_times[col]).apply(
lambda dt: dt.replace(year=schedule.date.year,
month=schedule.date.month,
day=schedule.date.day))
outbound_times = schedule.outbound_table
for col in outbound_times.columns:
outbound_times[col] = pd.to_datetime(outbound_times[col]).apply(
lambda dt: dt.replace(year=schedule.date.year,
month=schedule.date.month,
day=schedule.date.day))
# count times for both inbound and outbound schedules
on_time_count = (helper_count(inbound_times, stop_times) +
helper_count(outbound_times, stop_times))
# get total expected count
total_expected = inbound_times.count().sum() + outbound_times.count().sum()
# return on-time percentage
return (on_time_count / total_expected), total_expected
def bunch_gap_graph(problems, interval=10):
"""
returns data for a graph of the bunches and gaps throughout the day
problems - the dataframe of bunches and gaps
interval - the number of minutes to bin data into
returns
{
"times": [time values (x)],
"bunches": [bunch counts (y1)],
"gaps": [gap counts (y2)]
}
"""
# set the time interval
interval = pd.Timedelta(minutes=interval)
# rest of code doesn't work if there are no bunches or gaps
# return the empty graph manually
if len(problems) == 0:
# generate list of times according to the interval
start = pd.Timestamp('today').replace(hour=0, minute=0, second=0)
t = start
times = []
while t.day == start.day:
times.append(str(t.time())[:5])
t += interval
return {
"times": times,
"bunches": [0] * len(times),
"gaps": [0] * len(times)
}
# generate the DatetimeIndex needed
index = pd.DatetimeIndex(problems['time'])
df = problems.copy()
df.index = index
# lists for graph data
bunches = []
gaps = []
times = []
# set selection times
start_date = problems.at[0, 'time'].replace(hour=0, minute=0, second=0)
select_start = start_date
select_end = select_start + interval
while select_start.day == start_date.day:
# get the count of each type of problem in this time interval
count = df.between_time(select_start.time(), select_end.time())['type'].value_counts()
# append the counts to the data list
if 'bunch' in count.index:
bunches.append(int(count['bunch']))
else:
bunches.append(0)
if 'gap' in count.index:
gaps.append(int(count['gap']))
else:
gaps.append(0)
# save the start time for the x axis
times.append(str(select_start.time())[:5])
# increment the selection window
select_start += interval
select_end += interval
return {
"times": times,
"bunches": bunches,
"gaps": gaps
}
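# Illustrative usage sketch (not executed here): `problems` is the bunches/gaps
# DataFrame built by get_bunches_gaps further below, so a typical call looks like:
# graph = bunch_gap_graph(problems, interval=10)
# graph['times'], graph['bunches'] and graph['gaps'] then feed the line chart directly.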
###Output
_____no_output_____
###Markdown
Necessary for geojson
###Code
def create_simple_geojson(bunches, rid):
geojson = {'type': 'FeatureCollection',
'bunches': create_geojson_features(bunches, rid)}
return geojson
def create_geojson_features(df, rid):
"""
function to generate list of geojson features
for plotting vehicle locations on timestamped map
Expects a dataframe containing lat/lon, vid, timestamp
returns list of basic geojson formatted features:
{
type: Feature
geometry: {
type: Point,
coordinates:[lat, lon]
},
properties: {
route_id: rid
time: timestamp
}
}
"""
# initializing empty features list
features = []
# iterating through df to pull coords, vid, timestamp
# and format for json
for index, row in df.iterrows():
feature = {
'type': 'Feature',
'geometry': {
'type':'Point',
'coordinates':[row.lon, row.lat]
},
'properties': {
'time': row.time.__str__(),
'stop': {'stopId': row.stopId.__str__(),
'stopTitle': row.title.__str__()},
'direction': row.direction.__str__()
}
}
features.append(feature) # adding point to features list
return features
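# Illustrative only: the dict returned by create_simple_geojson serializes straight
# to JSON for the front end (bunch_df and rid are built later, inside generate_report):
# import json
# geojson_str = json.dumps(create_simple_geojson(bunch_df, rid))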
###Output
_____no_output_____
###Markdown
Generating report JSON Now updated to include geojson for mapping. Tested geojson generation with: - single report generation - all routes generation - aggregate generation. Tested mapping bunches with the generated geojson in Folium. Everything should plug-and-play.
###Code
def generate_report(rid, date):
"""
Generates a daily report for the given rid and date
rid : (str)
the route id to generate a report for
date : (str or pd.Datetime)
the date to generate a report for
returns a dict of the report info
"""
# get begin and end timestamps for the date
begin = pd.to_datetime(date).replace(hour=7)
end = begin + pd.Timedelta(days=1)
# Load schedule and route data
schedule = Schedule(rid, begin, cnx)
route = Route(rid, begin, cnx)
# Load bus location data
locations = get_location_data(rid, begin, end, cnx)
# Apply cleaning function (this usually takes 1-2 minutes)
locations = clean_locations(locations, route.stops_table)
# Calculate all times a bus was at each stop
stop_times = get_stop_times(locations, route)
# Find all bunches and gaps
problems = get_bunches_gaps(stop_times, schedule)
# Calculate on-time percentage
on_time, total_scheduled = calculate_ontime(stop_times, schedule)
# Build result dict
count_times = 0
for key in stop_times.keys():
count_times += len(stop_times[key])
# Number of recorded intervals ( sum(len(each list of time)) - number or lists of times)
intervals = count_times-len(stop_times)
bunches = len(problems[problems['type'] == 'bunch'])
gaps = len(problems[problems['type'] == 'gap'])
coverage = (total_scheduled * on_time + bunches) / total_scheduled
# Isolating bunches, merging with stops to assign locations to bunches
stops = route.stops_table.copy()
bunch_df = problems[problems.type.eq('bunch')]
bunch_df = bunch_df.merge(stops, left_on='stop', right_on='tag', how='left')
# Creating GeoJSON of bunch times / locations
geojson = create_simple_geojson(bunch_df, rid)
# int/float conversions are because the json library doesn't work with numpy types
result = {
'route': rid,
'route_name': route.route_name,
'route_type': route.route_type,
'date': str(pd.to_datetime(date)),
'num_bunches': bunches,
'num_gaps': gaps,
'total_intervals': intervals,
'on_time_percentage': float(round(on_time * 100, 2)),
'scheduled_stops': int(total_scheduled),
'coverage': float(round(coverage * 100, 2)),
# line_chart contains all data needed to generate the line chart
'line_chart': bunch_gap_graph(problems, interval=10),
# route_table is an array of all rows that should show up in the table
# it will be filled in after all reports are generated
'route_table': [
{
'route_name': route.route_name,
'bunches': bunches,
'gaps': gaps,
'on-time': float(round(on_time * 100, 2)),
'coverage': float(round(coverage * 100, 2))
}
],
'geojson': geojson
}
return result
%%time
report_1 = generate_report(rid='1', date='2020/6/1')
report_1['geojson']['bunches'][:5]
report_714 = generate_report(rid='714', date='2020/6/1')
report_714['geojson']['bunches']
###Output
_____no_output_____
###Markdown
Generating report for all routes
###Code
def get_active_routes(date):
"""
returns a list of all active route id's for the given date
"""
query = """
SELECT DISTINCT rid
FROM routes
WHERE begin_date <= %s ::TIMESTAMP AND
(end_date IS NULL OR end_date > %s ::TIMESTAMP);
"""
cursor.execute(query, (date, date))
return [result[0] for result in cursor.fetchall()]
%%time
# since this is not optimized yet, this takes about 20 minutes
# choose a day
date = '2020-6-1'
# get all active routes
route_ids = get_active_routes(date)
# get the report for all routes
all_reports = []
for rid in route_ids:
try:
all_reports.append(generate_report(rid, date))
print("Generated report for route", rid)
except: # in case any particular route throws an error
print(f"Route {rid} failed")
len(all_reports)
# generate aggregate reports
# read existing reports into a dataframe to work with them easily
df = pd.DataFrame(all_reports)
# for each aggregate type
types = list(df['route_type'].unique()) + ['All']
for t in types:
# filter df to the routes we are adding up
if t == 'All':
filtered = df
else:
filtered = df[df['route_type'] == t]
# on-time percentage: sum([all on-time stops]) / sum([all scheduled stops])
count_on_time = (filtered['on_time_percentage'] * filtered['scheduled_stops']).sum()
on_time_perc = count_on_time / filtered['scheduled_stops'].sum()
# coverage: (sum([all on-time stops]) + sum([all bunches])) / sum([all scheduled stops])
coverage = (count_on_time + filtered['num_bunches'].sum()) / filtered['scheduled_stops'].sum()
# aggregate the graph object
# x-axis is same for all
first = filtered.index[0]
times = filtered.at[first, 'line_chart']['times']
# sum up all y-axis values
bunches = pd.Series(filtered.at[first, 'line_chart']['bunches'])
gaps = pd.Series(filtered.at[first, 'line_chart']['gaps'])
for chart in filtered[1:]['line_chart']:
bunches += pd.Series(chart['bunches'])
gaps += pd.Series(chart['gaps'])
# save a new report object
new_report = {
'route': t,
'route_name': t,
'route_type': t,
'date': all_reports[0]['date'],
'num_bunches': int(filtered['num_bunches'].sum()),
'num_gaps': int(filtered['num_gaps'].sum()),
'total_intervals': int(filtered['total_intervals'].sum()),
'on_time_percentage': float(round(on_time_perc, 2)),
'scheduled_stops': int(filtered['scheduled_stops'].sum()),
'coverage': float(round(coverage, 2)),
'line_chart': {
'times': times,
'bunches': list(bunches),
'gaps': list(gaps)
},
'route_table': [
{
'route_name': t,
'bunches': int(filtered['num_bunches'].sum()),
'gaps': int(filtered['num_gaps'].sum()),
'on-time': float(round(on_time_perc, 2)),
'coverage': float(round(coverage, 2))
}
]
}
# TODO: add route_table rows to the aggregate report
all_reports.append(new_report)
all_reports[0].keys()
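# Illustrative only: persist the finished reports so they can be served without
# re-running the whole pipeline (the file name below is just a placeholder):
# import json
# with open('daily_reports_2020-06-01.json', 'w') as f: json.dump(all_reports, f)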
###Output
_____no_output_____ |
Chapter07/Notebooks/Training/CNNSketchClassifier_2_Training.ipynb | ###Markdown
Sketch Classifier for "How Do Humans Sketch Objects?" A sketch classifier using the dataset from the paper How Do Humans Sketch Objects? where the authors collected 20,000 unique sketches evenly distributed over 250 object categories - we will use a CNN (using Keras) to classify a sketch.
###Code
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
from scipy.misc import imresize
import os
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use('ggplot')
import keras
keras.__version__
from keras import layers
from keras import models
from keras import optimizers
from keras import callbacks
from keras import Input
from keras.utils import plot_model
from keras import preprocessing
from keras.preprocessing import image
###Output
_____no_output_____
###Markdown
Trained on Floydhub
###Code
DEST_SKETCH_DIR = '/sketches_training_data/'
TARGET_SIZE = (256,256)
CATEGORIES_COUNT = 205
TRAINING_SAMPLES = 12736
VALIDATION_SAMPLES = 3184
!ls /sketches_training_data
###Output
training validation
###Markdown
Create model
###Code
def plot_accuracy_loss(history):
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
def train(model,
training_dir,
validation_dir,
target_size=TARGET_SIZE,
training_samples=TRAINING_SAMPLES,
validation_samples=VALIDATION_SAMPLES,
epochs=1000,
batch_size=512,
load_previous_weights=True,
model_weights_file=None):
"""
"""
if model_weights_file is None:
raise("No model weights file set")
print("Training STARTED - target size {}, batch size {}".format(
target_size,
batch_size))
if model_weights_file is not None and os.path.isfile(model_weights_file) and load_previous_weights:
print("Loading weights from file {}".format(model_weights_file))
model.load_weights(model_weights_file)
model.compile(
loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
# create data generator
# check the official documentation for more details: https://keras.io/preprocessing/image/
datagen = preprocessing.image.ImageDataGenerator(
rescale=1./255., # rescaling factor applied by multiplying the data by this value
width_shift_range=0.1, # ranges (as a fraction of total width) to randomly translate pictures
height_shift_range=0.1, # ranges (as a fraction of total height) to randomly translate pictures
zoom_range=0.1, # randomly zooming inside pictures
horizontal_flip=True, # randomly flipping half of the images horizontally
fill_mode='nearest') # strategy used for filling in newly created pixels
if model.layers[0].input_shape[0] == target_size[0] and model.layers[0].input_shape[1] == target_size[1]:
target_size = None
# create an iterator for the training data
train_generator = datagen.flow_from_directory(
training_dir,
target_size=target_size,
batch_size=batch_size,
color_mode='grayscale')
# create an iterator for the validation data
validation_generator = datagen.flow_from_directory(
validation_dir,
target_size=target_size,
batch_size=batch_size,
color_mode='grayscale')
checkpoint = callbacks.ModelCheckpoint(model_weights_file,
monitor='val_loss',
verbose=0,
save_best_only=True,
save_weights_only=True,
mode='auto',
period=2)
early_stopping = callbacks.EarlyStopping(monitor='val_loss', patience=10)
data_augmentation_multiplier = 1.5
history = model.fit_generator(
train_generator,
steps_per_epoch=int((training_samples/batch_size) * data_augmentation_multiplier),
epochs=epochs,
validation_data=validation_generator,
validation_steps=int((validation_samples/batch_size) * data_augmentation_multiplier),
callbacks=[checkpoint, early_stopping])
print("Training FINISHED - target size {}, batch size {}".format(
target_size,
batch_size))
return history, model
def create_model(input_shape=(256,256,1), classes=CATEGORIES_COUNT, is_training=True):
"""
Create a CNN model
"""
model = models.Sequential()
model.add(layers.Conv2D(16, kernel_size=(7,7), strides=(3,3),
padding='same', activation='relu', input_shape=input_shape))
model.add(layers.MaxPooling2D(2,2))
model.add(layers.Conv2D(32, kernel_size=(5,5), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(2,2))
model.add(layers.Conv2D(64, (5,5), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(2,2))
if is_training:
model.add(layers.Dropout(0.125))
model.add(layers.Conv2D(128, (5,5), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(2,2))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu', name='dense_2_512'))
if is_training:
model.add(layers.Dropout(0.5))
model.add(layers.Dense(classes, activation='softmax', name='output'))
return model
model = create_model()
model.summary()
history, model = train(model,
training_dir=os.path.join(DEST_SKETCH_DIR, 'training'),
validation_dir=os.path.join(DEST_SKETCH_DIR, 'validation'),
target_size=(256,256),
epochs=1000,
batch_size=512,
model_weights_file="/output/cnn_sketch_weights_2.h5",
load_previous_weights=True)
plot_accuracy_loss(history)
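# Illustrative follow-up (assumes the training run above produced the checkpoint file):
# rebuild the architecture, load the best weights saved by ModelCheckpoint, and use
# the model for inference on new sketches.
# best_model = create_model()
# best_model.load_weights("/output/cnn_sketch_weights_2.h5")
# predictions = best_model.predict(batch_of_sketches)  # shape: (n, CATEGORIES_COUNT)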
###Output
_____no_output_____
###Markdown
---
###Code
def train(model,
training_dir,
validation_dir,
target_size=TARGET_SIZE,
training_samples=TRAINING_SAMPLES,
validation_samples=VALIDATION_SAMPLES,
epochs=1000,
batch_size=512,
load_previous_weights=True,
model_weights_file=None):
"""
"""
if model_weights_file is None:
raise("No model weights file set")
print("Training STARTED - target size {}, batch size {}".format(
target_size,
batch_size))
if model_weights_file is not None and os.path.isfile(model_weights_file) and load_previous_weights:
print("Loading weights from file {}".format(model_weights_file))
model.load_weights(model_weights_file)
model.compile(
loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
# create data generator
# check the official documentation for more details: https://keras.io/preprocessing/image/
datagen = preprocessing.image.ImageDataGenerator(
rescale=1./255., # rescaling factor applied by multiplying the data by this value
rotation_range=5, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.2, # ranges (as a fraction of total width) to randomly translate pictures
height_shift_range=0.2, # ranges (as a fraction of total height) to randomly translate pictures
horizontal_flip=True, # randomly flipping half of the images horizontally
fill_mode='nearest') # strategy used for filling in newly created pixels
if model.layers[0].input_shape[0] == target_size[0] and model.layers[0].input_shape[1] == target_size[1]:
target_size = None
# create an iterator for the training data
train_generator = datagen.flow_from_directory(
training_dir,
shuffle = True,
target_size=target_size,
batch_size=batch_size,
color_mode='grayscale',
class_mode='categorical')
# create an iterator for the validation data
validation_generator = datagen.flow_from_directory(
validation_dir,
shuffle = True,
target_size=target_size,
batch_size=batch_size,
color_mode='grayscale',
class_mode='categorical')
checkpoint = callbacks.ModelCheckpoint(model_weights_file,
monitor='val_loss',
verbose=0,
save_best_only=True,
save_weights_only=True,
mode='auto',
period=2)
early_stopping = callbacks.EarlyStopping(monitor='val_loss', patience=5)
data_augmentation_multiplier = 1.5
history = model.fit_generator(
train_generator,
steps_per_epoch=int((training_samples/batch_size) * data_augmentation_multiplier),
epochs=epochs,
validation_data=validation_generator,
validation_steps=int((validation_samples/batch_size) * data_augmentation_multiplier),
callbacks=[checkpoint, early_stopping])
print("Training FINISHED - target size {}, batch size {}".format(
target_size,
batch_size))
return history, model
def create_model(input_shape=(256,256,1), classes=CATEGORIES_COUNT, is_training=True):
"""
Create a CNN model
"""
model = models.Sequential()
model.add(layers.Conv2D(16, kernel_size=(7,7), strides=(3,3),
padding='same', activation='relu', input_shape=input_shape))
model.add(layers.MaxPooling2D(2,2))
model.add(layers.Conv2D(32, kernel_size=(5,5), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(2,2))
model.add(layers.Conv2D(64, (5,5), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(2,2))
if is_training:
model.add(layers.Dropout(0.125))
model.add(layers.Conv2D(128, (5,5), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(2,2))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
if is_training:
model.add(layers.Dropout(0.5))
model.add(layers.Dense(classes, activation='softmax'))
return model
model = create_model()
model.summary()
history, model = train(model,
training_dir=os.path.join(DEST_SKETCH_DIR, 'training'),
validation_dir=os.path.join(DEST_SKETCH_DIR, 'validation'),
target_size=(256,256),
epochs=1000,
batch_size=256,
model_weights_file="/output/cnn_sketch_weights_9.h5",
load_previous_weights=True)
plot_accuracy_loss(history)
###Output
_____no_output_____
###Markdown
---
###Code
def create_model(input_shape=(256,256,1), classes=CATEGORIES_COUNT, is_training=True):
"""
Create a CNN model
"""
input_tensor = Input(shape=input_shape)
# layer 1
layer1_conv_1 = layers.Conv2D(64, kernel_size=(15, 15), strides=(3,3), activation='relu')(input_tensor)
layer1_pool_1 = layers.MaxPooling2D(pool_size=(3,3), strides=(2,2))(layer1_conv_1)
# layer 2
layer2_conv_1 = layers.Conv2D(128, kernel_size=(5,5), strides=(1,1), activation='relu')(layer1_pool_1)
layer2_pool_1 = layers.MaxPooling2D(pool_size=(3,3), strides=(2,2))(layer2_conv_1)
# layer 3
layer3_conv_1 = layers.Conv2D(256, kernel_size=(5,5), strides=(1,1), activation='relu')(layer2_pool_1)
layer3_pool_1 = layers.MaxPooling2D(pool_size=(3,3), strides=(2,2))(layer3_conv_1)
# tower A
sparse_conv_a1 = layers.Conv2D(48, kernel_size=(1,1))(layer3_pool_1)
sparse_conv_a2 = layers.Conv2D(64, kernel_size=(3,3))(sparse_conv_a1)
# tower B
sparse_pool_b1 = layers.AveragePooling2D(pool_size=(3,3), strides=(1,1))(layer3_pool_1)
sparse_conv_b2 = layers.Conv2D(64, kernel_size=(1,1))(sparse_pool_b1)
# tower C
sparse_conv_c1 = layers.Conv2D(64, kernel_size=(3,3))(layer3_pool_1)
merge_layer = layers.concatenate([sparse_conv_a2, sparse_conv_b2, sparse_conv_c1], axis=-1)
layer5_pool_1 = layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(merge_layer)
flat = layers.Flatten()(layer5_pool_1)
fc1 = layers.Dense(256, activation='relu')(flat)
if is_training:
dr = layers.Dropout(0.5)(fc1)
fc2 = layers.Dense(CATEGORIES_COUNT, activation='sigmoid')(dr)
model = models.Model(input_tensor,fc2)
else:
fc2 = layers.Dense(CATEGORIES_COUNT, activation='sigmoid')(fc1)
model = models.Model(input_tensor,fc2)
return model
model = create_model()
model.summary()
history, model = train(model,
training_dir=os.path.join(DEST_SKETCH_DIR, 'training'),
validation_dir=os.path.join(DEST_SKETCH_DIR, 'validation'),
target_size=(256,256),
epochs=1000,
batch_size=300,
model_weights_file="/output/cnn_sketch_weights_10.h5",
load_previous_weights=True)
plot_accuracy_loss(history)
###Output
_____no_output_____ |
Fruit360/Fruits360_With_Pytorch.ipynb | ###Markdown
Classification of Fruits And Vegetables
###Code
# !pip install kaggle
# !mkdir .kaggle
# import json
# token = {"username":"ivyclare","key":"17ee8bd3b41486d62e7eb9257bd812d4"}
# with open('/content/.kaggle/kaggle.json', 'w') as file:
# json.dump(token, file)
# !chmod 600 /content/.kaggle/kaggle.json
# !cp /content/.kaggle/kaggle.json ~/.kaggle/kaggle.json
# !kaggle config set -n path -v{/content}
# #!kaggle datasets download -d shayanfazeli/heartbeat -p /content
# !kaggle datasets download -d moltean/fruits -p /content
# !unzip \*.zip
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
import torch
from torchvision import datasets,transforms,models
from torch import nn,optim
import torch.nn.functional as F
from torch.utils.data import *
import time
import json
import copy
import os
import glob
from PIL import Image
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
Load And Visualize Data The image data for this competition is too large to fit in memory in kernels. This kernel demonstrates how to access individual images in the zip archives without having to extract them or load the archive into memory.
###Code
#Now we load images and labels from folder into pytorch tensor
data_dir = 'fruits-360'
train_dir = 'fruits-360/Training'
test_dir = 'fruits-360/Test'
batch_size = 32
# Tansform with data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
'train': transforms.Compose([
transforms.RandomRotation(30),
#transforms.RandomResizedCrop(224),
transforms.RandomResizedCrop(299),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'valid': transforms.Compose([
transforms.Resize(256),
#transforms.CenterCrop(224),
transforms.CenterCrop(299),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'test': transforms.Compose([
transforms.Resize(256),
#transforms.CenterCrop(224),
transforms.CenterCrop(299),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],[0.229, 0.224, 0.225])
]),
}
dataset = datasets.ImageFolder(train_dir,transform=data_transforms['train'])
# splitting our data
valid_size = int(0.2 * len(dataset))
train_size = len(dataset) - valid_size
dataset_sizes = {'train': train_size, 'valid': valid_size}
# now we get our datasets
train_dataset, valid_dataset = torch.utils.data.random_split(dataset, [train_size, valid_size])
test_dataset = datasets.ImageFolder(test_dir,transform=data_transforms['test'])
# Loading datasets into dataloader
dataloaders = {'train': DataLoader(train_dataset, batch_size = batch_size, shuffle = True),
'valid': DataLoader(valid_dataset, batch_size = batch_size, shuffle = False),
'test': DataLoader(test_dataset, batch_size = batch_size, shuffle = False)}
print("Total Number of Samples: ",len(dataset))
print("Number of Samples in Train: ",len(train_dataset))
print("Number of Samples in Valid: ",len(valid_dataset))
print("Number of Samples in Test: ",len(test_dataset))
print("Number of Classes: ",len(dataset.classes))
## Method to display Image for Tensor
def imshow(image, ax=None, title=None, normalize=True):
"""Imshow for Tensor."""
if ax is None:
fig, ax = plt.subplots()
image = image.numpy().transpose((1, 2, 0))
if normalize:
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = std * image + mean
image = np.clip(image, 0, 1)
ax.imshow(image)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.tick_params(axis='both', length=0)
ax.set_xticklabels('')
ax.set_yticklabels('')
return ax
# Displaying Training Images
images, labels = next(iter(dataloaders['train']))
fig, axes = plt.subplots(figsize=(16,5), ncols=5)
for ii in range(5):
ax = axes[ii]
#ax.set_title(label_map[class_names[labels[ii].item()]])
imshow(images[ii], ax=ax, normalize=True)
###Output
_____no_output_____
###Markdown
Building MLP Network Transfer Learning Load Pretrained Model
###Code
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name = 'inception' #vgg
if model_name == 'densenet':
model = models.densenet161(pretrained=True)
num_in_features = 2208
print(model)
elif model_name == 'vgg':
model = models.vgg19(pretrained=True)
num_in_features = 25088
print(model.classifier)
elif model_name == 'resnet':
model = models.resnet152(pretrained=True)
#model = models.resnet34(pretrained=True)
num_in_features = 2048 #512
print(model.fc)
elif model_name == 'inception':
model = models.inception_v3(pretrained=True)
model.aux_logits=False
num_in_features = 2048
print(model.fc)
else:
print("Unknown model, please choose 'densenet' or 'vgg'")
###Output
Downloading: "https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth" to /root/.cache/torch/checkpoints/inception_v3_google-1a9a5a14.pth
100%|██████████| 108857766/108857766 [00:01<00:00, 103328537.86it/s]
###Markdown
Freeze Parameters and Build Classifier
###Code
#Freezing model parameters and defining the fully connected network to be attached to the model, loss function and the optimizer.
#We there after put the model on the GPUs
for param in model.parameters():
param.requires_grad = False
# Create Custom Classifier
def build_classifier(num_in_features, hidden_layers, num_out_features):
classifier = nn.Sequential()
if hidden_layers == None:
classifier.add_module('fc0', nn.Linear(num_in_features, 196))
else:
layer_sizes = zip(hidden_layers[:-1], hidden_layers[1:])
classifier.add_module('fc0', nn.Linear(num_in_features, hidden_layers[0]))
classifier.add_module('relu0', nn.ReLU())
classifier.add_module('drop0', nn.Dropout(.6))
# classifier.add_module('relu1', nn.ReLU())
# classifier.add_module('drop1', nn.Dropout(.5))
for i, (h1, h2) in enumerate(layer_sizes):
classifier.add_module('fc'+str(i+1), nn.Linear(h1, h2))
classifier.add_module('relu'+str(i+1), nn.ReLU())
classifier.add_module('drop'+str(i+1), nn.Dropout(.5))
classifier.add_module('output', nn.Linear(hidden_layers[-1], num_out_features))
return classifier
hidden_layers = None #[1050 , 500] #[4096, 1024] #None#[4096, 1024, 256][512, 256, 128] [1050 , 500]
classifier = build_classifier(num_in_features, hidden_layers, 196)
print(classifier)
# Defining model hyperparameters
if model_name == 'densenet':
model.classifier = classifier
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adadelta(model.parameters()) # Adadelta #weight optim.Adam(model.parameters(), lr=0.001, momentum=0.9)
#optimizer_conv = optim.SGD(model.parameters(), lr=0.0001, weight_decay=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 4 epochs
sched = optim.lr_scheduler.StepLR(optimizer, step_size=4)
elif model_name == 'vgg':
model.classifier = classifier
#criterion = nn.NLLLoss()
criterion = nn.CrossEntropyLoss()
#optimizer = optim.SGD(model.parameters(), lr=0.0001,weight_decay=0.001, momentum=0.9)
optimizer = optim.Adam(model.classifier.parameters(), lr=0.0001)
sched = optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.1)
elif model_name == 'resnet':
model.fc = classifier
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
sched = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', patience=3, threshold = 0.9)
#sched = optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.1)
# #criterion = nn.NLLLoss()
# optimizer = optim.Adam(model.fc.parameters(), lr= 0.00001)
# sched = optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.1)
#criterion = nn.CrossEntropyLoss()
elif model_name == 'inception':
model.fc = classifier
criterion = nn.CrossEntropyLoss()
#optimizer = optim.Adadelta(model.parameters()) # Adadelta #weight optim.Adam(model.parameters(), lr=0.001, momentum=0.9)
optimizer = optim.SGD(model.parameters(), lr=0.001,momentum=0.9)
sched = optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.1)
else:
pass
###Output
Sequential(
(fc0): Linear(in_features=2048, out_features=196, bias=True)
)
###Markdown
Training The Model
###Code
def train_model(model, criterion, optimizer, sched, num_epochs=5,device='cuda'):
start = time.time()
train_results = []
valid_results = []
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch+1, num_epochs))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'valid']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
#sched.step()
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
# calculate average time over an epoch
#elapshed_epoch = time.time() - start/
#print('Epoch {}/{} - completed in: {:.0f}m {:.0f}s'.format(epoch+1, num_epochs,elapshed_epoch // 60, elapshed_epoch % 60))
if(phase == 'train'):
train_results.append([epoch_loss,epoch_acc])
if(phase == 'valid'):
#sched.step(epoch_acc)
valid_results.append([epoch_loss,epoch_acc])
print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
# deep copy the model (Early Stopping) and Saving our model, when we get best accuracy
if phase == 'valid' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
#print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(valid_loss_min,valid_loss))
#model_save_name = "ResNetDeepFlowers.pt"
model_save_name = "FruitInception.pt"
path = F"/content/drive/My Drive/{model_save_name}"
torch.save(model.state_dict(), path)
print()
time_elapsed = time.time() - start
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
#load best model weights
model.load_state_dict(best_model_wts)
return model,train_results,valid_results
#Resnet34 = 68.8%, 50 epochs, vggDeepFlowers
epochs = 20
model.to(device)
model,train_results,valid_results = train_model(model, criterion, optimizer, sched, epochs)
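# Illustrative extra step: persist a full checkpoint (weights + optimizer state)
# alongside the best-weights file already saved inside train_model above.
# The file name below is a placeholder.
checkpoint = {'model_state_dict': model.state_dict(), 'optimizer_state_dict': optimizer.state_dict()}
torch.save(checkpoint, 'fruit_inception_checkpoint.pt')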
###Output
Epoch 1/20
----------
train Loss: 1.7560 Acc: 0.6375
valid Loss: 0.2047 Acc: 0.9479
Epoch 2/20
----------
train Loss: 0.2959 Acc: 0.9265
valid Loss: 0.0793 Acc: 0.9775
Epoch 3/20
----------
###Markdown
Check for Overfitting
###Code
# Plot of Losses
train_results = np.array(train_results)
valid_results = np.array(valid_results)
#print(train_results)
#print(valid_results[:,0])
plt.plot(train_results[:,0])
plt.plot(valid_results[:,0])
plt.legend(['Train Loss', 'Valid Loss'])
plt.xlabel('Epoch Number')
plt.ylabel('Loss')
plt.ylim(0,1)
plt.show()
#Plot of Accuracies
plt.plot(train_results[:,1])
plt.plot(valid_results[:,1])
plt.legend(['Train Accuracy', 'Valid Accuracy'])
plt.xlabel('Epoch Number')
plt.ylabel('Accuracy')
plt.ylim(0,1)
plt.show()
###Output
_____no_output_____
###Markdown
Load Saved Model
###Code
from google.colab import drive
drive.mount('/content/drive')
model.load_state_dict(torch.load('/content/drive/My Drive/Colab Notebooks/Kaggle/Inception.pt'))
model.to(device)
###Output
_____no_output_____
###Markdown
Testing The Model Test Model Per Class
###Code
def test_per_class(model, test_loader, criterion, classes):
total_class = len(classes)
test_loss = 0.0
class_correct = list(0. for i in range(total_class))
class_total = list(0. for i in range(total_class))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.eval() # prep model for evaluation
for data, target in test_loader:
# Move input and label tensors to the default device
data,target = data.to(device), target.to(device)
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item() * data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct = np.squeeze(pred.eq(target.data.view_as(pred)))
# calculate test accuracy for each object class
for i in range(len(target)):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# calculate and print avg test loss
test_loss = test_loss / len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(total_class):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
str(i), 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
# dataloaders['train']
# len(dataset.classes)
# model,train_results,valid_results = train_model(model, criterion, optimizer, sched, epochs)
test_per_class(model, dataloaders['test'], criterion, dataset.classes)
###Output
Test Loss: 0.020984
Test Accuracy of 0: 100% (159/159)
Test Accuracy of 1: 100% (144/144)
Test Accuracy of 2: 100% (159/159)
Test Accuracy of 3: 100% (158/158)
Test Accuracy of 4: 87% (136/156)
Test Accuracy of 5: 100% (159/159)
Test Accuracy of 6: 100% (148/148)
Test Accuracy of 7: 100% (158/158)
Test Accuracy of 8: 96% (153/159)
Test Accuracy of 9: 100% (140/140)
Test Accuracy of 10: 100% (161/161)
Test Accuracy of 11: 92% (147/159)
Test Accuracy of 12: 100% (212/212)
Test Accuracy of 13: 100% (159/159)
Test Accuracy of 14: 100% (138/138)
Test Accuracy of 15: 100% (161/161)
Test Accuracy of 16: 97% (157/161)
Test Accuracy of 17: 100% (147/147)
Test Accuracy of 18: 97% (157/161)
Test Accuracy of 19: 100% (149/149)
Test Accuracy of 20: 100% (161/161)
Test Accuracy of 21: 100% (159/159)
Test Accuracy of 22: 100% (159/159)
Test Accuracy of 23: 100% (160/160)
Test Accuracy of 24: 100% (159/159)
Test Accuracy of 25: 100% (239/239)
Test Accuracy of 26: 100% (238/238)
Test Accuracy of 27: 100% (159/159)
Test Accuracy of 28: 100% (159/159)
Test Accuracy of 29: 100% (158/158)
Test Accuracy of 30: 100% (149/149)
Test Accuracy of 31: 100% (160/160)
Test Accuracy of 32: 100% (161/161)
Test Accuracy of 33: 100% (161/161)
Test Accuracy of 34: 100% (96/96)
Test Accuracy of 35: 100% (161/161)
Test Accuracy of 36: 100% (318/318)
Test Accuracy of 37: 100% (158/158)
Test Accuracy of 38: 100% (161/161)
Test Accuracy of 39: 100% (161/161)
Test Accuracy of 40: 100% (159/159)
Test Accuracy of 41: 100% (153/153)
Test Accuracy of 42: 100% (161/161)
Test Accuracy of 43: 100% (159/159)
Test Accuracy of 44: 100% (161/161)
Test Accuracy of 45: 100% (152/152)
Test Accuracy of 46: 100% (160/160)
Test Accuracy of 47: 100% (161/161)
Test Accuracy of 48: 100% (151/151)
Test Accuracy of 49: 100% (152/152)
Test Accuracy of 50: 100% (161/161)
Test Accuracy of 51: 100% (159/159)
Test Accuracy of 52: 100% (161/161)
Test Accuracy of 53: 100% (161/161)
Test Accuracy of 54: 100% (161/161)
Test Accuracy of 55: 100% (160/160)
Test Accuracy of 56: 100% (161/161)
Test Accuracy of 57: 100% (138/138)
Test Accuracy of 58: 100% (99/99)
Test Accuracy of 59: 100% (160/160)
Test Accuracy of 60: 100% (239/239)
Test Accuracy of 61: 98% (156/159)
Test Accuracy of 62: 99% (157/158)
Test Accuracy of 63: 100% (155/155)
Test Accuracy of 64: 100% (212/212)
Test Accuracy of 65: 100% (172/172)
Test Accuracy of 66: 100% (145/145)
Test Accuracy of 67: 100% (151/151)
Test Accuracy of 68: 100% (141/141)
Test Accuracy of 69: 100% (155/155)
Test Accuracy of 70: 100% (159/159)
Test Accuracy of 71: 100% (161/161)
Test Accuracy of 72: 100% (159/159)
Test Accuracy of 73: 100% (238/238)
Test Accuracy of 74: 100% (159/159)
Test Accuracy of 75: 100% (159/159)
Test Accuracy of 76: 100% (160/160)
Test Accuracy of 77: 100% (99/99)
Test Accuracy of 78: 100% (161/161)
Test Accuracy of 79: 100% (215/215)
Test Accuracy of 80: 100% (161/161)
Test Accuracy of 81: 100% (161/161)
Test Accuracy of 82: 100% (143/143)
Test Accuracy of 83: 100% (215/215)
Test Accuracy of 84: 100% (215/215)
Test Accuracy of 85: 100% (159/159)
Test Accuracy of 86: 100% (159/159)
Test Accuracy of 87: 100% (161/161)
Test Accuracy of 88: 100% (158/158)
Test Accuracy of 89: 100% (161/161)
Test Accuracy of 90: 100% (146/146)
Test Accuracy of 91: 100% (137/137)
Test Accuracy of 92: 100% (295/295)
Test Accuracy of 93: 100% (159/159)
Test Accuracy of 94: 100% (148/148)
Test Accuracy of 95: 100% (146/146)
Test Accuracy of 96: 100% (146/146)
Test Accuracy of 97: 100% (160/160)
Test Accuracy of 98: 100% (159/159)
Test Accuracy of 99: 100% (161/161)
Test Accuracy of 100: 100% (159/159)
Test Accuracy of 101: 100% (157/157)
Test Accuracy of 102: 100% (159/159)
Test Accuracy of 103: 98% (235/238)
Test Accuracy of 104: 100% (161/161)
Test Accuracy of 105: 100% (161/161)
Test Accuracy of 106: 97% (232/238)
Test Accuracy of 107: 100% (218/218)
Test Accuracy of 108: 100% (238/238)
Test Accuracy of 109: 100% (155/155)
Test Accuracy of 110: 100% (159/159)
Test Accuracy of 111: 100% (123/123)
Test Accuracy of 112: 100% (148/148)
Test Accuracy of 113: 100% (241/241)
Test Accuracy (Overall): 99% (18878/18937)
###Markdown
Test With Single Image
###Code
def test_with_single_image(model, file, transform, classes):
file = Image.open(file).convert('RGB')
img = transform(file).unsqueeze(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
with torch.no_grad():
out = model(img.to(device))
ps = torch.exp(out)
top_p, top_class = ps.topk(1, dim=1)
value = top_class.item()
print("Value:", value)
print(classes[value])
plt.imshow(np.array(file))
plt.show()
test_with_single_image(model, 'fruits-360/Test/Apple Golden 3/311_100.jpg', data_transforms['test'], dataset.classes)
###Output
_____no_output_____ |
tests/maxnet/softmax-regression-gluon.ipynb | ###Markdown
Multiclass logistic regression with ``gluon``Now that we've built a [logistic regression model from scratch](./softmax-regression-scratch.ipynb), let's make this more efficient with ``gluon``. If you completed the corresponding chapters on linear regression, you might be tempted rest your eyes a little in this one. We'll be using ``gluon`` in a rather similar way and since the interface is reasonably well designed, you won't have to do much work. To keep you awake we'll introduce a few subtle tricks. Let's start by importing the standard packages.
###Code
#load watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer,seaborn,keras,tflearn,bokeh,gensim
from __future__ import print_function
import mxnet as mx
from mxnet import nd, autograd
from mxnet import gluon
import numpy as np
###Output
_____no_output_____
###Markdown
Set the contextNow, let's set the context. In the linear regression tutorial we did all of our computation on the cpu (`mx.cpu()`) just to keep things simple. When you've got 2-dimensional data and scalar labels, a smartwatch can probably handle the job. Already, in this tutorial we'll be working with a considerably larger dataset. If you happen to be running this code on a server with a GPU and installed the GPU-enabled version of MXNet (or remembered to build MXNet with ``CUDA=1``), you might want to substitute the following line for its commented-out counterpart.
###Code
data_ctx = mx.cpu()
model_ctx = mx.cpu()
# model_ctx = mx.gpu()
###Output
_____no_output_____
###Markdown
The MNIST DatasetWe won't suck up too much wind describing the MNIST dataset for a second time. If you're unfamiliar with the dataset and are reading these chapters out of sequence, take a look at the data section in the previous chapter on [softmax regression from scratch](./softmax-regression-scratch.ipynb).We'll load up data iterators corresponding to the training and test splits of MNIST dataset.
###Code
batch_size = 64
num_inputs = 784
num_outputs = 10
num_examples = 60000
def transform(data, label):
return data.astype(np.float32)/255, label.astype(np.float32)
train_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=True, transform=transform),
batch_size, shuffle=True)
test_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=transform),
batch_size, shuffle=False)
###Output
/srv/conda/lib/python3.6/site-packages/mxnet/gluon/data/vision/datasets.py:82: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
label = np.fromstring(fin.read(), dtype=np.uint8).astype(np.int32)
/srv/conda/lib/python3.6/site-packages/mxnet/gluon/data/vision/datasets.py:86: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
data = np.fromstring(fin.read(), dtype=np.uint8)
###Markdown
We're also going to want to load up an iterator with *test* data. After we train on the training dataset we're going to want to test our model on the test data. Otherwise, for all we know, our model could be doing something stupid (or treacherous?) like memorizing the training examples and regurgitating the labels on command. Multiclass Logistic RegressionNow we're going to define our model. Remember from [our tutorial on linear regression with ``gluon``](./P02-C02-linear-regression-gluon)that we add ``Dense`` layers by calling ``net.add(gluon.nn.Dense(num_outputs))``. This leaves the parameter shapes under-specified, but ``gluon`` will infer the desired shapes the first time we pass real data through the network.
###Code
net = gluon.nn.Dense(num_outputs)
###Output
_____no_output_____
###Markdown
Parameter initializationAs before, we're going to register an initializer for our parameters. Remember that ``gluon`` doesn't even know what shape the parameters have because we never specified the input dimension. The parameters will get initialized during the first call to the forward method.
###Code
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
###Output
_____no_output_____
###Markdown
Softmax Cross Entropy Loss Note, we didn't have to include the softmax layer because MXNet has an efficient function that simultaneously computes the softmax activation and cross-entropy loss. However, if you ever need the output probabilities, you can still compute them by applying softmax to the network's output yourself (see the short sketch in the next cell).
###Code
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
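# Illustrative sketch: to get explicit class probabilities, apply softmax to the
# network output yourself (`data` stands for any batch already reshaped to (-1, 784)):
# probabilities = nd.softmax(net(data))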
###Output
_____no_output_____
###Markdown
OptimizerAnd let's instantiate an optimizer to make our updates
###Code
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
###Output
_____no_output_____
###Markdown
Evaluation MetricThis time, let's simplify the evaluation code by relying on MXNet's built-in ``metric`` package.
###Code
def evaluate_accuracy(data_iterator, net):
acc = mx.metric.Accuracy()
for i, (data, label) in enumerate(data_iterator):
data = data.as_in_context(model_ctx).reshape((-1,784))
label = label.as_in_context(model_ctx)
output = net(data)
predictions = nd.argmax(output, axis=1)
acc.update(preds=predictions, labels=label)
return acc.get()[1]
###Output
_____no_output_____
###Markdown
Because we initialized our model randomly, and because roughly one tenth of all examples belong to each of the ten classes, we should have an accuracy in the ball park of .10.
###Code
evaluate_accuracy(test_data, net)
###Output
_____no_output_____
###Markdown
Execute training loop
###Code
epochs = 10
moving_loss = 0.
for e in range(epochs):
cumulative_loss = 0
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(model_ctx).reshape((-1,784))
label = label.as_in_context(model_ctx)
with autograd.record():
output = net(data)
loss = softmax_cross_entropy(output, label)
loss.backward()
trainer.step(batch_size)
cumulative_loss += nd.sum(loss).asscalar()
test_accuracy = evaluate_accuracy(test_data, net)
train_accuracy = evaluate_accuracy(train_data, net)
print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" % (e, cumulative_loss/num_examples, train_accuracy, test_accuracy))
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
import matplotlib.pyplot as plt
def model_predict(net,data):
output = net(data.as_in_context(model_ctx))
return nd.argmax(output, axis=1)
# let's sample 10 random data points from the test set
sample_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=transform),
10, shuffle=True)
for i, (data, label) in enumerate(sample_data):
data = data.as_in_context(model_ctx)
print(data.shape)
im = nd.transpose(data,(1,0,2,3))
im = nd.reshape(im,(28,10*28,1))
imtiles = nd.tile(im, (1,1,3))
plt.imshow(imtiles.asnumpy())
plt.show()
pred=model_predict(net,data.reshape((-1,784)))
print('model predictions are:', pred)
break
###Output
_____no_output_____ |
Chapter9_note.ipynb | ###Markdown
Chapter 9 Compound patterns A compound pattern simply means combining several patterns. When you write real programs, your use case is almost never so clean that a single pattern solves it; you will more or less always combine two or more patterns at once. The example most often discussed is the Model View Controller (MVC) pattern. In practice there are many interpretations of MVC and everyone has their own version, but the core is basically: - model represents the data and the operations on it - view renders the user interface; it holds no other business logic and basically just takes the data and turns it into the interface - controller processes the data it receives and passes it to the appropriate place. The main purpose of doing this is to separate presentation logic from data-handling logic. The vast majority of today's web frameworks use an MVC architecture, although each differs from the others to some degree.
###Code
class Model:
def logic(self):
data = 'songla'
print('Model: Crunching data as per business logic')
return data
class View:
def update(self, data):
print('View: updating the view with the result')
class Controller:
def __init__(self):
self.model = Model()
self.view = View()
def interface(self):
print("Controller: Relayed the Client asks")
data = self.model.logic()
self.view.update(data)
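# Minimal client sketch: the client only talks to the controller, which delegates
# to the model for data and then asks the view to render the result.
controller = Controller()
controller.interface()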
###Output
_____no_output_____ |
liam_notebook.ipynb | ###Markdown
Obv Fake Trump’s Chief Strategist Steve Bannon Compares Himself To Darth Vader And Satan Fake News not so obv 44425 Patrick Henningsen and Don DeBar Discuss Trump’s ‘Immigration Ban’ and the Media Reaction 24316 House Republicans Begin Process To Withdraw America From The United Nations 27636 Report: Hillary Clinton To Raise 1 Billion To Unseat GOP Reps. And Senators 44675 Russian Military: US Coalition Predator Drone Spotted at Time & Place of Syria UN Aid Convoy Attack 38367 HOW THE FBI Cracked A Terror Plot On Black Friday That May Have Been Worse Than 9-11 Obv Real News 15345 NATO to send more troops to Afghanistan after U.S. shift 11544 Explosion outside Athens court shatters windows, no injuries Not so Obv Real 3538 McMaster says 'of course' Trump supports NATO Article 5 21246 'Gates of Hell': Iraqi army says fighting near Tal Afar worse than Mosul
###Code
pd.options.display.max_colwidth = 1000
df[df.is_fake == False].title.sample(n=10)
df.clean_text.dtype
df = df.dropna()
df.isnull().count()
df.isna().count()
df.shape
def _show_counts_and_ratios(df, column):
"""
This function takes in a df and a column name.
It produces the value counts for each label and the percentage of the data each represents.
"""
fof = pd.concat([df[column].value_counts(),
df[column].value_counts(normalize=True)], axis=1)
fof.columns = ['n', 'percent']
return fof
_show_counts_and_ratios(df, 'is_fake')
def _generate_list_for_clean_text(df):
all_clean_text = " ".join(df.clean_text)
return re.sub(r"[^\w\s]", "", all_clean_text).split()
all_articles = _generate_list_for_clean_text(df)[0:10]
all_articles
df.info()
fake_words = (' '.join(df[df.is_fake == True].clean_text))
real_words = (' '.join(df[df.is_fake == False].clean_text))
all_words = (' '.join(df.clean_text))
fake_words = re.sub(r"[^\w\s]", "", fake_words).split()
real_words = re.sub(r"[^\w\s]", "", real_words).split()
all_words = re.sub(r"[^\w\s]", "", all_words).split()
###Output
_____no_output_____
###Markdown
Top Ten Words for fake-real-all
###Code
fake_freq = pd.Series(fake_words).value_counts()
fake_freq.head(10)
real_freq = pd.Series(real_words).value_counts()
real_freq.head(10)
all_freq = pd.Series(all_words).value_counts()
all_freq.head(10)
###Output
_____no_output_____
###Markdown
Takeaways - The top words for fake news articles are: trump, said, people, president, one. - The top words for real news articles are: said, trump, u, state, would. - The top words for all news articles are: said, trump, u, state, would
###Code
word_counts = (pd.concat([all_freq, fake_freq, real_freq], axis=1, sort=True)
.set_axis(['all', 'fake', 'real'], axis=1, inplace=False)
.fillna(0)
.apply(lambda s: s.astype(int)))
word_counts.tail(10)
word_counts.sort_values(by='all', ascending=False).head(50)
###Output
_____no_output_____
###Markdown
Visualizations
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
# figure out the percentage of fake vs real news
def _percentFakevsReal(word_counts):
(word_counts
.assign(p_fake=word_counts.fake / word_counts['all'],
p_real=word_counts.real / word_counts['all'])
.sort_values(by='all')
[['p_fake', 'p_real']]
.tail(20)
.sort_values('p_real')
.plot.barh(stacked=True))
plt.title('Proportion of Fake vs Real news for the 20 most common words')
_percentFakevsReal(word_counts)
###Output
_____no_output_____
###Markdown
Takeaways - We found that the most common fake news words were time, one, donald, people, clinton
###Code
def _wordcounts_all(word_counts):
word_counts_all = (word_counts
[(word_counts.fake > 10) & (word_counts['all'] > 10)]
.assign(ratio=lambda df: df.fake / (df['all'] + .01))
.sort_values(by='all', ascending = False)
.pipe(lambda df: pd.concat([df.head(), df.head(20)])))
return word_counts_all
word_counts_all = _wordcounts_all(word_counts)
word_counts_all
###Output
_____no_output_____
###Markdown
- 50 percent of all articles had trump as a keyword; 54 percent of these instances are fake. - The highest number of instances is associated with the word said, at 118359 instances. - The word said only occurred in 19% of fake articles, while 81% were associated with real news articles. - The second highest number of instances is associated with the word/name trump, at 115797 instances, with over 53% appearing in fake-news-related articles.
###Code
def _wordcount_fake(word_counts):
word_counts_fake = (word_counts
[(word_counts.fake > 10) & (word_counts.real > 10)]
.assign(ratio=lambda df: df.fake / (df.real + .01))
.sort_values(by='ratio', ascending = False)
.pipe(lambda df: pd.concat([df.head(), df.head(20)])))
return word_counts_fake
word_counts_fake = _wordcount_fake(word_counts)
word_counts_fake
###Output
_____no_output_____
###Markdown
- Fake news articles tend to have words with negative connotations such as bigoted, disgusting, pathetic, insane, and idiot. -
###Code
def _wordcount_real(word_counts):
word_counts_real = (word_counts
[(word_counts.fake > 10) & (word_counts.real > 10)]
.assign(ratio=lambda df: df.real / (df.fake + .01))
.sort_values(by='ratio', ascending = False)
.pipe(lambda df: df.head(20)))
return word_counts_real
word_counts_real = _wordcount_real(word_counts)
word_counts_real
###Output
_____no_output_____
###Markdown
- Real news articles use words that are more centered around world events and places. - Ankara is the capital of Turkey. - sdf may refer to the Syrian Democratic Forces or a type of geospatial file.
###Code
import numpy as np
df.corr()
matrix = np.triu(df.corr())
sns.heatmap(df.corr(), annot = True, fmt='.1g', mask = matrix)
###Output
_____no_output_____
###Markdown
Wordclouds for Real-Fake-Combined
###Code
from wordcloud import WordCloud
def _word_clouds_rfa(all_words, fake_words, real_words):
all_cloud = WordCloud(background_color='black', height=1000, width=400, colormap="Blues").generate(' '.join(all_words))
fake_cloud = WordCloud(background_color='black', height=600, width=800, colormap="Blues").generate(' '.join(fake_words))
real_cloud = WordCloud(background_color='black', height=600, width=800, colormap="Blues").generate(' '.join(real_words))
plt.figure(figsize=(10, 8))
axs = [plt.axes([0, 0, .5, 1]), plt.axes([.5, .5, .5, .5]), plt.axes([.5, 0, .5, .5])]
axs[0].imshow(all_cloud)
axs[1].imshow(fake_cloud)
axs[2].imshow(real_cloud)
axs[0].set_title('All Words')
axs[1].set_title('Fake')
axs[2].set_title('Real')
for ax in axs: ax.axis('off')
return _word_clouds_rfa
_word_clouds_rfa(all_words, fake_words, real_words)
###Output
_____no_output_____
###Markdown
Takeaways - Said and donald trump are the top two words in the fake news related articles. - This could be related to some quotes that were inferred by the press. (I'd like to look into this deeper and actually compare whether these statements match what Trump actually said.)
###Code
import numpy as np
from PIL import Image
###Output
_____no_output_____
###Markdown
Bigrams (Fake - Real)
###Code
top_20_fake_bigrams = (pd.Series(nltk.ngrams(fake_words, 2))
.value_counts()
.head(20))
top_20_fake_bigrams.head()
def _fake_bigrams(fake_words):
top_20_fake_bigrams = (pd.Series(nltk.ngrams(fake_words, 2))
.value_counts()
.head(20))
top_20_fake_bigrams.sort_values().plot.barh(color='blue', width=.9, figsize=(10, 6))
plt.title('20 Most frequently occurring fake bigrams')
plt.ylabel('Bigram')
plt.xlabel('# Occurances')
# make the labels pretty
ticks, _ = plt.yticks()
labels = top_20_fake_bigrams.reset_index()['index'].apply(lambda t: t[0] + ' ' + t[1])
_ = plt.yticks(ticks, labels)
_fake_bigrams(fake_words)
###Output
_____no_output_____
###Markdown
Takeaways - The bigrams for fake news articles are filled with "in house" events and places such as supreme court, republican party, and Trump's Twitter tag.
###Code
top_20_real_bigrams = (pd.Series(nltk.ngrams(real_words, 2))
.value_counts()
.head(20))
top_20_real_bigrams.head()
def _real_bigrams(real_words):
top_20_real_bigrams = (pd.Series(nltk.ngrams(real_words, 2))
.value_counts()
.head(20))
top_20_real_bigrams.sort_values().plot.barh(color='green', width=.9, figsize=(10, 6))
plt.title('20 Most frequently occurring real bigrams')
plt.ylabel('Bigram')
plt.xlabel('# Occurances')
# make the labels pretty
ticks, _ = plt.yticks()
labels = top_20_real_bigrams.reset_index()['index'].apply(lambda t: t[0] + ' ' + t[1])
_ = plt.yticks(ticks, labels)
_real_bigrams(real_words)
###Output
_____no_output_____
###Markdown
Takeaways - The bigrams for real news are filled with phrases that represent world events and significant moments in time.
###Code
top_20_real_trigrams2 = (pd.Series(nltk.ngrams(real_words, 3))
.value_counts()
.head(20))
top_20_real_trigrams2.head()
###Output
_____no_output_____
###Markdown
Trigrams (Real - Fake)
###Code
def _real_trigrams(real_words):
top_20_real_trigrams2 = (pd.Series(nltk.ngrams(real_words, 3))
.value_counts()
.head(20))
top_20_real_trigrams2.head()
top_20_real_trigrams2.sort_values().plot.barh(color='blue', width=.9, figsize=(10, 6))
plt.title('20 Most frequently occurring real Trigrams')
plt.ylabel('Trigram')
plt.xlabel('# Occurances')
# make the labels pretty
ticks, _ = plt.yticks()
labels = top_20_real_trigrams2.reset_index()['index'].apply(lambda t: t[0] + ' ' + t[1] + ' ' + t[2])
_ = plt.yticks(ticks, labels)
_real_trigrams(real_words)
###Output
_____no_output_____
###Markdown
Takeaways -
###Code
top_20_fake_trigrams2 = (pd.Series(nltk.ngrams(fake_words, 3))
.value_counts()
.head(20))
top_20_fake_trigrams2.head()
def _fake_trigrams(fake_words):
top_20_fake_trigrams2 = (pd.Series(nltk.ngrams(fake_words, 3))
.value_counts()
.head(20))
top_20_fake_trigrams2.head()
top_20_fake_trigrams2.sort_values().plot.barh(color='green', width=.9, figsize=(10, 6))
plt.title('20 Most frequently occurring fake Trigrams')
plt.ylabel('Trigram')
plt.xlabel('# Occurances')
# make the labels pretty
ticks, _ = plt.yticks()
labels = top_20_fake_trigrams2.reset_index()['index'].apply(lambda t: t[0] + ' ' + t[1] + ' ' + t[2])
_ = plt.yticks(ticks, labels)
_fake_trigrams(fake_words)
###Output
_____no_output_____
###Markdown
Quadgrams
###Code
top_20_fake_quadgrams = (pd.Series(nltk.ngrams(fake_words, 4))
.value_counts()
.head(20))
top_20_fake_quadgrams.head()
top_20_fake_quadgrams.sort_values().plot.barh(color='blue', width=.9, figsize=(10, 6))
plt.title('20 Most frequently occurring fake quadgrams')
plt.ylabel('Quadgram')
plt.xlabel('# Occurances')
# make the labels pretty
ticks, _ = plt.yticks()
labels = top_20_fake_quadgrams.reset_index()['index'].apply(lambda t: t[0] + ' ' + t[1] + ' ' + t[2] + ' ' + t[3])
_ = plt.yticks(ticks, labels)
###Output
_____no_output_____
###Markdown
Takeaways -
###Code
top_20_real_quadgrams = (pd.Series(nltk.ngrams(real_words, 4))
.value_counts()
.head(20))
top_20_real_quadgrams.head()
top_20_real_quadgrams.sort_values().plot.barh(color='green', width=.9, figsize=(10, 6))
plt.title('20 Most frequently occurring real quadgrams')
plt.ylabel('Quadgram')
plt.xlabel('# Occurances')
# make the labels pretty
ticks, _ = plt.yticks()
labels = top_20_real_quadgrams.reset_index()['index'].apply(lambda t: t[0] + ' ' + t[1] + ' ' + t[2] + ' ' + t[3])
_ = plt.yticks(ticks, labels)
def _generate_liste_for_clean_text(df):
all_clean_texts = " ".join(df.clean_text)
return all_clean_texts
all_clean_texts = _generate_liste_for_clean_text(df)
###Output
_____no_output_____
###Markdown
Wordclouds for Single-Bigram-Trigram
###Code
df.head()
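# The heading above mentions word clouds for single words, bigrams and trigrams;
# a minimal sketch for the single-word cloud over the joined corpus (illustrative only):
# cloud = WordCloud(background_color='black', colormap='Blues').generate(all_clean_texts)
# plt.imshow(cloud); plt.axis('off'); plt.show()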
###Output
_____no_output_____
###Markdown
Feature added (Text Length and Twitter Mentions)
###Code
df['text_size'] = df['clean_text'].apply(len)
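# The section heading also mentions Twitter mentions; a simple sketch of that feature
# (hypothetical column name; note the '@' symbol may already have been stripped by
# the earlier cleaning, in which case this count will be zero):
df['twitter_mentions'] = df['clean_text'].str.count(r'@\w+')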
df.head()
df.to_csv (r'C:\Users\liamjackson\Desktop\fof.csv', index = False, header=True)
df.text_size.hist(bins = 2000)
plt.hist(df.text_size, 30, range=[100, 5000], align='mid')
df.head()
###Output
_____no_output_____ |
lab-exercise-1-ehiane.ipynb | ###Markdown
NAME : Ehiane Kelvin Oigiagbe Mat.No: 2012-061-2084 Email: [email protected] Exercise description: Exercise1 Write a Python program to get the difference between a given number and 17, if the number is greater than 17 return double the absolute difference. {pictorial representation}
###Code
def spec_sev(num):
value = abs(num - 17)
if num > 17:
print('Your answer is =', value * 2)
else:
print('Your answer is =', value)
spec_sev(num=int(input('Input desired number :')))
###Output
Input desired number :23
Your answer is = 12
###Markdown
Exercise II Write a Python program to calculate the sum of three given numbers; if the values are equal, return thrice their sum.
###Code
def triple_checker(val_1,val_2,val_3):
add = val_1 + val_2 + val_3
print ('Your sum value is =', add)
if val_1 == val_2== val_3:
add*=3
print ('Your true answer is =', add)
triple_checker(val_1=int(input('Enter first number :')),
val_2=int(input('Enter second number :')),
val_3=int(input('Enter third number :')))
###Output
Enter first number :23
Enter second number :23
Enter third number :23
Your sum value is = 69
Your true answer is = 207
###Markdown
Exercise III Write a Python program which will return true if the two given integer values are equal or their sum or difference is 5.
###Code
def qualitycheck(e, y):
if e == y:
return True
elif (e + y) == 5:
return True
elif abs(e - y) == 5:
return True
else:
return False
print(qualitycheck(2,2))
###Output
True
###Markdown
Exercise IV Write a Python program to sort three integers without using conditional statements and loops.
###Code
x=int(input('Input first value :'))
y= int(input('Input second value : '))
z = int(input('Input third value: '))
q = min(x,y,z)
r = max(x,y,z)
s = (x+y+z)- q - r
print('Numbers in sorted order:', q, s, r)
###Output
Input first value :23
Input second value : 45
Input third value: 78
Numbers in sorted order: 23 45 78
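###Markdown
As a sanity check, the built-in sorted() function gives the same ordering for the example values, also without explicit conditionals or loops in user code.
###Code
# Built-in sorted() reproduces the result for the sample inputs above
print('Numbers in sorted order:', *sorted([23, 45, 78]))
###Output
Numbers in sorted order: 23 45 78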
###Markdown
Exercise V Write a Python function that takes a positive integer and returns the sum of the cubes of all the positive integers smaller than the specified number.
###Code
a = int(input('Positive Integer value: '))
def numcuber(a):
cuber=range(1,a)
result = 0
for i in cuber:
result += (i**3)
return result
if a >=0:
print('Your answer is = ' , numcuber(a))
else:
print('Error!, not a positive integer')
###Output
Positive Integer value: -2
Error!, not a positive integer
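###Markdown
Side note: the loop above can be cross-checked against the closed-form identity 1^3 + 2^3 + ... + (n-1)^3 = (n*(n-1)/2)^2; a quick sketch of the check is below.
###Code
# Verify numcuber against the closed-form formula for a couple of sample values
assert numcuber(5) == (5 * 4 // 2) ** 2
assert numcuber(10) == (10 * 9 // 2) ** 2
print('closed-form check passed')
###Output
closed-form check passed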
|
introduction_to_amazon_algorithms/xgboost_abalone/xgboost_abalone.ipynb | ###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the UCI data repository [1]. More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. Dataset is already processed and stored on S3. Scripts used for processing the data can be found in the [Appendix](Appendix). These include downloading the data, splitting into train, validation and test, and uploading to S3 bucket. >[1] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. SetupThis notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.Let's start by specifying:1. The S3 buckets and prefixes that you want to use for saving the model and where training data is located. This should be within the same region as the Notebook Instance, training, and hosting. 1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
s3_client = boto3.client("s3")
# S3 bucket where the training data is located.
data_bucket = f"sagemaker-sample-files"
data_prefix = "datasets/tabular/uci_abalone"
data_bucket_path = f"s3://{data_bucket}"
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
output_bucket = sagemaker.Session().default_bucket()
output_prefix = "sagemaker/DEMO-xgboost-abalone-default"
output_bucket_path = f"s3://{output_bucket}"
for data_category in ["train", "test", "validation"]:
data_key = "{0}/{1}/abalone.{1}".format(data_prefix, data_category)
output_key = "{0}/{1}/abalone.{1}".format(output_prefix, data_category)
data_filename = "abalone.{}".format(data_category)
s3_client.download_file(data_bucket, data_key, data_filename)
s3_client.upload_file(data_filename, output_bucket, output_key)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
container = sagemaker.image_uris.retrieve("xgboost", region, "1.2-1")
%%time
import boto3
from time import gmtime, strftime
job_name = f"DEMO-xgboost-regression-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print("Training job", job_name)
# Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = {
"AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "File"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": f"{output_bucket_path}/{output_prefix}/single-xgboost"},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 5},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.7",
"objective": "reg:linear",
"num_round": "50",
"verbosity": "2",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 3600},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/train",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/validation",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
],
}
client = boto3.client("sagemaker", region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
while status != "Completed" and status != "Failed":
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
###Output
_____no_output_____
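###Markdown
Once the job completes, the final train/validation RMSE values reported by the algorithm can be read back from the DescribeTrainingJob response instead of the CloudWatch logs (a quick sketch; the metric names are those emitted by the built-in XGBoost container).
###Code
# Inspect the final metrics SageMaker recorded for this training job
print(client.describe_training_job(TrainingJobName=job_name).get("FinalMetricDataList", []))
###Output
_____no_output_____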
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name = f"{job_name}-model"
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]
print(model_data)
primary_container = {"Image": container, "ModelDataUrl": model_data}
create_model_response = client.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response["ModelArn"])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = f"DEMO-XGBoostEndpointConfig-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m5.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print(f"Endpoint Config Arn: {create_endpoint_config_response['EndpointConfigArn']}")
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = f'DEMO-XGBoostEndpoint-{strftime("%Y-%m-%d-%H-%M-%S", gmtime())}'
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
while status == "Creating":
print(f"Status: {status}")
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print(f"Arn: {resp['EndpointArn']}")
print(f"Status: {status}")
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client("runtime.sagemaker", region_name=region)
###Output
_____no_output_____
###Markdown
Download test data
###Code
FILE_TEST = "abalone.test"
s3 = boto3.client("s3")
s3.download_file(data_bucket, f"{data_prefix}/test/{FILE_TEST}", FILE_TEST)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = "abalone.single.test" # customize to your test file
with open(file_name, "r") as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType="text/x-libsvm", Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
result = [math.ceil(float(i)) for i in result]
label = payload.strip(" ").split()[0]
print(f"Label: {label}\nPrediction: {result[0]}")
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's do a whole batch to see how good the prediction accuracy is.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = "\n".join(data)
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset + batch_size < items:
results = do_predict(data[offset : (offset + batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write(".")
return arrs
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, "r") as f:
payload = f.read().strip()
labels = [int(line.split(" ")[0]) for line in payload.split("\n")]
test_data = [line for line in payload.split("\n")]
preds = batch_predict(test_data, 100, endpoint_name, "text/x-libsvm")
print(
"\n Median Absolute Percent Error (MdAPE) = ",
np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)),
)
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
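###Markdown
To remove the remaining hosting resources created in this walkthrough, the endpoint configuration and the model can be deleted in the same way (sketch below).
###Code
# Clean up the endpoint configuration and the model registered above
client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
client.delete_model(ModelName=model_name)
###Output
_____no_output_____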
###Markdown
Appendix Data split and uploadFollowing methods split the data into train/test/validation datasets and upload files to S3.
###Code
import io
import boto3
import random
def data_split(
FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST
):
data = [l for l in open(FILE_DATA, "r")]
train_file = open(FILE_TRAIN, "w")
valid_file = open(FILE_VALIDATION, "w")
tests_file = open(FILE_TEST, "w")
num_of_data = len(data)
num_train = int((PERCENT_TRAIN / 100.0) * num_of_data)
num_valid = int((PERCENT_VALIDATION / 100.0) * num_of_data)
num_tests = int((PERCENT_TEST / 100.0) * num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[], [], []]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data) - 1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return (
boto3.Session(region_name=region).resource("s3").Bucket(bucket).Object(key).upload_fileobj(fobj)
)
def upload_to_s3(bucket, channel, filename):
fobj = open(filename, "rb")
key = f"{prefix}/{channel}"
url = f"s3://{bucket}/{key}/{filename}"
print(f"Writing to {url}")
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
s3 = boto3.client("s3")
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-abalone-default"
# Load the dataset
FILE_DATA = "abalone"
s3.download_file("sagemaker-sample-files", f"datasets/tabular/uci_abalone/abalone.libsvm", FILE_DATA)
# split the downloaded data into train/test/validation files
FILE_TRAIN = "abalone.train"
FILE_VALIDATION = "abalone.validation"
FILE_TEST = "abalone.test"
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(
FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST
)
# upload the files to the S3 bucket
upload_to_s3(bucket, "train", FILE_TRAIN)
upload_to_s3(bucket, "validation", FILE_VALIDATION)
upload_to_s3(bucket, "test", FILE_TEST)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the UCI data repository [1]. More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. Dataset is already processed and stored on S3. Scripts used for processing the data can be found in the [Appendix](Appendix). These include downloading the data, splitting into train, validation and test, and uploading to S3 bucket. >[1] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. SetupThis notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.Let's start by specifying:1. The S3 buckets and prefixes that you want to use for saving the model and where training data is located. This should be within the same region as the Notebook Instance, training, and hosting. 1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket where the training data is located.
# Feel free to specify a different bucket and prefix
data_bucket = f"jumpstart-cache-prod-{region}"
data_prefix = "1p-notebooks-datasets/abalone/libsvm"
data_bucket_path = f"s3://{data_bucket}"
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
output_bucket = sagemaker.Session().default_bucket()
output_prefix = "sagemaker/DEMO-xgboost-abalone-default"
output_bucket_path = f"s3://{output_bucket}"
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, "xgboost", "1.0-1")
%%time
import boto3
from time import gmtime, strftime
job_name = f"DEMO-xgboost-regression-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print("Training job", job_name)
# Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = {
"AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "File"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": f"{output_bucket_path}/{output_prefix}/single-xgboost"},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 5},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.7",
"silent": "0",
"objective": "reg:linear",
"num_round": "50",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 3600},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{data_bucket_path}/{data_prefix}/train",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{data_bucket_path}/{data_prefix}/validation",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
],
}
client = boto3.client("sagemaker", region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
while status != "Completed" and status != "Failed":
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
###Output
_____no_output_____
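###Markdown
As an alternative to the manual sleep-and-describe loop above, boto3 ships a waiter for training jobs; a minimal sketch is shown below.
###Code
# Block until the training job reaches a terminal state (Completed, Stopped, or Failed)
client.get_waiter("training_job_completed_or_stopped").wait(TrainingJobName=job_name)
print(client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"])
###Output
_____no_output_____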
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name = f"{job_name}-model"
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]
print(model_data)
primary_container = {"Image": container, "ModelDataUrl": model_data}
create_model_response = client.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response["ModelArn"])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = f"DEMO-XGBoostEndpointConfig-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m5.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print(f"Endpoint Config Arn: {create_endpoint_config_response['EndpointConfigArn']}")
###Output
_____no_output_____
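###Markdown
The A/B testing scenario mentioned above uses the same API with more than one production variant. Below is an illustrative ProductionVariants structure for a 50/50 traffic split; the second model name is hypothetical and would have to be registered with create_model first, so the list is only built here, not passed to the API.
###Code
# Illustrative two-variant configuration for an A/B test (not executed against the API)
ab_test_variants = [
    {
        "InstanceType": "ml.m5.xlarge",
        "InitialVariantWeight": 0.5,
        "InitialInstanceCount": 1,
        "ModelName": model_name,
        "VariantName": "VariantA",
    },
    {
        "InstanceType": "ml.m5.xlarge",
        "InitialVariantWeight": 0.5,
        "InitialInstanceCount": 1,
        "ModelName": "my-second-model",  # hypothetical second model
        "VariantName": "VariantB",
    },
]
###Output
_____no_output_____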
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = f'DEMO-XGBoostEndpoint-{strftime("%Y-%m-%d-%H-%M-%S", gmtime())}'
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
while status == "Creating":
print(f"Status: {status}")
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print(f"Arn: {resp['EndpointArn']}")
print(f"Status: {status}")
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client("runtime.sagemaker", region_name=region)
###Output
_____no_output_____
###Markdown
Download test data
###Code
FILE_TEST = "abalone.test"
s3 = boto3.client("s3")
s3.download_file(data_bucket, f"{data_prefix}/test/{FILE_TEST}", FILE_TEST)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = "abalone.single.test" # customize to your test file
with open(file_name, "r") as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType="text/x-libsvm", Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
result = [math.ceil(float(i)) for i in result]
label = payload.strip(" ").split()[0]
print(f"Label: {label}\nPrediction: {result[0]}")
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's do a whole batch to see how good the prediction accuracy is.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = "\n".join(data)
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset + batch_size < items:
results = do_predict(data[offset : (offset + batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write(".")
return arrs
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, "r") as f:
payload = f.read().strip()
labels = [int(line.split(" ")[0]) for line in payload.split("\n")]
test_data = [line for line in payload.split("\n")]
preds = batch_predict(test_data, 100, endpoint_name, "text/x-libsvm")
print(
"\n Median Absolute Percent Error (MdAPE) = ",
np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)),
)
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Appendix Data split and uploadFollowing methods split the data into train/test/validation datasets and upload files to S3.
###Code
import io
import boto3
import random
def data_split(
FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST
):
data = [l for l in open(FILE_DATA, "r")]
train_file = open(FILE_TRAIN, "w")
valid_file = open(FILE_VALIDATION, "w")
tests_file = open(FILE_TEST, "w")
num_of_data = len(data)
num_train = int((PERCENT_TRAIN / 100.0) * num_of_data)
num_valid = int((PERCENT_VALIDATION / 100.0) * num_of_data)
num_tests = int((PERCENT_TEST / 100.0) * num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[], [], []]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data) - 1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return (
boto3.Session(region_name=region).resource("s3").Bucket(bucket).Object(key).upload_fileobj(fobj)
)
def upload_to_s3(bucket, channel, filename):
fobj = open(filename, "rb")
key = f"{prefix}/{channel}"
url = f"s3://{bucket}/{key}/{filename}"
print(f"Writing to {url}")
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-abalone-default"
# Load the dataset
FILE_DATA = "abalone"
urllib.request.urlretrieve(
"https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA
)
# split the downloaded data into train/test/validation files
FILE_TRAIN = "abalone.train"
FILE_VALIDATION = "abalone.validation"
FILE_TEST = "abalone.test"
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(
FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST
)
# upload the files to the S3 bucket
upload_to_s3(bucket, "train", FILE_TRAIN)
upload_to_s3(bucket, "validation", FILE_VALIDATION)
upload_to_s3(bucket, "test", FILE_TEST)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the UCI data repository [1]. More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. Dataset is already processed and stored on S3. Scripts used for processing the data can be found in the [Appendix](Appendix). These include downloading the data, splitting into train, validation and test, and uploading to S3 bucket. >[1] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. SetupThis notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.Let's start by specifying:1. The S3 buckets and prefixes that you want to use for saving the model and where training data is located. This should be within the same region as the Notebook Instance, training, and hosting. 1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
!pip3 install -U sagemaker
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
s3_client = boto3.client("s3")
# S3 bucket where the training data is located.
data_bucket = f"sagemaker-sample-files"
data_prefix = "datasets/tabular/uci_abalone"
data_bucket_path = f"s3://{data_bucket}"
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
output_bucket = sagemaker.Session().default_bucket()
output_prefix = "sagemaker/DEMO-xgboost-abalone-default"
output_bucket_path = f"s3://{output_bucket}"
for data_category in ["train", "test", "validation"]:
data_key = "{0}/{1}/abalone.{1}".format(data_prefix, data_category)
output_key = "{0}/{1}/abalone.{1}".format(output_prefix, data_category)
data_filename = "abalone.{}".format(data_category)
s3_client.download_file(data_bucket, data_key, data_filename)
s3_client.upload_file(data_filename, output_bucket, output_key)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
container = sagemaker.image_uris.retrieve("xgboost", region, "1.5-1")
%%time
import boto3
from time import gmtime, strftime
job_name = f"DEMO-xgboost-regression-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print("Training job", job_name)
# Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = {
"AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "File"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": f"{output_bucket_path}/{output_prefix}/single-xgboost"},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 5},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.7",
"objective": "reg:linear",
"num_round": "50",
"verbosity": "2",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 3600},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/train",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/validation",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
],
}
client = boto3.client("sagemaker", region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
while status != "Completed" and status != "Failed":
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
###Output
_____no_output_____
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name = f"{job_name}-model"
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]
print(model_data)
primary_container = {"Image": container, "ModelDataUrl": model_data}
create_model_response = client.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response["ModelArn"])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = f"DEMO-XGBoostEndpointConfig-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m5.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print(f"Endpoint Config Arn: {create_endpoint_config_response['EndpointConfigArn']}")
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = f'DEMO-XGBoostEndpoint-{strftime("%Y-%m-%d-%H-%M-%S", gmtime())}'
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
while status == "Creating":
print(f"Status: {status}")
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print(f"Arn: {resp['EndpointArn']}")
print(f"Status: {status}")
###Output
_____no_output_____
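###Markdown
The polling loop above can also be replaced with the built-in boto3 waiter for endpoints; a minimal sketch is below.
###Code
# Wait until the endpoint is InService before sending requests
client.get_waiter("endpoint_in_service").wait(EndpointName=endpoint_name)
print(client.describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"])
###Output
_____no_output_____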
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client("runtime.sagemaker", region_name=region)
###Output
_____no_output_____
###Markdown
Download test data
###Code
FILE_TEST = "abalone.test"
s3 = boto3.client("s3")
s3.download_file(data_bucket, f"{data_prefix}/test/{FILE_TEST}", FILE_TEST)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = "abalone.single.test" # customize to your test file
with open(file_name, "r") as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType="text/x-libsvm", Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
result = [math.ceil(float(i)) for i in result]
label = payload.strip(" ").split()[0]
print(f"Label: {label}\nPrediction: {result[0]}")
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's do a whole batch to see how good the prediction accuracy is.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = "\n".join(data)
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset + batch_size < items:
results = do_predict(data[offset : (offset + batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write(".")
return arrs
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, "r") as f:
payload = f.read().strip()
labels = [int(line.split(" ")[0]) for line in payload.split("\n")]
test_data = [line for line in payload.split("\n")]
preds = batch_predict(test_data, 100, endpoint_name, "text/x-libsvm")
print(
"\n Median Absolute Percent Error (MdAPE) = ",
np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)),
)
###Output
_____no_output_____
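###Markdown
Since the algorithm reports RMSE on the validation channel during training, it can be useful to compute RMSE on the same held-out predictions for comparison (a quick sketch using the labels and preds from the previous cell).
###Code
# Root mean squared error on the rounded test-set predictions
rmse = np.sqrt(np.mean((np.array(labels) - np.array(preds)) ** 2))
print(f"Test RMSE = {rmse:.3f}")
###Output
_____no_output_____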
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Appendix Data split and uploadFollowing methods split the data into train/test/validation datasets and upload files to S3.
###Code
import io
import boto3
import random
def data_split(
FILE_DATA,
FILE_TRAIN,
FILE_VALIDATION,
FILE_TEST,
PERCENT_TRAIN,
PERCENT_VALIDATION,
PERCENT_TEST,
):
data = [l for l in open(FILE_DATA, "r")]
train_file = open(FILE_TRAIN, "w")
valid_file = open(FILE_VALIDATION, "w")
tests_file = open(FILE_TEST, "w")
num_of_data = len(data)
num_train = int((PERCENT_TRAIN / 100.0) * num_of_data)
num_valid = int((PERCENT_VALIDATION / 100.0) * num_of_data)
num_tests = int((PERCENT_TEST / 100.0) * num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[], [], []]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data) - 1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return (
boto3.Session(region_name=region)
.resource("s3")
.Bucket(bucket)
.Object(key)
.upload_fileobj(fobj)
)
def upload_to_s3(bucket, channel, filename):
fobj = open(filename, "rb")
key = f"{prefix}/{channel}"
url = f"s3://{bucket}/{key}/{filename}"
print(f"Writing to {url}")
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
s3 = boto3.client("s3")
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-abalone-default"
# Load the dataset
FILE_DATA = "abalone"
s3.download_file(
"sagemaker-sample-files", f"datasets/tabular/uci_abalone/abalone.libsvm", FILE_DATA
)
# split the downloaded data into train/test/validation files
FILE_TRAIN = "abalone.train"
FILE_VALIDATION = "abalone.validation"
FILE_TEST = "abalone.test"
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(
FILE_DATA,
FILE_TRAIN,
FILE_VALIDATION,
FILE_TEST,
PERCENT_TRAIN,
PERCENT_VALIDATION,
PERCENT_TEST,
)
# upload the files to the S3 bucket
upload_to_s3(bucket, "train", FILE_TRAIN)
upload_to_s3(bucket, "validation", FILE_VALIDATION)
upload_to_s3(bucket, "test", FILE_TEST)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket
prefix = 'sagemaker/DEMO-xgboost-regression'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetFollowing methods split the data into train/test/validation datasets and upload files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session().resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
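###Markdown
Note that data_split samples rows at random, so repeated runs produce different train/validation/test partitions; seeding Python's random module before calling it makes the split reproducible (one-line sketch below).
###Code
# Fix the RNG seed so data_split produces the same partition on every run
random.seed(42)
###Output
_____no_output_____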
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(boto3.Session().region_name, 'xgboost')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m4.4xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker')
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
while status=='Creating':
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker')
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's do a whole batch to see how good the prediction accuracy is.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
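###Markdown
The same metric can be wrapped in a small helper for reuse on other prediction batches. This is only a sketch built on the labels and preds arrays computed above; the guard against zero labels is an extra safety check rather than something the abalone ring counts (always >= 1) require.
###Code
import numpy as np

def mdape(y_true, y_pred):
    # Median Absolute Percent Error, skipping records whose true label is zero
    # to avoid division by zero.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mask = y_true != 0
    return np.median(np.abs(y_true[mask] - y_pred[mask]) / y_true[mask])

print('MdAPE =', mdape(labels, preds))
###Output
_____no_output_____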
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
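###Markdown
Deleting the endpoint stops the billed instance, but the endpoint configuration and the model object created earlier are separate resources. A minimal cleanup sketch, assuming the endpoint_config_name and model_name variables from the earlier cells, is:
###Code
# Remove the remaining hosting resources created in this notebook.
client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
client.delete_model(ModelName=model_name)
###Output
_____no_output_____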
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket
prefix = 'sagemaker/DEMO-xgboost-regression'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session().resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
    url = 's3://{}/{}'.format(bucket, key)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
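###Markdown
Before training, a quick sanity check that the split roughly matches the requested percentages can save a wasted training job. A small sketch using the file names defined above:
###Code
# Count the records that ended up in each split file.
for name in [FILE_TRAIN, FILE_VALIDATION, FILE_TEST]:
    with open(name, 'r') as f:
        print(name, sum(1 for _ in f), 'records')
###Output
_____no_output_____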
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training and poll for status until training is completed, which in this example takes between 5 and 6 minutes.
###Code
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/xgboost:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/xgboost:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/xgboost:latest'}
container = containers[boto3.Session().region_name]
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m4.4xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker')
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
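###Markdown
The manual polling loop above can also be replaced with the boto3 waiter for training jobs. This optional sketch (using the client and job_name defined above) blocks until the job completes or is stopped, and raises a WaiterError if it fails.
###Code
# Optional: let boto3 poll DescribeTrainingJob for us instead of sleeping in a loop.
waiter = client.get_waiter('training_job_completed_or_stopped')
waiter.wait(TrainingJobName=job_name)
print(client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus'])
###Output
_____no_output_____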
###Markdown
Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
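###Markdown
The ProductionVariants list is also where an A/B traffic split would be expressed: each variant names its own model, instance type and traffic weight. The sketch below is illustrative only; the two model names are placeholders for models that this notebook does not create.
###Code
# Illustrative only: a 70/30 traffic split between two hypothetical models.
ab_variants = [
    {'ModelName': 'model-a-placeholder', 'VariantName': 'VariantA',
     'InstanceType': 'ml.m4.xlarge', 'InitialInstanceCount': 1, 'InitialVariantWeight': 0.7},
    {'ModelName': 'model-b-placeholder', 'VariantName': 'VariantB',
     'InstanceType': 'ml.m4.xlarge', 'InitialInstanceCount': 1, 'InitialVariantWeight': 0.3},
]
# Such a list would be passed as ProductionVariants to client.create_endpoint_config(...).
print(ab_variants)
###Output
_____no_output_____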
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model by specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
while status=='Creating':
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker')
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch to see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
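###Markdown
As a quick smoke test, the helper can be exercised on a handful of records before running the full test set. This sketch reuses the FILE_TEST file written earlier.
###Code
# Smoke-test the batch helper on the first five test records.
with open(FILE_TEST, 'r') as f:
    sample = [line.strip() for line in f.readlines()[:5]]
print(batch_predict(sample, 5, endpoint_name, 'text/x-libsvm'))
###Output
_____no_output_____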
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
bucket = sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-xgboost-abalone-default'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
    url = 's3://{}/{}'.format(bucket, key)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training and poll for status until training is completed, which in this example takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost', '1.0-1')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m5.2xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
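###Markdown
For comparison, the low-level create_training_job request above can also be expressed with the higher-level SageMaker Python SDK. The sketch below is illustrative only: it assumes SageMaker Python SDK v2 and reuses the container, role, bucket and prefix values defined in this notebook, with essentially the same hyperparameters as the dictionary above.
###Code
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

# Build s3:// URIs for the channels and output location used above.
s3_train = 's3://{}/{}/train'.format(bucket, prefix)
s3_validation = 's3://{}/{}/validation'.format(bucket, prefix)
s3_output = 's3://{}/{}/single-xgboost'.format(bucket, prefix)

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type='ml.m5.2xlarge',
    volume_size=5,
    max_run=3600,
    output_path=s3_output,
    sagemaker_session=sagemaker.Session(),
)
estimator.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6,
                              subsample=0.7, objective='reg:linear', num_round=50)

# fit() starts the training job and streams its logs to the notebook.
estimator.fit({'train': TrainingInput(s3_train, content_type='libsvm'),
               'validation': TrainingInput(s3_validation, content_type='libsvm')})
###Output
_____no_output_____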
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m5.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model by specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
while status=='Creating':
print("Status: " + status)
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker', region_name=region)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
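###Markdown
For reference, each record sent to the endpoint is a plain libsvm-formatted line: the first token is the label (the ring count) and the remaining tokens are index:value pairs for the non-zero features. A small sketch that pulls the single-record payload read above apart:
###Code
# Split a libsvm line into its label and sparse feature pairs.
tokens = payload.split()
label_value = float(tokens[0])
features = {int(idx): float(val) for idx, val in (tok.split(':') for tok in tokens[1:])}
print('label:', label_value)
print('features:', features)
###Output
_____no_output_____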
###Markdown
OK, a single prediction works. Let's run a whole batch to see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
bucket = sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-xgboost-abalone-default'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
    url = 's3://{}/{}'.format(bucket, key)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training and poll for status until training is completed, which in this example takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost', '0.90-1')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m5.2xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m5.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model by specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
while status=='Creating':
print("Status: " + status)
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
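###Markdown
For comparison, the three low-level calls used above (create_model, create_endpoint_config and create_endpoint) collapse into a single deploy() call in the SageMaker Python SDK v2. The sketch below is illustrative only; running the commented line would create and bill a second endpoint. It assumes the container, model_data and role values defined earlier.
###Code
import sagemaker
from sagemaker.model import Model

# Wrap the trained artifacts in an SDK Model object.
sdk_model = Model(image_uri=container, model_data=model_data, role=role,
                  sagemaker_session=sagemaker.Session())
# predictor = sdk_model.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
###Output
_____no_output_____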
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker', region_name=region)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch to see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket
prefix = 'sagemaker/DEMO-xgboost-regression'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
    url = 's3://{}/{}'.format(bucket, key)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training and poll for status until training is completed, which in this example takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m4.4xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Plotting evaluation metricsEvaluation metrics for the completed training job are available in CloudWatch. We can pull the area under curve metric for the validation data set and plot it to see the performance of the model over time.
###Code
%matplotlib inline
from sagemaker.analytics import TrainingJobAnalytics
metric_name = 'validation:rmse'
metrics_dataframe = TrainingJobAnalytics(training_job_name=job_name, metric_names=[metric_name]).dataframe()
ax = metrics_dataframe.plot(kind='line', figsize=(12,5), x='timestamp', y='value', style='b.', legend=False)
ax.set_ylabel(metric_name);
###Output
_____no_output_____
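###Markdown
The final value of the same metric is also returned by describe_training_job, which avoids the CloudWatch round-trip when only the end-of-training number is needed. A short sketch using the client and job_name from above:
###Code
# Read the last reported metric values straight from the training job description.
desc = client.describe_training_job(TrainingJobName=job_name)
for metric in desc.get('FinalMetricDataList', []):
    print(metric['MetricName'], '=', metric['Value'])
###Output
_____no_output_____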
###Markdown
Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model by specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
while status=='Creating':
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker', region_name=region)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch to see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the UCI data repository [1]. More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. The dataset is already processed and stored on S3. Scripts used for processing the data can be found in the [Appendix](Appendix). These include downloading the data, splitting into train, validation and test, and uploading to an S3 bucket. >[1] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. SetupThis notebook was tested in Amazon SageMaker Studio on an ml.t3.medium instance with Python 3 (Data Science) kernel.Let's start by specifying:1. The S3 buckets and prefixes that you want to use for saving the model and where training data is located. This should be within the same region as the Notebook Instance, training, and hosting. 1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
s3_client = boto3.client("s3")
# S3 bucket where the training data is located.
data_bucket = "sagemaker-sample-files"
data_prefix = "datasets/tabular/uci_abalone"
data_bucket_path = f"s3://{data_bucket}"
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
output_bucket = sagemaker.Session().default_bucket()
output_prefix = "sagemaker/DEMO-xgboost-abalone-default"
output_bucket_path = f"s3://{output_bucket}"
for data_category in ["train", "test", "validation"]:
data_key = "{0}/{1}/abalone.{1}".format(data_prefix, data_category)
output_key = "{0}/{1}/abalone.{1}".format(output_prefix, data_category)
data_filename = "abalone.{}".format(data_category)
s3_client.download_file(data_bucket, data_key, data_filename)
s3_client.upload_file(data_filename, output_bucket, output_key)
###Output
_____no_output_____
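###Markdown
A quick way to confirm that the three files landed where the training job will look for them is to list the destination prefix. A minimal sketch using the s3_client, output_bucket and output_prefix defined above:
###Code
# List the objects uploaded under the output prefix.
resp = s3_client.list_objects_v2(Bucket=output_bucket, Prefix=output_prefix)
for obj in resp.get('Contents', []):
    print(obj['Key'], obj['Size'], 'bytes')
###Output
_____no_output_____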
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training and poll for status until training is completed, which in this example takes between 5 and 6 minutes.
###Code
container = sagemaker.image_uris.retrieve("xgboost", region, "1.3-1")
%%time
import boto3
from time import gmtime, strftime
job_name = f"DEMO-xgboost-regression-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print("Training job", job_name)
# Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = {
"AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "File"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": f"{output_bucket_path}/{output_prefix}/single-xgboost"},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 5},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.7",
"objective": "reg:linear",
"num_round": "50",
"verbosity": "2",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 3600},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/train",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/validation",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
],
}
client = boto3.client("sagemaker", region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
while status != "Completed" and status != "Failed":
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
###Output
_____no_output_____
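###Markdown
Once the job reports `Completed`, the final metrics it emitted (for example the validation RMSE mentioned below) can be read back from the job description. This is a minimal, optional sketch that reuses the `client` and `job_name` defined above; `FinalMetricDataList` is part of the `DescribeTrainingJob` response.
###Code
# Optional: print the final metrics recorded for the training job.
# Assumes `client` (the SageMaker boto3 client) and `job_name` are defined above.
final_metrics = client.describe_training_job(TrainingJobName=job_name).get("FinalMetricDataList", [])
for metric in final_metrics:
    print(f"{metric['MetricName']}: {metric['Value']}")
###Output
_____no_output_____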
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name = f"{job_name}-model"
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]
print(model_data)
primary_container = {"Image": container, "ModelDataUrl": model_data}
create_model_response = client.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response["ModelArn"])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = f"DEMO-XGBoostEndpointConfig-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m5.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print(f"Endpoint Config Arn: {create_endpoint_config_response['EndpointConfigArn']}")
###Output
_____no_output_____
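###Markdown
For reference, a configuration that splits traffic between two models for A/B testing would simply add a second entry to `ProductionVariants`. The sketch below is illustrative only: `model_name_b` is a hypothetical second model that is not created in this notebook, and the cell just prints the structure rather than calling the API.
###Code
# Illustrative only: a 90/10 traffic split across two production variants.
# `model_name_b` is a hypothetical second model name, not created in this notebook.
model_name_b = f"{model_name}-b"
ab_variants = [
    {"InstanceType": "ml.m5.xlarge", "InitialVariantWeight": 9, "InitialInstanceCount": 1,
     "ModelName": model_name, "VariantName": "VariantA"},
    {"InstanceType": "ml.m5.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1,
     "ModelName": model_name_b, "VariantName": "VariantB"},
]
print(ab_variants)
###Output
_____no_output_____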
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = f'DEMO-XGBoostEndpoint-{strftime("%Y-%m-%d-%H-%M-%S", gmtime())}'
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
while status == "Creating":
print(f"Status: {status}")
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print(f"Arn: {resp['EndpointArn']}")
print(f"Status: {status}")
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client("runtime.sagemaker", region_name=region)
###Output
_____no_output_____
###Markdown
Download test data
###Code
FILE_TEST = "abalone.test"
s3 = boto3.client("s3")
s3.download_file(data_bucket, f"{data_prefix}/test/{FILE_TEST}", FILE_TEST)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = "abalone.single.test" # customize to your test file
with open(file_name, "r") as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType="text/x-libsvm", Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
result = [math.ceil(float(i)) for i in result]
label = payload.strip(" ").split()[0]
print(f"Label: {label}\nPrediction: {result[0]}")
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch to see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = "\n".join(data)
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset + batch_size < items:
results = do_predict(data[offset : (offset + batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write(".")
return arrs
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, "r") as f:
payload = f.read().strip()
labels = [int(line.split(" ")[0]) for line in payload.split("\n")]
test_data = [line for line in payload.split("\n")]
preds = batch_predict(test_data, 100, endpoint_name, "text/x-libsvm")
print(
"\n Median Absolute Percent Error (MdAPE) = ",
np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)),
)
###Output
_____no_output_____
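###Markdown
If you also want error metrics other than MdAPE, they can be computed from the same `labels` and `preds` arrays. A minimal sketch with NumPy:
###Code
# Additional error metrics on the same batch predictions.
import numpy as np

labels_arr = np.array(labels)
preds_arr = np.array(preds)
print("MAE :", np.mean(np.abs(labels_arr - preds_arr)))
print("RMSE:", np.sqrt(np.mean((labels_arr - preds_arr) ** 2)))
###Output
_____no_output_____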
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
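###Markdown
Optionally, the endpoint configuration and the model can be removed as well. A minimal cleanup sketch using the same `client`:
###Code
# Optional cleanup of the remaining hosting resources.
client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
client.delete_model(ModelName=model_name)
###Output
_____no_output_____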
###Markdown
Appendix Data split and uploadThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
import io
import boto3
import random
def data_split(
FILE_DATA,
FILE_TRAIN,
FILE_VALIDATION,
FILE_TEST,
PERCENT_TRAIN,
PERCENT_VALIDATION,
PERCENT_TEST,
):
data = [l for l in open(FILE_DATA, "r")]
train_file = open(FILE_TRAIN, "w")
valid_file = open(FILE_VALIDATION, "w")
tests_file = open(FILE_TEST, "w")
num_of_data = len(data)
num_train = int((PERCENT_TRAIN / 100.0) * num_of_data)
num_valid = int((PERCENT_VALIDATION / 100.0) * num_of_data)
num_tests = int((PERCENT_TEST / 100.0) * num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[], [], []]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data) - 1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return (
boto3.Session(region_name=region)
.resource("s3")
.Bucket(bucket)
.Object(key)
.upload_fileobj(fobj)
)
def upload_to_s3(bucket, channel, filename):
fobj = open(filename, "rb")
key = f"{prefix}/{channel}"
url = f"s3://{bucket}/{key}/{filename}"
print(f"Writing to {url}")
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
s3 = boto3.client("s3")
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-abalone-default"
# Load the dataset
FILE_DATA = "abalone"
s3.download_file(
"sagemaker-sample-files", f"datasets/tabular/uci_abalone/abalone.libsvm", FILE_DATA
)
# split the downloaded data into train/test/validation files
FILE_TRAIN = "abalone.train"
FILE_VALIDATION = "abalone.validation"
FILE_TEST = "abalone.test"
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(
FILE_DATA,
FILE_TRAIN,
FILE_VALIDATION,
FILE_TEST,
PERCENT_TRAIN,
PERCENT_VALIDATION,
PERCENT_TEST,
)
# upload the files to the S3 bucket
upload_to_s3(bucket, "train", FILE_TRAIN)
upload_to_s3(bucket, "validation", FILE_VALIDATION)
upload_to_s3(bucket, "test", FILE_TEST)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket
prefix = 'sagemaker/DEMO-xgboost-regression'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetFollowing methods split the data into train/test/validation datasets and upload files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session().resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/xgboost:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/xgboost:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/xgboost:latest',
'ap-northeast-1': '501404015308.dkr.ecr.ap-northeast-1.amazonaws.com/xgboost:latest',
'ap-northeast-2': '306986355934.dkr.ecr.ap-northeast-2.amazonaws.com/xgboost:latest'}
container = containers[boto3.Session().region_name]
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m4.4xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker')
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
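###Markdown
As an alternative to the polling loop above, boto3 ships a waiter for training jobs. A minimal sketch (it blocks until the job completes or stops):
###Code
# Alternative to manual polling: use the built-in boto3 waiter.
waiter = client.get_waiter('training_job_completed_or_stopped')
waiter.wait(TrainingJobName=job_name)
print(client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus'])
###Output
_____no_output_____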
###Markdown
Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
while status=='Creating':
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker')
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch to see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
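###Markdown
To see how the absolute errors are distributed rather than just their median, a quick histogram helps. A minimal sketch, assuming matplotlib is available in the kernel:
###Code
# Distribution of absolute prediction errors on the test batch.
import numpy as np
import matplotlib.pyplot as plt

abs_errors = np.abs(np.array(labels) - np.array(preds))
plt.hist(abs_errors, bins=20)
plt.xlabel('Absolute error (rings)')
plt.ylabel('Count')
plt.title('Test set absolute errors')
plt.show()
###Output
_____no_output_____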
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. PrerequisitesEnsuring the latest sagemaker sdk is installed. For a major version upgrade, there might be some apis that may get deprecated.
###Code
#!pip install -qU awscli boto3 sagemaker
#!/usr/local/bin/python -m pip install --upgrade pip
###Output
_____no_output_____
###Markdown
SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
bucket = sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-xgboost-abalone-default'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket)
###Output
CPU times: user 828 ms, sys: 138 ms, total: 966 ms
Wall time: 1.03 s
###Markdown
Fetching the datasetFollowing methods split the data into train/test/validation datasets and upload files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
CPU times: user 8 µs, sys: 1 µs, total: 9 µs
Wall time: 12.2 µs
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
Writing to s3://sagemaker-us-east-1-716665088992/sagemaker/DEMO-xgboost-abalone-default/train/abalone.train
Writing to s3://sagemaker-us-east-1-716665088992/sagemaker/DEMO-xgboost-abalone-default/validation/abalone.validation
Writing to s3://sagemaker-us-east-1-716665088992/sagemaker/DEMO-xgboost-abalone-default/test/abalone.test
CPU times: user 254 ms, sys: 29 ms, total: 283 ms
Wall time: 40.3 s
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import image_uris
container = image_uris.retrieve('xgboost', region=region, version="latest")
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m5.2xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
Training job DEMO-xgboost-regression-2020-08-06-20-24-22
InProgress
InProgress
InProgress
Completed
CPU times: user 86.7 ms, sys: 11.8 ms, total: 98.6 ms
Wall time: 3min
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
DEMO-xgboost-regression-2020-08-06-20-24-22-model
https://s3-us-east-1.amazonaws.com/sagemaker-us-east-1-716665088992/sagemaker/DEMO-xgboost-abalone-default/single-xgboost/DEMO-xgboost-regression-2020-08-06-20-24-22/output/model.tar.gz
arn:aws:sagemaker:us-east-1:716665088992:model/demo-xgboost-regression-2020-08-06-20-24-22-model
CPU times: user 9.23 ms, sys: 0 ns, total: 9.23 ms
Wall time: 439 ms
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m5.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
DEMO-XGBoostEndpointConfig-2020-08-06-20-27-23
Endpoint Config Arn: arn:aws:sagemaker:us-east-1:716665088992:endpoint-config/demo-xgboostendpointconfig-2020-08-06-20-27-23
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
while status=='Creating':
print("Status: " + status)
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
DEMO-XGBoostEndpoint-2020-08-07-01-57-36
arn:aws:sagemaker:us-east-1:716665088992:endpoint/demo-xgboostendpoint-2020-08-07-01-57-36
Status: Creating
Status: Creating
Status: Creating
Status: Creating
Status: Creating
Status: Creating
Status: Creating
Arn: arn:aws:sagemaker:us-east-1:716665088992:endpoint/demo-xgboostendpoint-2020-08-07-01-57-36
Status: InService
CPU times: user 108 ms, sys: 3.73 ms, total: 112 ms
Wall time: 7min 1s
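###Markdown
The status loop above can also be replaced by the built-in boto3 waiter, which blocks until the endpoint is in service. A minimal sketch:
###Code
# Alternative to manual status polling: wait until the endpoint is InService.
waiter = client.get_waiter('endpoint_in_service')
waiter.wait(EndpointName=endpoint_name)
print(client.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus'])
###Output
_____no_output_____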
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker', region_name=region)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
Label: 9
Prediction: 10
CPU times: user 9.68 ms, sys: 5.31 ms, total: 15 ms
Wall time: 125 ms
###Markdown
OK, a single prediction works. Let's run a whole batch to see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
.......
Median Absolute Percent Error (MdAPE) = 0.125
CPU times: user 19.1 ms, sys: 1.19 ms, total: 20.2 ms
Wall time: 142 ms
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket
prefix = 'sagemaker/DEMO-xgboost-regression'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetFollowing methods split the data into train/test/validation datasets and upload files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m4.4xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
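###Markdown
The note below mentions that the validation RMSE is written to CloudWatch Logs. Here is a minimal, optional sketch of pulling a few log lines for the job; it assumes the standard `/aws/sagemaker/TrainingJobs` log group and that the notebook role is allowed to read it.
###Code
# Optional: fetch a few CloudWatch log events for the training job.
logs_client = boto3.client('logs', region_name=region)
streams = logs_client.describe_log_streams(
    logGroupName='/aws/sagemaker/TrainingJobs', logStreamNamePrefix=job_name
)['logStreams']
if streams:
    events = logs_client.get_log_events(
        logGroupName='/aws/sagemaker/TrainingJobs',
        logStreamName=streams[0]['logStreamName'],
        limit=10,
    )['events']
    for e in events:
        print(e['message'])
###Output
_____no_output_____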
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
while status=='Creating':
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker', region_name=region)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch to see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. PrerequisitesEnsuring the latest sagemaker sdk is installed. For a major version upgrade, there might be some apis that may get deprecated.
###Code
!pip install -qU awscli boto3 sagemaker
###Output
_____no_output_____
###Markdown
SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
bucket = sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-xgboost-abalone-default'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetFollowing methods split the data into train/test/validation datasets and upload files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost', '1.0-1')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m5.2xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
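###Markdown
Before setting up hosting, you can optionally inspect the final metrics that the completed training job reported. The cell below is a minimal sketch (not part of the original walkthrough); it assumes the job finished successfully and that the DescribeTrainingJob response includes a populated FinalMetricDataList field.
###Code
# Optional: print the final metrics reported by the completed training job.
# Assumes `client` and `job_name` are defined in the cells above.
info = client.describe_training_job(TrainingJobName=job_name)
for metric in info.get('FinalMetricDataList', []):
    print(metric['MetricName'], '=', metric['Value'])
###Output
_____no_output_____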
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m5.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
while status=='Creating':
print("Status: " + status)
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker', region_name=region)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch and see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
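###Markdown
If you also want to remove the other resources created in this example, the endpoint configuration and the model can be deleted in the same way. This optional cleanup sketch assumes `endpoint_config_name` and `model_name` are still defined in the session.
###Code
# Optional cleanup of the endpoint configuration and model created above.
client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
client.delete_model(ModelName=model_name)
###Output
_____no_output_____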
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real-valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket
prefix = 'sagemaker/DEMO-xgboost-regression'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session().resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/xgboost:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/xgboost:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/xgboost:latest',
'ap-northeast-1': '501404015308.dkr.ecr.ap-northeast-1.amazonaws.com/xgboost:latest'}
container = containers[boto3.Session().region_name]
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m4.4xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker')
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
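###Markdown
As an alternative to the polling loop above, boto3 also provides a built-in waiter for training jobs. The cell below is a minimal sketch (not part of the original notebook) that blocks until the job completes or is stopped; it assumes `client` and `job_name` are defined as above.
###Code
# Optional: wait for the training job with the built-in boto3 waiter instead of polling.
waiter = client.get_waiter('training_job_completed_or_stopped')
waiter.wait(TrainingJobName=job_name)
print(client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus'])
###Output
_____no_output_____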
###Markdown
Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
while status=='Creating':
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker')
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch and see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real-valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket
prefix = 'sagemaker/DEMO-xgboost-regression'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session().resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
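###Markdown
Optionally, you can verify that the three files landed in the bucket. The cell below is a small sketch (not part of the original notebook) that lists the objects under the prefix; it assumes `bucket` and `prefix` are set as above.
###Code
# Optional: list the objects that were just uploaded under the prefix.
s3_client = boto3.client('s3')
response = s3_client.list_objects_v2(Bucket=bucket, Prefix=prefix)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
###Output
_____no_output_____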
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(boto3.Session().region_name, 'xgboost')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m4.4xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker')
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
###Markdown
Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
while status=='Creating':
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker')
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch and see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the UCI data repository [1]. More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real-valued feature. Age of abalone is to be predicted from eight physical measurements. The dataset is already processed and stored on S3. Scripts used for processing the data can be found in the [Appendix](Appendix). These include downloading the data, splitting into train, validation and test, and uploading to the S3 bucket. >[1] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. SetupThis notebook was tested in Amazon SageMaker Studio on an ml.t3.medium instance with Python 3 (Data Science) kernel.Let's start by specifying:1. The S3 buckets and prefixes that you want to use for saving the model and where training data is located. This should be within the same region as the Notebook Instance, training, and hosting. 1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket where the training data is located.
# Feel free to specify a different bucket and prefix
data_bucket = f"jumpstart-cache-prod-{region}"
data_prefix = "1p-notebooks-datasets/abalone/libsvm"
data_bucket_path = f"s3://{data_bucket}"
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
output_bucket = sagemaker.Session().default_bucket()
output_prefix = "sagemaker/DEMO-xgboost-abalone-default"
output_bucket_path = f"s3://{output_bucket}"
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
container = sagemaker.image_uris.retrieve("xgboost", region, "1.2-1")
%%time
import boto3
from time import gmtime, strftime
job_name = f"DEMO-xgboost-regression-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print("Training job", job_name)
# Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = {
"AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "File"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": f"{output_bucket_path}/{output_prefix}/single-xgboost"},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 5},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.7",
"objective": "reg:linear",
"num_round": "50",
"verbosity": "2",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 3600},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{data_bucket_path}/{data_prefix}/train",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{data_bucket_path}/{data_prefix}/validation",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
],
}
client = boto3.client("sagemaker", region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
while status != "Completed" and status != "Failed":
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
###Output
_____no_output_____
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name = f"{job_name}-model"
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]
print(model_data)
primary_container = {"Image": container, "ModelDataUrl": model_data}
create_model_response = client.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response["ModelArn"])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = f"DEMO-XGBoostEndpointConfig-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m5.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print(f"Endpoint Config Arn: {create_endpoint_config_response['EndpointConfigArn']}")
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = f'DEMO-XGBoostEndpoint-{strftime("%Y-%m-%d-%H-%M-%S", gmtime())}'
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
while status == "Creating":
print(f"Status: {status}")
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print(f"Arn: {resp['EndpointArn']}")
print(f"Status: {status}")
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client("runtime.sagemaker", region_name=region)
###Output
_____no_output_____
###Markdown
Download test data
###Code
FILE_TEST = "abalone.test"
s3 = boto3.client("s3")
s3.download_file(data_bucket, f"{data_prefix}/test/{FILE_TEST}", FILE_TEST)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = "abalone.single.test" # customize to your test file
with open(file_name, "r") as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType="text/x-libsvm", Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
result = [math.ceil(float(i)) for i in result]
label = payload.strip(" ").split()[0]
print(f"Label: {label}\nPrediction: {result[0]}")
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch and see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = "\n".join(data)
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset + batch_size < items:
results = do_predict(data[offset : (offset + batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write(".")
return arrs
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, "r") as f:
payload = f.read().strip()
labels = [int(line.split(" ")[0]) for line in payload.split("\n")]
test_data = [line for line in payload.split("\n")]
preds = batch_predict(test_data, 100, endpoint_name, "text/x-libsvm")
print(
"\n Median Absolute Percent Error (MdAPE) = ",
np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)),
)
###Output
_____no_output_____
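###Markdown
As an additional sanity check, you can compute the root mean squared error (RMSE) on the same test predictions. This small optional sketch (not part of the original notebook) reuses the `labels` and `preds` lists from the cell above.
###Code
# Optional: RMSE on the test set, computed from the predictions above.
rmse = np.sqrt(np.mean((np.array(labels) - np.array(preds)) ** 2))
print(f"Test RMSE = {rmse}")
###Output
_____no_output_____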
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Appendix Data split and uploadThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
import io
import boto3
import random
def data_split(
FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST
):
data = [l for l in open(FILE_DATA, "r")]
train_file = open(FILE_TRAIN, "w")
valid_file = open(FILE_VALIDATION, "w")
tests_file = open(FILE_TEST, "w")
num_of_data = len(data)
num_train = int((PERCENT_TRAIN / 100.0) * num_of_data)
num_valid = int((PERCENT_VALIDATION / 100.0) * num_of_data)
num_tests = int((PERCENT_TEST / 100.0) * num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[], [], []]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data) - 1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return (
boto3.Session(region_name=region).resource("s3").Bucket(bucket).Object(key).upload_fileobj(fobj)
)
def upload_to_s3(bucket, channel, filename):
fobj = open(filename, "rb")
key = f"{prefix}/{channel}"
url = f"s3://{bucket}/{key}/{filename}"
print(f"Writing to {url}")
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-abalone-default"
# Load the dataset
FILE_DATA = "abalone"
urllib.request.urlretrieve(
"https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA
)
# split the downloaded data into train/test/validation files
FILE_TRAIN = "abalone.train"
FILE_VALIDATION = "abalone.validation"
FILE_TEST = "abalone.test"
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(
FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST
)
# upload the files to the S3 bucket
upload_to_s3(bucket, "train", FILE_TRAIN)
upload_to_s3(bucket, "validation", FILE_VALIDATION)
upload_to_s3(bucket, "test", FILE_TEST)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real-valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket
prefix = 'sagemaker/DEMO-xgboost-regression'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m4.4xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Plotting evaluation metricsEvaluation metrics for the completed training job are available in CloudWatch. We can pull the root mean squared error (RMSE) metric for the validation data set and plot it to see the performance of the model over time.
###Code
%matplotlib inline
from sagemaker.analytics import TrainingJobAnalytics
metric_name = 'validation:rmse'
metrics_dataframe = TrainingJobAnalytics(training_job_name=job_name, metric_names=[metric_name]).dataframe()
# The pandas plot call returns a matplotlib Axes object; use it to label the y-axis.
ax = metrics_dataframe.plot(kind='line', figsize=(12,5), x='timestamp', y='value', style='b.', legend=False)
ax.set_ylabel(metric_name);
###Output
_____no_output_____
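###Markdown
If you prefer a numeric summary to a chart, the same dataframe can be inspected directly. A minimal sketch, assuming the cell above has populated `metrics_dataframe`:
###Code
# Show the most recent validation RMSE values reported by the training job.
print(metrics_dataframe.tail())
###Output
_____no_output_____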
###Markdown
Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
while status=='Creating':
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker', region_name=region)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's do a whole batch and see how good the prediction accuracy is.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket
prefix = 'sagemaker/xgboost-regression'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session().resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/xgboost:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/xgboost:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/xgboost:latest'}
container = containers[boto3.Session().region_name]
%%time
import boto3
from time import gmtime, strftime
job_name = 'xgboost-single-machine-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m4.4xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker')
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
###Markdown
Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
while status=='Creating':
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker')
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]  # the first field of the libsvm line is the label
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's do a whole batch and see how good the prediction accuracy is.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. PrerequisitesEnsuring the latest sagemaker sdk is installed. For a major version upgrade, there might be some apis that may get deprecated.
###Code
import sys
!{sys.executable} -m pip install -qU awscli boto3 "sagemaker>=1.71.0,<2.0.0"
###Output
_____no_output_____
###Markdown
SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
bucket = sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-xgboost-abalone-default'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost', '1.0-1')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m5.2xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
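###Markdown
The next cell is an added illustration (not part of the original notebook): assuming the training job above finished with status "Completed", it reads back the final metrics (including the validation RMSE) that SageMaker recorded for the job.
###Code
# Added sketch: inspect the final metrics reported for the completed training job.
# Assumes `client` and `job_name` are defined by the cells above and the job has completed.
final_metrics = client.describe_training_job(TrainingJobName=job_name).get('FinalMetricDataList', [])
for metric in final_metrics:
    print(metric['MetricName'], metric['Value'])
###Output
_____no_output_____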
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m5.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
while status=='Creating':
print("Status: " + status)
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
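###Markdown
Added note (not in the original notebook): as an alternative to the polling loop above, boto3 also provides an `endpoint_in_service` waiter for the SageMaker client; the sketch below assumes the installed boto3 version ships this waiter.
###Code
# Added sketch: block until the endpoint reaches InService (a no-op if it is already in service).
waiter = client.get_waiter('endpoint_in_service')
waiter.wait(EndpointName=endpoint_name)
print("Status: " + client.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus'])
###Output
_____no_output_____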
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker', region_name=region)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's do a whole batch and see how good the prediction accuracy is.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket
prefix = 'sagemaker/DEMO-xgboost-regression'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m4.4xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Plotting evaluation metricsEvaluation metrics for the completed training job are available in CloudWatch. We can pull the area under curve metric for the validation data set and plot it to see the performance of the model over time.
###Code
%matplotlib inline
from sagemaker.analytics import TrainingJobAnalytics
metric_name = 'validation:rmse'
metrics_dataframe = TrainingJobAnalytics(training_job_name=job_name, metric_names=[metric_name]).dataframe()
plt = metrics_dataframe.plot(kind='line', figsize=(12,5), x='timestamp', y='value', style='b.', legend=False)
plt.set_ylabel(metric_name);
###Output
_____no_output_____
###Markdown
Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
while status=='Creating':
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker', region_name=region)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's do a whole batch and see how good the prediction accuracy is.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the UCI data repository [1]. More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. The dataset is already processed and stored on S3. Scripts used for processing the data can be found in the [Appendix](Appendix). These include downloading the data, splitting into train, validation and test, and uploading to an S3 bucket. >[1] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. SetupThis notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.Let's start by specifying:1. The S3 buckets and prefixes that you want to use for saving the model and where training data is located. This should be within the same region as the Notebook Instance, training, and hosting. 1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
s3_client = boto3.client("s3")
# S3 bucket where the training data is located.
data_bucket = f"sagemaker-sample-files"
data_prefix = "datasets/tabular/uci_abalone"
data_bucket_path = f"s3://{data_bucket}"
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
output_bucket = sagemaker.Session().default_bucket()
output_prefix = "sagemaker/DEMO-xgboost-abalone-default"
output_bucket_path = f"s3://{output_bucket}"
for data_category in ["train", "test", "validation"]:
data_key = "{0}/{1}/abalone.{1}".format(data_prefix, data_category)
output_key = "{0}/{1}/abalone.{1}".format(output_prefix, data_category)
data_filename = "abalone.{}".format(data_category)
s3_client.download_file(data_bucket, data_key, data_filename)
s3_client.upload_file(data_filename, output_bucket, output_key)
###Output
_____no_output_____
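###Markdown
Added illustration (not part of the original notebook): a quick way to confirm the three files landed in the output bucket is to list the objects under the prefix we just wrote to.
###Code
# Added sketch: list the objects uploaded above. Assumes `s3_client`, `output_bucket`
# and `output_prefix` are defined in the previous cell.
listing = s3_client.list_objects_v2(Bucket=output_bucket, Prefix=output_prefix)
for obj in listing.get("Contents", []):
    print(obj["Key"])
###Output
_____no_output_____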
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
container = sagemaker.image_uris.retrieve("xgboost", region, "1.3-1")
%%time
import boto3
from time import gmtime, strftime
job_name = f"DEMO-xgboost-regression-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print("Training job", job_name)
# Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = {
"AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "File"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": f"{output_bucket_path}/{output_prefix}/single-xgboost"},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 5},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.7",
"objective": "reg:linear",
"num_round": "50",
"verbosity": "2",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 3600},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/train",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/validation",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
],
}
client = boto3.client("sagemaker", region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
while status != "Completed" and status != "Failed":
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
###Output
_____no_output_____
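###Markdown
Added illustration (not part of the original notebook): the earlier notebooks plot the per-round validation RMSE; the minimal sketch below does the same here, assuming the SageMaker Python SDK's `TrainingJobAnalytics` is available in this kernel and the training job above has completed.
###Code
%matplotlib inline
from sagemaker.analytics import TrainingJobAnalytics

# Added sketch: pull the validation RMSE time series recorded for the training job and plot it.
metric_name = "validation:rmse"
metrics_df = TrainingJobAnalytics(training_job_name=job_name, metric_names=[metric_name]).dataframe()
ax = metrics_df.plot(kind="line", figsize=(12, 5), x="timestamp", y="value", style="b.", legend=False)
ax.set_ylabel(metric_name)
###Output
_____no_output_____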
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name = f"{job_name}-model"
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]
print(model_data)
primary_container = {"Image": container, "ModelDataUrl": model_data}
create_model_response = client.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response["ModelArn"])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = f"DEMO-XGBoostEndpointConfig-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m5.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print(f"Endpoint Config Arn: {create_endpoint_config_response['EndpointConfigArn']}")
###Output
_____no_output_____
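###Markdown
Added illustration (not part of the original notebook): the `ProductionVariants` list is where an A/B traffic split would be expressed. The sketch below only builds such a list for two hypothetical, already-registered models ("my-model-a" and "my-model-b" are placeholder names, not created in this notebook); traffic is routed in proportion to the variant weights, here roughly 90/10.
###Code
# Added sketch only: a two-variant ProductionVariants list for an A/B split.
# The model names are hypothetical placeholders; nothing is created here.
ab_production_variants = [
    {
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 1,
        "ModelName": "my-model-a",
        "InitialVariantWeight": 9,
        "VariantName": "VariantA",
    },
    {
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 1,
        "ModelName": "my-model-b",
        "InitialVariantWeight": 1,
        "VariantName": "VariantB",
    },
]
# Passing this list as ProductionVariants to client.create_endpoint_config would send
# about 90% of requests to VariantA and about 10% to VariantB.
###Output
_____no_output_____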
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = f'DEMO-XGBoostEndpoint-{strftime("%Y-%m-%d-%H-%M-%S", gmtime())}'
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
while status == "Creating":
print(f"Status: {status}")
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print(f"Arn: {resp['EndpointArn']}")
print(f"Status: {status}")
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client("runtime.sagemaker", region_name=region)
###Output
_____no_output_____
###Markdown
Download test data
###Code
FILE_TEST = "abalone.test"
s3 = boto3.client("s3")
s3.download_file(data_bucket, f"{data_prefix}/test/{FILE_TEST}", FILE_TEST)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = "abalone.single.test" # customize to your test file
with open(file_name, "r") as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType="text/x-libsvm", Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
result = [math.ceil(float(i)) for i in result]
label = payload.strip(" ").split()[0]
print(f"Label: {label}\nPrediction: {result[0]}")
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's do a whole batch and see how good the prediction accuracy is.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = "\n".join(data)
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset + batch_size < items:
results = do_predict(data[offset : (offset + batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write(".")
return arrs
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
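For reference (an added note, not in the original notebook): the quantity computed below is $\mathrm{MdAPE} = \operatorname{median}_i\big(|y_i - \hat{y}_i| / y_i\big)$, i.e. the median over the test set of the absolute prediction error relative to the true label, which is exactly the NumPy expression in the next cell.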
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, "r") as f:
payload = f.read().strip()
labels = [int(line.split(" ")[0]) for line in payload.split("\n")]
test_data = [line for line in payload.split("\n")]
preds = batch_predict(test_data, 100, endpoint_name, "text/x-libsvm")
print(
"\n Median Absolute Percent Error (MdAPE) = ",
np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)),
)
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
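###Markdown
Added note (not part of the original notebook): the endpoint configuration and the model created earlier also persist until deleted. A minimal cleanup sketch, assuming `endpoint_config_name` and `model_name` from the cells above are still defined:
###Code
# Added sketch: remove the remaining hosting resources created in this notebook.
client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
client.delete_model(ModelName=model_name)
###Output
_____no_output_____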
###Markdown
Appendix Data split and uploadThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
import io
import boto3
import random
def data_split(
FILE_DATA,
FILE_TRAIN,
FILE_VALIDATION,
FILE_TEST,
PERCENT_TRAIN,
PERCENT_VALIDATION,
PERCENT_TEST,
):
data = [l for l in open(FILE_DATA, "r")]
train_file = open(FILE_TRAIN, "w")
valid_file = open(FILE_VALIDATION, "w")
tests_file = open(FILE_TEST, "w")
num_of_data = len(data)
num_train = int((PERCENT_TRAIN / 100.0) * num_of_data)
num_valid = int((PERCENT_VALIDATION / 100.0) * num_of_data)
num_tests = int((PERCENT_TEST / 100.0) * num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[], [], []]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data) - 1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return (
boto3.Session(region_name=region)
.resource("s3")
.Bucket(bucket)
.Object(key)
.upload_fileobj(fobj)
)
def upload_to_s3(bucket, channel, filename):
fobj = open(filename, "rb")
key = f"{prefix}/{channel}"
url = f"s3://{bucket}/{key}/{filename}"
print(f"Writing to {url}")
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
s3 = boto3.client("s3")
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-abalone-default"
# Load the dataset
FILE_DATA = "abalone"
s3.download_file(
"sagemaker-sample-files", f"datasets/tabular/uci_abalone/abalone.libsvm", FILE_DATA
)
# split the downloaded data into train/test/validation files
FILE_TRAIN = "abalone.train"
FILE_VALIDATION = "abalone.validation"
FILE_TEST = "abalone.test"
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(
FILE_DATA,
FILE_TRAIN,
FILE_VALIDATION,
FILE_TEST,
PERCENT_TRAIN,
PERCENT_VALIDATION,
PERCENT_TEST,
)
# upload the files to the S3 bucket
upload_to_s3(bucket, "train", FILE_TRAIN)
upload_to_s3(bucket, "validation", FILE_VALIDATION)
upload_to_s3(bucket, "test", FILE_TEST)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
bucket = sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-xgboost-abalone-default'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket)
###Output
_____no_output_____
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
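###Markdown
As a quick sanity check, you can list what the helper just wrote under the prefix (a minimal sketch reusing the `bucket` and `prefix` variables defined above; recall that `upload_to_s3` writes each file to the key `<prefix>/<channel>`, so one object per channel is expected):
###Code
# Optional: confirm the train/validation/test objects landed under the expected prefix.
s3_check = boto3.client("s3")
listing = s3_check.list_objects_v2(Bucket=bucket, Prefix=prefix)
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
###Output
_____no_output_____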
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost', '0.90-1')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m5.2xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
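###Markdown
Once the job reaches `Completed`, the metrics the algorithm reported at the end of training can be read back from the job description. A minimal sketch, reusing `client` and `job_name` from the cell above:
###Code
# Print the final metrics recorded for the job (e.g. train:rmse, validation:rmse).
desc = client.describe_training_job(TrainingJobName=job_name)
for metric in desc.get("FinalMetricDataList", []):
    print(metric["MetricName"], metric["Value"])
###Output
_____no_output_____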
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m5.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
while status=='Creating':
print("Status: " + status)
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
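###Markdown
The polling loop above can also be replaced by the built-in boto3 waiter, which blocks until the endpoint is `InService` and raises if creation fails. A small sketch using the same `client` and `endpoint_name`:
###Code
# Equivalent wait using the SageMaker boto3 waiter instead of a manual sleep loop.
waiter = client.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=endpoint_name)
print("Endpoint in service:", endpoint_name)
###Output
_____no_output_____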
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker', region_name=region)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch and see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
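In symbols, this is $\mathrm{MdAPE} = \mathrm{median}_i\big(|y_i - \hat{y}_i| / y_i\big)$, where $y_i$ is the label from the first libsvm column and $\hat{y}_i$ is the rounded prediction returned by `batch_predict`.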
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the UCI data repository [1]. More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. Dataset is already processed and stored on S3. Scripts used for processing the data can be found in the [Appendix](Appendix). These include downloading the data, splitting into train, validation and test, and uploading to S3 bucket. >[1] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. SetupThis notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.Let's start by specifying:1. The S3 buckets and prefixes that you want to use for saving the model and where training data is located. This should be within the same region as the Notebook Instance, training, and hosting. 1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket where the training data is located.
# Feel free to specify a different bucket and prefix
data_bucket = f"jumpstart-cache-prod-{region}"
data_prefix = "1p-notebooks-datasets/abalone/libsvm"
data_bucket_path = f"s3://{data_bucket}"
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
output_bucket = sagemaker.Session().default_bucket()
output_prefix = "sagemaker/DEMO-xgboost-abalone-default"
output_bucket_path = f"s3://{output_bucket}"
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
container = sagemaker.image_uris.retrieve("xgboost", region, "1.2-1")
%%time
import boto3
from time import gmtime, strftime
job_name = f"DEMO-xgboost-regression-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print("Training job", job_name)
# Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = {
"AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "File"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": f"{output_bucket_path}/{output_prefix}/single-xgboost"},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 5},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.7",
"objective": "reg:linear",
"num_round": "50",
"verbosity": "2",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 3600},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{data_bucket_path}/{data_prefix}/train",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{data_bucket_path}/{data_prefix}/validation",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
],
}
client = boto3.client("sagemaker", region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
while status != "Completed" and status != "Failed":
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
###Output
_____no_output_____
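###Markdown
As an alternative to the sleep-and-poll loop above, boto3 also exposes a waiter for training jobs (a sketch; it raises a `WaiterError` if the job fails or the wait times out):
###Code
# Block until the training job reaches Completed or Stopped.
client.get_waiter("training_job_completed_or_stopped").wait(TrainingJobName=job_name)
print(client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"])
###Output
_____no_output_____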
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name = f"{job_name}-model"
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]
print(model_data)
primary_container = {"Image": container, "ModelDataUrl": model_data}
create_model_response = client.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response["ModelArn"])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = f"DEMO-XGBoostEndpointConfig-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m5.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print(f"Endpoint Config Arn: {create_endpoint_config_response['EndpointConfigArn']}")
###Output
_____no_output_____
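###Markdown
To make the A/B-testing point above concrete, a configuration that splits traffic across two models simply lists two production variants with relative weights. This is an illustrative sketch only — `model_name_b` is a hypothetical second model that is not created in this notebook, so the call is left commented out:
###Code
# Hypothetical 70/30 traffic split across two registered models (not executed here).
model_name_b = model_name  # placeholder; in practice this would be a second, different model
ab_variants = [
    {
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 1,
        "ModelName": model_name,
        "VariantName": "VariantA",
        "InitialVariantWeight": 7,  # weights are relative, so 7:3 yields roughly 70/30
    },
    {
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 1,
        "ModelName": model_name_b,
        "VariantName": "VariantB",
        "InitialVariantWeight": 3,
    },
]
# client.create_endpoint_config(EndpointConfigName="DEMO-ab-endpoint-config", ProductionVariants=ab_variants)
###Output
_____no_output_____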
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = f'DEMO-XGBoostEndpoint-{strftime("%Y-%m-%d-%H-%M-%S", gmtime())}'
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
while status == "Creating":
print(f"Status: {status}")
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print(f"Arn: {resp['EndpointArn']}")
print(f"Status: {status}")
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client("runtime.sagemaker", region_name=region)
###Output
_____no_output_____
###Markdown
Download test data
###Code
FILE_TEST = "abalone.test"
s3 = boto3.client("s3")
s3.download_file(data_bucket, f"{data_prefix}/test/{FILE_TEST}", FILE_TEST)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = "abalone.single.test" # customize to your test file
with open(file_name, "r") as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType="text/x-libsvm", Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
result = [math.ceil(float(i)) for i in result]
label = payload.strip(" ").split()[0]
print(f"Label: {label}\nPrediction: {result[0]}")
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch and see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = "\n".join(data)
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset + batch_size < items:
results = do_predict(data[offset : (offset + batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write(".")
return arrs
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, "r") as f:
payload = f.read().strip()
labels = [int(line.split(" ")[0]) for line in payload.split("\n")]
test_data = [line for line in payload.split("\n")]
preds = batch_predict(test_data, 100, endpoint_name, "text/x-libsvm")
print(
"\n Median Absolute Percent Error (MdAPE) = ",
np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)),
)
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
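###Markdown
Deleting the endpoint does not remove the endpoint configuration or the model object created earlier. If you want a full cleanup, something like the following works (a sketch reusing the names defined above):
###Code
# Remove the remaining hosting artifacts created in this notebook.
client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
client.delete_model(ModelName=model_name)
###Output
_____no_output_____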
###Markdown
Appendix Data split and uploadThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
import io
import boto3
import random
def data_split(
FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST
):
data = [l for l in open(FILE_DATA, "r")]
train_file = open(FILE_TRAIN, "w")
valid_file = open(FILE_VALIDATION, "w")
tests_file = open(FILE_TEST, "w")
num_of_data = len(data)
num_train = int((PERCENT_TRAIN / 100.0) * num_of_data)
num_valid = int((PERCENT_VALIDATION / 100.0) * num_of_data)
num_tests = int((PERCENT_TEST / 100.0) * num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[], [], []]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data) - 1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return (
boto3.Session(region_name=region).resource("s3").Bucket(bucket).Object(key).upload_fileobj(fobj)
)
def upload_to_s3(bucket, channel, filename):
fobj = open(filename, "rb")
key = f"{prefix}/{channel}"
url = f"s3://{bucket}/{key}/{filename}"
print(f"Writing to {url}")
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
s3 = boto3.client("s3")
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-abalone-default"
# Load the dataset
FILE_DATA = "abalone"
s3.download_file("sagemaker-sample-files", f"datasets/tabular/uci_abalone/abalone.libsvm", FILE_DATA)
# split the downloaded data into train/test/validation files
FILE_TRAIN = "abalone.train"
FILE_VALIDATION = "abalone.validation"
FILE_TEST = "abalone.test"
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(
FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST
)
# upload the files to the S3 bucket
upload_to_s3(bucket, "train", FILE_TRAIN)
upload_to_s3(bucket, "validation", FILE_VALIDATION)
upload_to_s3(bucket, "test", FILE_TEST)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the UCI data repository [1]. More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. Dataset is already processed and stored on S3. Scripts used for processing the data can be found in the [Appendix](Appendix). These include downloading the data, splitting into train, validation and test, and uploading to S3 bucket. >[1] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. SetupThis notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.Let's start by specifying:1. The S3 buckets and prefixes that you want to use for saving the model and where training data is located. This should be within the same region as the Notebook Instance, training, and hosting. 1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
s3_client = boto3.client("s3")
# S3 bucket where the training data is located.
data_bucket = f"sagemaker-sample-files"
data_prefix = "datasets/tabular/uci_abalone"
data_bucket_path = f"s3://{data_bucket}"
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
output_bucket = sagemaker.Session().default_bucket()
output_prefix = "sagemaker/DEMO-xgboost-abalone-default"
output_bucket_path = f"s3://{output_bucket}"
for data_category in ["train", "test", "validation"]:
data_key = "{0}/{1}/abalone.{1}".format(data_prefix, data_category)
output_key = "{0}/{1}/abalone.{1}".format(output_prefix, data_category)
data_filename = "abalone.{}".format(data_category)
s3_client.download_file(data_bucket, data_key, data_filename)
s3_client.upload_file(data_filename, output_bucket, output_key)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
container = sagemaker.image_uris.retrieve("xgboost", region, "1.2-1")
%%time
import boto3
from time import gmtime, strftime
job_name = f"DEMO-xgboost-regression-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print("Training job", job_name)
# Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = {
"AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "File"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": f"{output_bucket_path}/{output_prefix}/single-xgboost"},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 5},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.7",
"objective": "reg:linear",
"num_round": "50",
"verbosity": "2",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 3600},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/train",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/validation",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
],
}
client = boto3.client("sagemaker", region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
while status != "Completed" and status != "Failed":
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
###Output
_____no_output_____
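###Markdown
For reference, the same training job can be expressed with the higher-level SageMaker Python SDK instead of the low-level `create_training_job` call. This is a sketch only and is not executed here; it assumes the same `container`, `role`, and S3 locations defined above:
###Code
# Equivalent training expressed with the SageMaker Python SDK Estimator (sketch, not run).
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

xgb = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m5.2xlarge",
    volume_size=5,
    max_run=3600,
    output_path=f"{output_bucket_path}/{output_prefix}/single-xgboost",
)
xgb.set_hyperparameters(
    max_depth=5,
    eta=0.2,
    gamma=4,
    min_child_weight=6,
    subsample=0.7,
    objective="reg:linear",
    num_round=50,
    verbosity=2,
)
channels = {
    "train": TrainingInput(f"{output_bucket_path}/{output_prefix}/train", content_type="libsvm"),
    "validation": TrainingInput(
        f"{output_bucket_path}/{output_prefix}/validation", content_type="libsvm"
    ),
}
# xgb.fit(channels)  # would launch a job equivalent to the one created above
###Output
_____no_output_____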
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name = f"{job_name}-model"
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]
print(model_data)
primary_container = {"Image": container, "ModelDataUrl": model_data}
create_model_response = client.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response["ModelArn"])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = f"DEMO-XGBoostEndpointConfig-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m5.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print(f"Endpoint Config Arn: {create_endpoint_config_response['EndpointConfigArn']}")
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = f'DEMO-XGBoostEndpoint-{strftime("%Y-%m-%d-%H-%M-%S", gmtime())}'
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
while status == "Creating":
print(f"Status: {status}")
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print(f"Arn: {resp['EndpointArn']}")
print(f"Status: {status}")
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client("runtime.sagemaker", region_name=region)
###Output
_____no_output_____
###Markdown
Download test data
###Code
FILE_TEST = "abalone.test"
s3 = boto3.client("s3")
s3.download_file(data_bucket, f"{data_prefix}/test/{FILE_TEST}", FILE_TEST)
###Output
_____no_output_____
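###Markdown
Before invoking the endpoint it helps to look at one record: each libsvm line is the numeric label followed by `index:value` feature pairs. A quick peek at the file downloaded above:
###Code
# Show the first test record in libsvm format: "<label> 1:<feat1> 2:<feat2> ...".
with open(FILE_TEST, "r") as f:
    print(f.readline().strip())
###Output
_____no_output_____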
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = "abalone.single.test" # customize to your test file
with open(file_name, "r") as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType="text/x-libsvm", Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
result = [math.ceil(float(i)) for i in result]
label = payload.strip(" ").split()[0]
print(f"Label: {label}\nPrediction: {result[0]}")
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch and see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = "\n".join(data)
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset + batch_size < items:
results = do_predict(data[offset : (offset + batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write(".")
return arrs
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, "r") as f:
payload = f.read().strip()
labels = [int(line.split(" ")[0]) for line in payload.split("\n")]
test_data = [line for line in payload.split("\n")]
preds = batch_predict(test_data, 100, endpoint_name, "text/x-libsvm")
print(
"\n Median Absolute Percent Error (MdAPE) = ",
np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)),
)
###Output
_____no_output_____
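###Markdown
MdAPE is robust to outliers; if you also want a feel for the absolute error scale, a couple of extra summaries over the same arrays are cheap to compute (a small sketch using the `labels` and `preds` from the cell above):
###Code
# Complementary error summaries on the same labels/predictions.
labels_arr = np.array(labels)
preds_arr = np.array(preds)
abs_err = np.abs(labels_arr - preds_arr)
print("Mean absolute error:", abs_err.mean())
print("90th percentile absolute error:", np.percentile(abs_err, 90))
###Output
_____no_output_____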
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Appendix Data split and uploadThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
import io
import boto3
import random
def data_split(
FILE_DATA,
FILE_TRAIN,
FILE_VALIDATION,
FILE_TEST,
PERCENT_TRAIN,
PERCENT_VALIDATION,
PERCENT_TEST,
):
data = [l for l in open(FILE_DATA, "r")]
train_file = open(FILE_TRAIN, "w")
valid_file = open(FILE_VALIDATION, "w")
tests_file = open(FILE_TEST, "w")
num_of_data = len(data)
num_train = int((PERCENT_TRAIN / 100.0) * num_of_data)
num_valid = int((PERCENT_VALIDATION / 100.0) * num_of_data)
num_tests = int((PERCENT_TEST / 100.0) * num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[], [], []]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data) - 1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return (
boto3.Session(region_name=region)
.resource("s3")
.Bucket(bucket)
.Object(key)
.upload_fileobj(fobj)
)
def upload_to_s3(bucket, channel, filename):
fobj = open(filename, "rb")
key = f"{prefix}/{channel}"
url = f"s3://{bucket}/{key}/{filename}"
print(f"Writing to {url}")
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
s3 = boto3.client("s3")
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-abalone-default"
# Load the dataset
FILE_DATA = "abalone"
s3.download_file(
"sagemaker-sample-files", f"datasets/tabular/uci_abalone/abalone.libsvm", FILE_DATA
)
# split the downloaded data into train/test/validation files
FILE_TRAIN = "abalone.train"
FILE_VALIDATION = "abalone.validation"
FILE_TEST = "abalone.test"
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(
FILE_DATA,
FILE_TRAIN,
FILE_VALIDATION,
FILE_TEST,
PERCENT_TRAIN,
PERCENT_VALIDATION,
PERCENT_TEST,
)
# upload the files to the S3 bucket
upload_to_s3(bucket, "train", FILE_TRAIN)
upload_to_s3(bucket, "validation", FILE_VALIDATION)
upload_to_s3(bucket, "test", FILE_TEST)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. --- SetupThis notebook was created and tested on an ml.m4.4xlarge notebook instance.Let's start by specifying:1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
%%time
import os
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket
prefix = 'sagemaker/DEMO-xgboost-regression'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)
###Output
_____no_output_____
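###Markdown
If the bucket named above does not exist yet, it can be created with boto3. A sketch — it assumes you have replaced `<bucket-name>` with a valid, globally unique name, and it special-cases `us-east-1`, which rejects a `LocationConstraint`:
###Code
# Create the S3 bucket referenced above if it does not already exist.
s3_resource = boto3.resource("s3")
if region == "us-east-1":
    s3_resource.create_bucket(Bucket=bucket)
else:
    s3_resource.create_bucket(
        Bucket=bucket, CreateBucketConfiguration={"LocationConstraint": region}
    )
###Output
_____no_output_____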
###Markdown
Fetching the datasetThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session().resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(boto3.Session().region_name, 'xgboost')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m4.4xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker')
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE and writes it to the CloudWatch logs on the data passed to the "validation" channel. Plotting evaluation metricsEvaluation metrics for the completed training job are available in CloudWatch. We can pull the area under curve metric for the validation data set and plot it to see the performance of the model over time.
###Code
%matplotlib inline
from sagemaker.analytics import TrainingJobAnalytics
metric_name = 'validation:rmse'
metrics_dataframe = TrainingJobAnalytics(training_job_name=job_name, metric_names=[metric_name]).dataframe()
ax = metrics_dataframe.plot(kind='line', figsize=(12,5), x='timestamp', y='value', style='b.', legend=False)
ax.set_ylabel(metric_name);
###Output
_____no_output_____
###Markdown
Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
###Code
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
###Code
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
###Code
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
while status=='Creating':
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
###Code
runtime_client = boto3.client('runtime.sagemaker')
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's run a whole batch and see how accurate the predictions are.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Regression with Amazon SageMaker XGBoost algorithm_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_------ Contents1. [Introduction](Introduction)2. [Setup](Setup) 1. [Fetching the dataset](Fetching-the-dataset) 2. [Data Ingestion](Data-ingestion)3. [Training the XGBoost model](Training-the-XGBoost-model) 1. [Plotting evaluation metrics](Plotting-evaluation-metrics)4. [Set up hosting for the model](Set-up-hosting-for-the-model) 1. [Import model into hosting](Import-model-into-hosting) 2. [Create endpoint configuration](Create-endpoint-configuration) 3. [Create endpoint](Create-endpoint)5. [Validate the model for use](Validate-the-model-for-use)--- IntroductionThis notebook demonstrates the use of Amazon SageMaker’s implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the UCI data repository [1]. More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements. Dataset is already processed and stored on S3. Scripts used for processing the data can be found in the [Appendix](Appendix). These include downloading the data, splitting into train, validation and test, and uploading to S3 bucket. >[1] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. SetupThis notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.Let's start by specifying:1. The S3 buckets and prefixes that you want to use for saving the model and where training data is located. This should be within the same region as the Notebook Instance, training, and hosting. 1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s).
###Code
!pip3 install -U sagemaker
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
s3_client = boto3.client("s3")
# S3 bucket where the training data is located.
data_bucket = f"sagemaker-sample-files"
data_prefix = "datasets/tabular/uci_abalone"
data_bucket_path = f"s3://{data_bucket}"
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
output_bucket = sagemaker.Session().default_bucket()
output_prefix = "sagemaker/DEMO-xgboost-abalone-default"
output_bucket_path = f"s3://{output_bucket}"
for data_category in ["train", "test", "validation"]:
data_key = "{0}/{1}/abalone.{1}".format(data_prefix, data_category)
output_key = "{0}/{1}/abalone.{1}".format(output_prefix, data_category)
data_filename = "abalone.{}".format(data_category)
s3_client.download_file(data_bucket, data_key, data_filename)
s3_client.upload_file(data_filename, output_bucket, output_key)
###Output
_____no_output_____
###Markdown
Training the XGBoost modelAfter setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
###Code
container = sagemaker.image_uris.retrieve("xgboost", region, "1.5-1")
%%time
import boto3
from time import gmtime, strftime
job_name = f"DEMO-xgboost-regression-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print("Training job", job_name)
# Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = {
"AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "File"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": f"{output_bucket_path}/{output_prefix}/single-xgboost"},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 5},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.7",
"objective": "reg:linear",
"num_round": "50",
"verbosity": "2",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 3600},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/train",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": f"{output_bucket_path}/{output_prefix}/validation",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "libsvm",
"CompressionType": "None",
},
],
}
client = boto3.client("sagemaker", region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
while status != "Completed" and status != "Failed":
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
###Output
_____no_output_____
###Markdown
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm actually calculates RMSE on the data passed to the "validation" channel and writes it to the CloudWatch logs. Set up hosting for the modelIn order to set up hosting, we have to import the model from training to hosting. Import model into hostingRegister the model with hosting. This allows the flexibility of importing models trained elsewhere.
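As an optional check once training has finished, the final metrics the algorithm reported (typically including `validation:rmse` for the built-in XGBoost algorithm) can be read back from the training job description. This is only a sketch and assumes `client` and `job_name` from the cells above:
```
# Sketch: print the final metrics recorded for the completed training job
info = client.describe_training_job(TrainingJobName=job_name)
for metric in info.get("FinalMetricDataList", []):
    print(metric["MetricName"], metric["Value"])
```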
###Code
%%time
import boto3
from time import gmtime, strftime
model_name = f"{job_name}-model"
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]
print(model_data)
primary_container = {"Image": container, "ModelDataUrl": model_data}
create_model_response = client.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response["ModelArn"])
###Output
_____no_output_____
###Markdown
Create endpoint configurationSageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
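For illustration only, an endpoint configuration that splits traffic across two models could look like the sketch below; `second_model_name` is hypothetical and is not created anywhere in this notebook:
```
# Hypothetical A/B split across two models (80/20 traffic)
ab_variants = [
    {"ModelName": model_name, "VariantName": "VariantA", "InstanceType": "ml.m5.xlarge",
     "InitialInstanceCount": 1, "InitialVariantWeight": 0.8},
    {"ModelName": "second_model_name", "VariantName": "VariantB", "InstanceType": "ml.m5.xlarge",
     "InitialInstanceCount": 1, "InitialVariantWeight": 0.2},
]
# client.create_endpoint_config(EndpointConfigName="DEMO-XGBoostEndpointConfig-ab", ProductionVariants=ab_variants)
```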
###Code
from time import gmtime, strftime
endpoint_config_name = f"DEMO-XGBoostEndpointConfig-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m5.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print(f"Endpoint Config Arn: {create_endpoint_config_response['EndpointConfigArn']}")
###Output
_____no_output_____
###Markdown
Create endpointLastly, the customer creates the endpoint that serves up the model, by specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
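As an alternative to the manual polling loop in the next cell, boto3 also exposes a waiter that blocks until the endpoint is in service; this sketch assumes `client` and the `endpoint_name` defined below:
```
# Equivalent to the polling loop: block until the endpoint reaches the InService state
waiter = client.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=endpoint_name)
```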
###Code
%%time
import time
endpoint_name = f'DEMO-XGBoostEndpoint-{strftime("%Y-%m-%d-%H-%M-%S", gmtime())}'
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
while status == "Creating":
print(f"Status: {status}")
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print(f"Arn: {resp['EndpointArn']}")
print(f"Status: {status}")
###Output
_____no_output_____
###Markdown
Validate the model for useFinally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate predictions from the trained model using that endpoint.
###Code
runtime_client = boto3.client("runtime.sagemaker", region_name=region)
###Output
_____no_output_____
###Markdown
Download test data
###Code
FILE_TEST = "abalone.test"
s3 = boto3.client("s3")
s3.download_file(data_bucket, f"{data_prefix}/test/{FILE_TEST}", FILE_TEST)
###Output
_____no_output_____
###Markdown
Start with a single prediction.
###Code
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = "abalone.single.test" # customize to your test file
with open(file_name, "r") as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType="text/x-libsvm", Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.split(",")
result = [math.ceil(float(i)) for i in result]
label = payload.strip(" ").split()[0]
print(f"Label: {label}\nPrediction: {result[0]}")
###Output
_____no_output_____
###Markdown
OK, a single prediction works. Let's do a whole batch to see how good the prediction accuracy is.
###Code
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = "\n".join(data)
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Body=payload
)
result = response["Body"].read()
result = result.decode("utf-8")
result = result.strip("\n").split("\n")
preds = [float(num) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset + batch_size < items:
results = do_predict(data[offset : (offset + batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write(".")
return arrs
###Output
_____no_output_____
###Markdown
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
###Code
%%time
import json
import numpy as np
with open(FILE_TEST, "r") as f:
payload = f.read().strip()
labels = [int(line.split(" ")[0]) for line in payload.split("\n")]
test_data = [line for line in payload.split("\n")]
preds = batch_predict(test_data, 100, endpoint_name, "text/x-libsvm")
print(
"\n Median Absolute Percent Error (MdAPE) = ",
np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)),
)
###Output
_____no_output_____
###Markdown
Delete EndpointOnce you are done using the endpoint, you can use the following to delete it.
###Code
client.delete_endpoint(EndpointName=endpoint_name)
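# Optional extra cleanup (a sketch, not part of the original notebook): the endpoint
# configuration and model created above also persist until deleted explicitly.
client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
client.delete_model(ModelName=model_name)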
###Output
_____no_output_____
###Markdown
Appendix Data split and uploadThe following methods split the data into train/test/validation datasets and upload the files to S3.
###Code
import io
import boto3
import random
def data_split(
FILE_DATA,
FILE_TRAIN,
FILE_VALIDATION,
FILE_TEST,
PERCENT_TRAIN,
PERCENT_VALIDATION,
PERCENT_TEST,
):
data = [l for l in open(FILE_DATA, "r")]
train_file = open(FILE_TRAIN, "w")
valid_file = open(FILE_VALIDATION, "w")
tests_file = open(FILE_TEST, "w")
num_of_data = len(data)
num_train = int((PERCENT_TRAIN / 100.0) * num_of_data)
num_valid = int((PERCENT_VALIDATION / 100.0) * num_of_data)
num_tests = int((PERCENT_TEST / 100.0) * num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[], [], []]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data) - 1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return (
boto3.Session(region_name=region)
.resource("s3")
.Bucket(bucket)
.Object(key)
.upload_fileobj(fobj)
)
def upload_to_s3(bucket, channel, filename):
fobj = open(filename, "rb")
key = f"{prefix}/{channel}"
url = f"s3://{bucket}/{key}/{filename}"
print(f"Writing to {url}")
write_to_s3(fobj, bucket, key)
###Output
_____no_output_____
###Markdown
Data ingestionNext, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
###Code
%%time
s3 = boto3.client("s3")
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-abalone-default"
# Load the dataset
FILE_DATA = "abalone"
s3.download_file(
"sagemaker-sample-files", f"datasets/tabular/uci_abalone/abalone.libsvm", FILE_DATA
)
# split the downloaded data into train/test/validation files
FILE_TRAIN = "abalone.train"
FILE_VALIDATION = "abalone.validation"
FILE_TEST = "abalone.test"
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(
FILE_DATA,
FILE_TRAIN,
FILE_VALIDATION,
FILE_TEST,
PERCENT_TRAIN,
PERCENT_VALIDATION,
PERCENT_TEST,
)
# upload the files to the S3 bucket
upload_to_s3(bucket, "train", FILE_TRAIN)
upload_to_s3(bucket, "validation", FILE_VALIDATION)
upload_to_s3(bucket, "test", FILE_TEST)
###Output
_____no_output_____ |
23 - Python for Finance/2_Calculating and Comparing Rates of Return in Python/5_Calculating a Security's Rate of Return in Python - Simple Returns - Part II (3:28)/Simple Returns - Part II - Solution_Yahoo_Py3.ipynb | ###Markdown
Simple Returns - Part II *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* $$\frac{P_1 - P_0}{P_0} = \frac{P_1}{P_0} - 1$$
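As a quick numeric check of the identity above with made-up prices, both forms give the same simple return:
```
P0, P1 = 100.0, 105.0
(P1 - P0) / P0, P1 / P0 - 1   # both evaluate to 0.05
```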
###Code
import numpy as np
from pandas_datareader import data as wb
MSFT = wb.DataReader('MSFT', data_source='yahoo', start='2000-1-1')
MSFT['simple_return'] = (MSFT['Adj Close'] / MSFT['Adj Close'].shift(1)) - 1
print (MSFT['simple_return'])
###Output
_____no_output_____
###Markdown
Plot the simple returns on a graph.
###Code
import matplotlib.pyplot as plt
MSFT['simple_return'].plot(figsize=(8, 5))
plt.show()
###Output
_____no_output_____
###Markdown
Calculate the average daily return.
###Code
avg_returns_d = MSFT['simple_return'].mean()
avg_returns_d
###Output
_____no_output_____
###Markdown
Estimate the average annual return.
###Code
avg_returns_a = MSFT['simple_return'].mean() * 250
avg_returns_a
###Output
_____no_output_____
###Markdown
Print the percentage version of the result as a float with 2 digits after the decimal point.
###Code
print (str(round(avg_returns_a, 4) * 100) + ' %')
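# Alternative (sketch): string formatting guarantees exactly two digits after the decimal point
print('{:.2f} %'.format(avg_returns_a * 100))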
###Output
_____no_output_____ |
notebooks/exploratory/maternal_mortality.ipynb | ###Markdown
Format and explore maternal mortality data
###Code
import os
import warnings
import camelot
import geopandas
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
warnings.filterwarnings('ignore')
# Load and format mortality data
data = camelot.read_pdf('../data/MMR-2018-State-Data-508.pdf', flavor='stream', pages='all', strip_text='†')
df = data[1].df
df = df.iloc[3:]
df = df[[0, 1, 2, 3, 4]]
df.rename(columns={
0: 'State',
1: 'Deaths',
2: 'DeathRate',
3: 'LowerCI',
4: 'UpperCI'
}, inplace=True)
df = df[df['Deaths'] != '']
df.to_csv('../data/maternal_mortality.csv', index=False)
# Load geography
states = geopandas.read_file('../data/cb_2020_us_state_500k/cb_2020_us_state_500k.shp')
# Remove extras (removed Alaska and Hawaii just for plotting)
extra = ['American Samoa', 'Commonwealth of the Northern Mariana Islands', 'Puerto Rico',
'United States Virgin Islands', 'Guam', 'Alaska', 'Hawaii']
states = states[states['NAME'].isin(extra) == False]
# Merge data sets
df2 = states.copy()
df2 = df2.merge(
df,
how='right',
left_on='NAME',
right_on='State'
)
# Take a look at death counts by state
fig, ax = plt.subplots(figsize=(12, 4))
states.boundary.plot(edgecolor='k', linewidth=0.5, ax=ax)
df2['Deaths'] = df2['Deaths'].astype(int)
df2.plot(cmap='magma', column='Deaths', legend=True, legend_kwds={'label': 'Deaths'}, ax=ax);
# Take a look at death rate by state
fig, ax = plt.subplots(figsize=(12, 4))
states.boundary.plot(edgecolor='k', linewidth=0.5, ax=ax)
df2['DeathRate'] = df2['DeathRate'].astype(float)
df2.plot(cmap='magma', column='DeathRate', legend=True, legend_kwds={'label': 'DeathRate'}, ax=ax);
###Output
_____no_output_____ |
04.CNN/CNN_fromFile.ipynb | ###Markdown
CNN Example From File by Keras* Trains a simple convnet on the MNIST dataset read from files (3 classes, one folder per class). * Gets to ~0.98 test accuracy after 10 epochs* Under 1 second per epoch on a GPU.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import numpy as np
from os import listdir
from os.path import isfile, join
from pylab import *
from numpy import *
def getFolder(thePath,isFile=True):
return [f for f in listdir(thePath) if isFile == isfile(join(thePath, f)) ]
def getImagesAndLabels(tPath,isGray=True):
labels=getFolder(tPath,False)
tImgDic={f:getFolder(join(tPath,f)) for f in labels}
tImages,tLabels=None,None
ks=sorted(list(tImgDic.keys()))
oh=np.identity(len(ks))
for label in tImgDic.keys():
for image in tImgDic[label]:
le=np.array([float(label)],ndmin=1)
img_color=imread(join(tPath,label,image))
if isGray:
img=img_color[:,:,1]
img1d=img.reshape([1,-1])
if tImages is None:
tImages, tLabels =img1d, le
else:
tImages,tLabels = np.concatenate((tImages,img1d),axis=0), np.append(tLabels,le ,axis=0)
return (tImages,tLabels)
!wget https://raw.githubusercontent.com/Finfra/AI_Vision/master/data/MNIST_Simple.zip
!unzip MNIST_Simple.zip
tPath='MNIST_Simple/train/'
train_images,train_labels=getImagesAndLabels(tPath)
tPath='MNIST_Simple/test/'
test_images,test_labels=getImagesAndLabels(tPath)
train_images = train_images.reshape((-1, 28, 28, 1))
test_images = test_images.reshape((-1, 28, 28, 1))
train_images, test_images = train_images / 255.0, test_images / 255.0
print("Shape of Train_images = {}".format(train_images.shape))
import matplotlib.pyplot as plt
plt.gray()
plt.axis("off")
plt.title(" Label = "+str(train_labels[1000]) )
plt.imshow(train_images[1000].reshape(28, 28))
# i0=train_images[0]
# print(np.max(i0),np.min(i0),i0.shape,train_labels.shape,train_labels[0])
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=10)
score = model.evaluate(test_images, test_labels, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
_____no_output_____ |
tutorials/intro/Intro_Tutorial_3.ipynb | ###Markdown
Intro. to Snorkel: Extracting Spouse Relations from the News Part III: Training an End Extraction ModelIn this final section of the tutorial, we'll use the noisy training labels we generated in the last tutorial part to train our end extraction model. For this tutorial, we will be training a Bi-LSTM, a state-of-the-art deep neural network implemented in [TensorFlow](https://www.tensorflow.org/).
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE
# Note that this is necessary for parallel execution amongst other things...
# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'
from snorkel import SnorkelSession
session = SnorkelSession()
###Output
_____no_output_____
###Markdown
We repeat our definition of the `Spouse` `Candidate` subclass:
###Code
from snorkel.models import candidate_subclass
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
###Output
_____no_output_____
###Markdown
We reload the probabilistic training labels:
###Code
from snorkel.annotations import load_marginals
train_marginals = load_marginals(session, split=0)
###Output
_____no_output_____
###Markdown
We also reload the candidates:
###Code
train_cands = session.query(Spouse).filter(Spouse.split == 0).order_by(Spouse.id).all()
dev_cands = session.query(Spouse).filter(Spouse.split == 1).order_by(Spouse.id).all()
test_cands = session.query(Spouse).filter(Spouse.split == 2).order_by(Spouse.id).all()
###Output
_____no_output_____
###Markdown
Finally, we load gold labels for evaluation:
###Code
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
L_gold_test = load_gold_labels(session, annotator_name='gold', split=2)
###Output
_____no_output_____
###Markdown
Now we can set up our discriminative model. Here we specify the model and learning hyperparameters. They can also be set automatically using a search based on the dev set with a [GridSearch](https://github.com/HazyResearch/snorkel/blob/master/snorkel/learning/utils.py) object.
###Code
from snorkel.learning.pytorch import LSTM
train_kwargs = {
'lr': 0.01,
'embedding_dim': 50,
'hidden_dim': 50,
'n_epochs': 10,
'dropout': 0.25,
'seed': 1701
}
lstm = LSTM(n_threads=None)
lstm.train(train_cands, train_marginals, X_dev=dev_cands, Y_dev=L_gold_dev, **train_kwargs)
###Output
[LSTM] Training model
[LSTM] n_train=16004 #epochs=10 batch size=256
[LSTM] Epoch 1 (33.52s) Average loss=0.618173 Dev F1=0.00
[LSTM] Epoch 2 (70.08s) Average loss=0.602743 Dev F1=10.20
[LSTM] Epoch 3 (106.21s) Average loss=0.595686 Dev F1=12.67
[LSTM] Epoch 4 (141.95s) Average loss=0.590128 Dev F1=22.59
[LSTM] Epoch 5 (177.92s) Average loss=0.586008 Dev F1=22.02
[LSTM] Epoch 6 (214.17s) Average loss=0.585437 Dev F1=13.82
[LSTM] Epoch 7 (249.03s) Average loss=0.584641 Dev F1=23.36
[LSTM] Epoch 8 (284.29s) Average loss=0.581280 Dev F1=23.73
[LSTM] Epoch 9 (319.53s) Average loss=0.579671 Dev F1=25.12
checkpoints/LSTM
[LSTM] Model saved as <LSTM>
[LSTM] Epoch 10 (355.27s) Average loss=0.578497 Dev F1=25.00
[LSTM] Training done (357.58s)
[LSTM] Loaded model <LSTM>
###Markdown
Now, we get the precision, recall, and F1 score from the discriminative model:
###Code
p, r, f1 = lstm.score(test_cands, L_gold_test)
print("Prec: {0:.3f}, Recall: {1:.3f}, F1 Score: {2:.3f}".format(p, r, f1))
###Output
Prec: 0.197, Recall: 0.657, F1 Score: 0.303
###Markdown
We can also get the candidates returned in sets (true positives, false positives, true negatives, false negatives) as well as a more detailed score report:
###Code
tp, fp, tn, fn = lstm.error_analysis(session, test_cands, L_gold_test)
###Output
========================================
Scores (Un-adjusted)
========================================
Pos. class accuracy: 0.676
Neg. class accuracy: 0.883
Precision 0.213
Recall 0.676
F1 0.324
----------------------------------------
TP: 73 | FP: 270 | TN: 2046 | FN: 35
========================================
###Markdown
Note that if this is the final test set that you will be reporting final numbers on, to avoid biasing results you should not inspect results. However, you can run the model on your _development set_ and, as we did in the previous part with the generative labeling function model, inspect examples to do error analysis. You can also improve performance substantially by increasing the number of training epochs! Finally, we can save the predictions of the model on the test set back to the database. (This also works for other candidate sets, such as unlabeled candidates.)
###Code
lstm.save_marginals(session, test_cands)
###Output
Saved 2424 marginals
###Markdown
Intro. to Snorkel: Extracting Spouse Relations from the News Part III: Training an End Extraction ModelIn this final section of the tutorial, we'll use the noisy training labels we generated in the last tutorial part to train our end extraction model. For this tutorial, we will be training a Bi-LSTM, a state-of-the-art deep neural network implemented in [TensorFlow](https://www.tensorflow.org/).
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import numpy as np
# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE
# Note that this is necessary for parallel execution amongst other things...
# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'
from snorkel import SnorkelSession
session = SnorkelSession()
###Output
_____no_output_____
###Markdown
We repeat our definition of the `Spouse` `Candidate` subclass:
###Code
from snorkel.models import candidate_subclass
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
###Output
_____no_output_____
###Markdown
We reload the probabilistic training labels:
###Code
from snorkel.annotations import load_marginals
train_marginals = load_marginals(session, split=0)
###Output
_____no_output_____
###Markdown
We also reload the candidates:
###Code
train_cands = session.query(Spouse).filter(Spouse.split == 0).order_by(Spouse.id).all()
dev_cands = session.query(Spouse).filter(Spouse.split == 1).order_by(Spouse.id).all()
test_cands = session.query(Spouse).filter(Spouse.split == 2).order_by(Spouse.id).all()
###Output
_____no_output_____
###Markdown
Finally, we load gold labels for evaluation:
###Code
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
L_gold_test = load_gold_labels(session, annotator_name='gold', split=2)
###Output
_____no_output_____
###Markdown
Now we can set up our discriminative model. Here we specify the model and learning hyperparameters. They can also be set automatically using a search based on the dev set with a [GridSearch](https://github.com/HazyResearch/snorkel/blob/master/snorkel/learning/utils.py) object.
###Code
unid = [i for i,x in enumerate(L_gold_dev.toarray()) if x == 0]
dev_cleaned = [x for i,x in enumerate(dev_cands) if i not in unid]
dev_labels_cleaned = L_gold_dev.toarray().tolist()
dev_labels_cleaned = np.array([x for i,x in enumerate(dev_labels_cleaned) if i not in unid])
dev_labels_cleaned[dev_labels_cleaned==-1] = 0
train_set = train_cands.copy()
train_set.extend(dev_cleaned)
full_train_labels = list(train_marginals).copy()
full_train_labels.extend(dev_labels_cleaned)
full_train_labels = np.array(full_train_labels)
full_train_labels.shape
from snorkel.learning.pytorch import LSTM
train_kwargs = {
'lr': 0.01,
'embedding_dim': 40,
'hidden_dim': 40,
'n_epochs': 20,
'dropout': 0.25,
'seed': 1701,
'num_layers': 2,
}
lstm = LSTM(n_threads=None)
lstm.train(train_set, full_train_labels, X_dev=dev_cands, Y_dev=L_gold_dev, **train_kwargs)
###Output
_____no_output_____
###Markdown
Now, we get the precision, recall, and F1 score from the discriminative model:
###Code
p, r, f1 = lstm.score(test_cands, L_gold_test)
print("Prec: {0:.3f}, Recall: {1:.3f}, F1 Score: {2:.3f}".format(p, r, f1))
###Output
Prec: 0.398, Recall: 0.587, F1 Score: 0.474
###Markdown
We can also get the candidates returned in sets (true positives, false positives, true negatives, false negatives) as well as a more detailed score report:
###Code
tp, fp, tn, fn = lstm.error_analysis(session, test_cands, L_gold_test)
###Output
========================================
Scores (Un-adjusted)
========================================
Pos. class accuracy: 0.587
Neg. class accuracy: 0.922
Precision 0.398
Recall 0.587
F1 0.474
----------------------------------------
TP: 128 | FP: 194 | TN: 2289 | FN: 90
========================================
###Markdown
Note that if this is the final test set that you will be reporting final numbers on, to avoid biasing results you should not inspect results. However, you can run the model on your _development set_ and, as we did in the previous part with the generative labeling function model, inspect examples to do error analysis. You can also improve performance substantially by increasing the number of training epochs! Finally, we can save the predictions of the model on the test set back to the database. (This also works for other candidate sets, such as unlabeled candidates.)
###Code
lstm.load('weak_supervision_5_layers')
lstm.save_marginals(session, test_cands)
lstm_small = LSTM(n_threads=None)
labels = L_gold_dev.toarray()
labels[labels != 1] = 0
train_marginals.shape
lstm_small.train(dev_cands, labels.reshape(2811,), **train_kwargs)
lstm_small.save('small_dataset')
tp, fp, tn, fn = lstm_small.error_analysis(session, test_cands, L_gold_test)
###Output
========================================
Scores (Un-adjusted)
========================================
Pos. class accuracy: 0.165
Neg. class accuracy: 0.922
Precision 0.157
Recall 0.165
F1 0.161
----------------------------------------
TP: 36 | FP: 194 | TN: 2289 | FN: 182
========================================
###Markdown
More importantly, you completed the introduction to Snorkel! Give yourself a pat on the back!
###Code
y = L_gold_test.toarray()
q =L_gold_dev.toarray()
train_marginals[:10]
print(sum(1 for x in L_gold_test.toarray() if x == -1))
print(sum(1 for x in L_gold_test.toarray() if x == 0))
print(sum(1 for x in L_gold_test.toarray() if x == 1))
###Output
2397
86
218
###Markdown
Intro. to Snorkel: Extracting Spouse Relations from the News Part III: Training an End Extraction ModelIn this final section of the tutorial, we'll use the noisy training labels we generated in the last tutorial part to train our end extraction model. For this tutorial, we will be training a Bi-LSTM, a state-of-the-art deep neural network implemented in [TensorFlow](https://www.tensorflow.org/).
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE
# Note that this is necessary for parallel execution amongst other things...
# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'
from snorkel import SnorkelSession
session = SnorkelSession()
###Output
_____no_output_____
###Markdown
We repeat our definition of the `Spouse` `Candidate` subclass:
###Code
from snorkel.models import candidate_subclass
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
###Output
_____no_output_____
###Markdown
We reload the probabilistic training labels:
###Code
from snorkel.annotations import load_marginals
train_marginals = load_marginals(session, split=0)
###Output
_____no_output_____
###Markdown
We also reload the candidates:
###Code
train_cands = session.query(Spouse).filter(Spouse.split == 0).order_by(Spouse.id).all()
dev_cands = session.query(Spouse).filter(Spouse.split == 1).order_by(Spouse.id).all()
test_cands = session.query(Spouse).filter(Spouse.split == 2).order_by(Spouse.id).all()
###Output
_____no_output_____
###Markdown
Finally, we load gold labels for evaluation:
###Code
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
L_gold_test = load_gold_labels(session, annotator_name='gold', split=2)
###Output
_____no_output_____
###Markdown
Now we can set up our discriminative model. Here we specify the model and learning hyperparameters. They can also be set automatically using a search based on the dev set with a [GridSearch](https://github.com/HazyResearch/snorkel/blob/master/snorkel/learning/utils.py) object.
###Code
from snorkel.learning.disc_models.rnn import reRNN
train_kwargs = {
'lr': 0.01,
'dim': 50,
'n_epochs': 10,
'dropout': 0.25,
'print_freq': 1,
'max_sentence_length': 100
}
lstm = reRNN(seed=1701, n_threads=None)
lstm.train(train_cands, train_marginals, X_dev=dev_cands, Y_dev=L_gold_dev, **train_kwargs)
###Output
[reRNN] Training model
[reRNN] n_train=17211 #epochs=10 batch size=256
[reRNN] Epoch 0 (23.14s) Average loss=0.574778 Dev F1=24.47
[reRNN] Epoch 1 (50.09s) Average loss=0.540465 Dev F1=35.18
[reRNN] Epoch 2 (76.51s) Average loss=0.540333 Dev F1=19.47
[reRNN] Epoch 3 (98.49s) Average loss=0.540295 Dev F1=37.00
[reRNN] Epoch 4 (123.53s) Average loss=0.538588 Dev F1=34.94
[reRNN] Epoch 5 (148.63s) Average loss=0.537206 Dev F1=37.15
[reRNN] Epoch 6 (170.56s) Average loss=0.536529 Dev F1=40.92
[reRNN] Epoch 7 (193.84s) Average loss=0.536493 Dev F1=40.78
[reRNN] Epoch 8 (218.16s) Average loss=0.536371 Dev F1=41.22
[reRNN] Model saved as <reRNN>
[reRNN] Epoch 9 (244.28s) Average loss=0.536352 Dev F1=40.08
[reRNN] Training done (245.50s)
INFO:tensorflow:Restoring parameters from checkpoints/reRNN/reRNN-8
[reRNN] Loaded model <reRNN>
###Markdown
Now, we get the precision, recall, and F1 score from the discriminative model:
###Code
p, r, f1 = lstm.score(test_cands, L_gold_test)
print("Prec: {0:.3f}, Recall: {1:.3f}, F1 Score: {2:.3f}".format(p, r, f1))
###Output
Prec: 0.385, Recall: 0.601, F1 Score: 0.470
###Markdown
We can also get the candidates returned in sets (true positives, false positives, true negatives, false negatives) as well as a more detailed score report:
###Code
tp, fp, tn, fn = lstm.error_analysis(session, test_cands, L_gold_test)
###Output
========================================
Scores (Un-adjusted)
========================================
Pos. class accuracy: 0.601
Neg. class accuracy: 0.916
Precision 0.385
Recall 0.601
F1 0.47
----------------------------------------
TP: 131 | FP: 209 | TN: 2274 | FN: 87
========================================
###Markdown
Note that if this is the final test set that you will be reporting final numbers on, to avoid biasing results you should not inspect results. However, you can run the model on your _development set_ and, as we did in the previous part with the generative labeling function model, inspect examples to do error analysis. You can also improve performance substantially by increasing the number of training epochs! Finally, we can save the predictions of the model on the test set back to the database. (This also works for other candidate sets, such as unlabeled candidates.)
###Code
lstm.save_marginals(session, test_cands)
###Output
Saved 2701 marginals
###Markdown
Intro. to Snorkel: Extracting Spouse Relations from the News Part III: Creating or Loading Evaluation Labels
###Code
%load_ext autoreload
%autoreload 2
import os
# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE
# Note that this is necessary for parallel execution amongst other things...
# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'
from snorkel import SnorkelSession
session = SnorkelSession()
###Output
_____no_output_____
###Markdown
Part III(a): Creating Evaluation Labels in the `Viewer` We repeat our definition of the `Spouse` `Candidate` subclass from Part II.
###Code
from snorkel.models import candidate_subclass
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
dev_cands = session.query(Spouse).filter(Spouse.split == 1).all()
len(dev_cands)
test_cands = session.query(Spouse).filter(Spouse.split == 2).all()
len(test_cands)
###Output
_____no_output_____
###Markdown
Labeling by hand in the `Viewer`
###Code
from snorkel.viewer import SentenceNgramViewer
# NOTE: This if-then statement is only to avoid opening the viewer during automated testing of this notebook
# You should ignore this!
import os
if 'CI' not in os.environ:
sv = SentenceNgramViewer(dev_cands, session)
else:
sv = None
###Output
_____no_output_____
###Markdown
We now open the Viewer. You can mark each `Candidate` as true or false. Try it! These labels are automatically saved in the database backend, and can be accessed using the annotator's name as the AnnotationKey.
###Code
sv
###Output
_____no_output_____
###Markdown
Part III(b): Loading External Evaluation LabelsWe have already annotated the dev and test set for this tutorial, and now use it as an excuse to go through a basic procedure of loading in _externally annotated_ labels. Snorkel stores all labels that are manually annotated in a **stable** format (called `StableLabels`), which is somewhat independent from the rest of Snorkel's data model, does not get deleted when you delete the candidates, corpus, or any other objects, and can be recovered even if the rest of the data changes or is deleted. If we have external labels from another source, we can also load them in via the `stable_label` table:
###Code
import pandas as pd
from snorkel.models import StableLabel
gold_labels = pd.read_csv('data/gold_labels.tsv', sep="\t")
name = 'gold'
for index, row in gold_labels.iterrows():
# We check if the label already exists, in case this cell was already executed
context_stable_ids = "~~".join([row['person1'], row['person2']])
query = session.query(StableLabel).filter(StableLabel.context_stable_ids == context_stable_ids)
query = query.filter(StableLabel.annotator_name == name)
if query.count() == 0:
session.add(StableLabel(context_stable_ids=context_stable_ids, annotator_name=name, value=row['label']))
# Because it's a symmetric relation, load both directions...
context_stable_ids = "~~".join([row['person2'], row['person1']])
query = session.query(StableLabel).filter(StableLabel.context_stable_ids == context_stable_ids)
query = query.filter(StableLabel.annotator_name == name)
if query.count() == 0:
session.add(StableLabel(context_stable_ids=context_stable_ids, annotator_name=name, value=row['label']))
session.commit()
###Output
_____no_output_____
###Markdown
Then, we use a helper function to restore `Labels` from the `StableLabels` we just loaded. _Note that we "miss" a few due to parsing discrepancies with the original candidates labeled; specifically, you should be able to reload 220/223 on the dev set and 273/279 on the test set._
###Code
from snorkel.db_helpers import reload_annotator_labels
reload_annotator_labels(session, Spouse, 'gold', split=1, filter_label_split=False)
reload_annotator_labels(session, Spouse, 'gold', split=2, filter_label_split=False)
###Output
AnnotatorLabels created: 220
AnnotatorLabels created: 273
###Markdown
Intro. to Snorkel: Extracting Spouse Relations from the News Part III: Training an End Extraction ModelIn this final section of the tutorial, we'll use the noisy training labels we generated in the last tutorial part to train our end extraction model.For this tutorial, we will be training a Bi-LSTM, a state-of-the-art deep neural network implemented in [TensorFlow](https://www.tensorflow.org/).
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE
# Note that this is necessary for parallel execution amongst other things...
# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'
from snorkel import SnorkelSession
session = SnorkelSession()
###Output
_____no_output_____
###Markdown
We repeat our definition of the `Spouse` `Candidate` subclass:
###Code
from snorkel.models import candidate_subclass
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
###Output
_____no_output_____
###Markdown
We reload the probabilistic training labels:
###Code
from snorkel.annotations import load_marginals
train_marginals = load_marginals(session, split=0)
###Output
_____no_output_____
###Markdown
We also reload the candidates:
###Code
train_cands = session.query(Spouse).filter(Spouse.split == 0).order_by(Spouse.id).all()
dev_cands = session.query(Spouse).filter(Spouse.split == 1).order_by(Spouse.id).all()
test_cands = session.query(Spouse).filter(Spouse.split == 2).order_by(Spouse.id).all()
train_cands[0]
###Output
_____no_output_____
###Markdown
Finally, we load gold labels for evaluation:
###Code
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
L_gold_test = load_gold_labels(session, annotator_name='gold', split=2)
###Output
_____no_output_____
###Markdown
Now we can set up our discriminative model. Here we specify the model and learning hyperparameters. They can also be set automatically using a search based on the dev set with a [GridSearch](https://github.com/HazyResearch/snorkel/blob/master/snorkel/learning/utils.py) object.
###Code
from snorkel.learning.pytorch import LSTM
train_kwargs = {
'lr': 0.01,
'embedding_dim': 50,
'hidden_dim': 50,
'n_epochs': 10,
'dropout': 0.25,
'seed': 1701
}
lstm = LSTM(n_threads=None)
lstm.train(train_cands, train_marginals, X_dev=dev_cands, Y_dev=L_gold_dev, **train_kwargs)
###Output
[LSTM] Training model
[LSTM] n_train=17238 #epochs=10 batch size=64
###Markdown
Now, we get the precision, recall, and F1 score from the discriminative model:
###Code
p, r, f1 = lstm.score(test_cands, L_gold_test)
print("Prec: {0:.3f}, Recall: {1:.3f}, F1 Score: {2:.3f}".format(p, r, f1))
###Output
Prec: 0.197, Recall: 0.657, F1 Score: 0.303
###Markdown
We can also get the candidates returned in sets (true positives, false positives, true negatives, false negatives) as well as a more detailed score report:
###Code
tp, fp, tn, fn = lstm.error_analysis(session, test_cands, L_gold_test)
###Output
========================================
Scores (Un-adjusted)
========================================
Pos. class accuracy: 0.676
Neg. class accuracy: 0.883
Precision 0.213
Recall 0.676
F1 0.324
----------------------------------------
TP: 73 | FP: 270 | TN: 2046 | FN: 35
========================================
###Markdown
Note that if this is the final test set that you will be reporting final numbers on, to avoid biasing results you should not inspect results. However, you can run the model on your _development set_ and, as we did in the previous part with the generative labeling function model, inspect examples to do error analysis. You can also improve performance substantially by increasing the number of training epochs! Finally, we can save the predictions of the model on the test set back to the database. (This also works for other candidate sets, such as unlabeled candidates.)
###Code
lstm.save_marginals(session, test_cands)
###Output
Saved 2424 marginals
|
extra_notebooks/Lecture_2_early_version.ipynb | ###Markdown
 Data Science in Medicine using Python Author: Dr Gusztav Belteki 1. Let us see your homework Try to guess what will these expressions return before pressing `SHIFT-Enter`Some of these points were not discussed. You may need to look it up on the Internet.
###Code
# Multiplication and division are of equal priority so they will be done sequentially
# Note the decimal point. Division always returns a floating point number
10 * 10 / 10
# Division takes priority over addition - as in maths
(10 + 10) / 10
# This is integer division
15 // 2
# Modulo operator, returning the remainder after the division
15 % 4
###Output
_____no_output_____
###Markdown
Discuss what booleans return
###Code
True and True
True and False
True or False
True or False
False or False
# In boolean operations, numeric values are implicitly evaluated a boolean. All numbers except zero are 'True'
# During 'or' the first value is returned if it is True otherwise the second value
'' or 0
# In 'and' operations, the second value is returned if the first is true, otherwise the first value is returned.
1 and 0
# Zero is evaluated as False
0 and 2
0 or 2
# 'and' takes priority over 'or'. 'not' takes priority over both.
0 or 1 and 0
# Empty text evaluates as 'False' any other text as 'True'
# You can you single or double quotation marks but you cannot mix them
'' or 'hello'
# This is not an empty string but a single space character
' ' or hello
" " or 'hello'
###Output
_____no_output_____
###Markdown
Why are these different?
###Code
fn()
# 'int' returns the integer part = always rounding down
# for round you can provide the number of digits as an argument
int(16.864295), round(16.864295, )
###Output
_____no_output_____
###Markdown
What will be the output ?
###Code
'Midway upon the journey of our life I found myself within a forest dark, For the straightforward pathway had been lost.'
B = 'Midway upon the journey of our life I found myself within a forest dark, For the straightforward pathway had been lost.'
B
# Case sensitive
B.count('m')
B.lower()
# You can chain methods
B.lower().count('m')
# Splits a text into a list - introduce 'lists'
B.split(' ')
len(B)
len(B.split(' '))
###Output
_____no_output_____
###Markdown
Finally: what the heck is this
###Code
# Lists all attributes and methods of an object
dir(B)
B.title()
###Output
_____no_output_____
###Markdown
2. Type of medical data 1. Tabular data - Obtained from sensors of medical devices, monitors etc. - Typically retrieved as csv or other text format - Usually two-dimensional - Usually time series data  _____2. Image data - Obtained from imaging medical devices - Usually as raster images (jpg,png, tiff etc file format, different compression methods) - At least 3-dimensional, but frequently 4 or 5 dimensional  _____ 3. Medical free text - From electronic medical notes or from medical knowledge databases (e.g., PubMed) - Typically retrieved as text (.txt) files - Unstructured - Can be time series  3. How to read in tabular data
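As a minimal illustration of the image-data dimensionality mentioned above (the shape below is made up, not taken from a real scan):
```
import numpy as np
# hypothetical 4-dimensional image stack: (time points, slices, height, width)
scan = np.zeros((10, 30, 256, 256), dtype=np.uint16)
scan.ndim, scan.shape
```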
###Code
# Modules need to be imported first
import pandas as pd
###Output
_____no_output_____
###Markdown
On Mac / Linux
###Code
'hello world'.split()
len('hello world')
# Imported but not bound to a variable. We cannot use it later.
pd.read_csv("data/CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip")
###Output
_____no_output_____
###Markdown
- Compare this with other expressions On windows
###Code
# Imported but not bound to a variable. We cannot use it later.
pd.read_csv(r'\data\CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
###Output
_____no_output_____
###Markdown
Works on all systems
###Code
# Imported but not bound to a variable. We cannot use it later.
import os
os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
# Imported but not bound to a variable. We cannot use it later.
import os
pd.read_csv(os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip'))
a = 2
# Now it is there for later use
import os
data = pd.read_csv(os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip'))
data
###Output
_____no_output_____
###Markdown
- Compare this with other assigment statements
###Code
data = pd.read_csv(os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip'))
path = os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
data = pd.read_csv(path, nrows = 15)
data
data = pd.read_csv(os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip'), nrows=100000)
pd.read_csv?
a = 'Hello'
a.lower()
###Output
_____no_output_____
###Markdown
`HOMEWORK` : How to import using the absolute filepath
###Code
data.info()
pd.read_csv?
###Output
_____no_output_____
###Markdown
There is only one positional argument; all other arguments are keyword arguments with default values. Notice that lines can be broken inside parentheses.
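A small generic illustration of the same idea, unrelated to pandas itself:
```
# one positional argument, the rest are keyword arguments with default values
def read_table(path, sep=',', nrows=None):
    return path, sep, nrows

read_table('data.csv',
           sep=';',    # line breaks are allowed inside the parentheses
           nrows=10)
```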
###Code
pd.read_csv?
path = os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
data = pd.read_csv(path, # You can break the lines when there are parentheses
usecols = ['Date', 'Time', 'Rel.Time [s]','5001|MVe [L/min]', '5001|VTmand [mL]', '5001|PIP [mbar]', '5001|RRspon [1/min]'],
index_col = ['Rel.Time [s]'])
data
###Output
_____no_output_____
###Markdown
1. Only limit columns to the ones you really need 2. Only throw away rows (data points) you really need to drop
###Code
path = os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
columns_to_keep = ['Date', 'Time', 'Rel.Time [s]','5001|MVe [L/min]', '5001|VTmand [mL]', '5001|PIP [mbar]', '5001|RRspon [1/min]']
data = pd.read_csv(path, usecols = columns_to_keep, index_col = ['Rel.Time [s]'])
data
data.info()
path = os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
columns_to_keep = ['Date', 'Time', 'Rel.Time [s]','5001|MVe [L/min]', '5001|VTmand [mL]', '5001|PIP [mbar]', '5001|RRspon [1/min]']
data = pd.read_csv(path, usecols = columns_to_keep, index_col = ['Rel.Time [s]'],
parse_dates = [[0,1]])
data
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 689436 entries, 0 to 344676
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Date_Time 689436 non-null datetime64[ns]
1 5001|MVe [L/min] 344633 non-null float64
2 5001|VTmand [mL] 344567 non-null float64
3 5001|PIP [mbar] 344677 non-null float64
4 5001|RRspon [1/min] 256267 non-null float64
dtypes: datetime64[ns](1), float64(4)
memory usage: 31.6 MB
###Markdown
Repeating code is ugly and a source of errors
###Code
path = os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
columns_to_keep = ['Date', 'Time', 'Rel.Time [s]','5001|MVe [L/min]', '5001|VTmand [mL]', '5001|PIP [mbar]', '5001|RRspon [1/min]']
data = pd.read_csv(path, usecols = columns_to_keep, index_col = ['Rel.Time [s]'],
parse_dates =[[0,1]])
data = data.reset_index()
data = data.set_index(['Date_Time'])
data
data.info()
pd.read_csv?
###Output
_____no_output_____
###Markdown
4. Data structures in Python Store some data (from nothing to the whole universe). All data structures are `objects`, but not all objects are data structures. Evaluating them returns the data in some format:
###Code
a = 'Hello World'
a
b = 42
b
data
###Output
_____no_output_____
###Markdown
`print()` usually but not always results in a nicer format
###Code
print(a)
print(b)
print(data)
###Output
_____no_output_____
###Markdown
They have different types
###Code
type(a)
type(b)
type(data)
###Output
_____no_output_____
###Markdown
Built-in functions work differently on different data structures
###Code
# This is obvious
len(a)
# This will produce an error which is perhaps unexpected
len(b)
# This is by no means obvious
len(data)
###Output
_____no_output_____
###Markdown
They also have different methods associated with them Methods for text strings
###Code
dir(a)
a.upper()
# Counting and indexing in Python starts from zero
a.find('o')
a.index('o')
a.count('o')
a.startswith('H'), a.startswith('h')
a.isnumeric()
a.rjust(30)
###Output
_____no_output_____
###Markdown
Methods for integer numbers
###Code
dir(b)
b.bit_length()
b, b.__add__(2)
c = -42
c.__abs__()
abs(c)
###Output
_____no_output_____
###Markdown
Methods for complex data structures (pandas DataFrames)
###Code
dir(data)
len(dir(data))
data
data.mean()
data.isnull()
data.isnull().sum()
%%time
data.plot()
###Output
_____no_output_____
###Markdown
`HOMEWORK` : Exploratory data analysis 5. Reading in images
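A few possible starting points for the exploratory-data-analysis homework mentioned above, using the `data` frame loaded earlier (only a sketch, not the expected solution):
```
data.describe()                 # summary statistics per column
data.isnull().mean()            # fraction of missing values per column
data['5001|PIP [mbar]'].plot()  # time course of a single signal
```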
###Code
# importing matplotlib module
import matplotlib.image as mpimg
img = mpimg.imread('data/newborn_heart_image.jpg')
img.ndim
img.shape
369 * 636 * 3
img.flatten()
len(img.flatten())
print(set(img.flatten()))
img[:, :, 1]
img[:, :, 2]
img[1, :, :]
# Output Image
# importing matplotlib module
import matplotlib.pyplot as plt
plt.imshow(img)
###Output
_____no_output_____
###Markdown
6. Reading in text data
###Code
f_handle = open('data/karamazov_brothers.txt', 'r')
text = f_handle.read()
f_handle.close()
len(text)
text
text[:10000]
print(text[:10000])
print(text[5016:10000])
###Output
_____no_output_____
###Markdown
7. Homework Get tabular data, import it and play around with it Absolute and relative file paths
###Code
# Relative path:
pd.read_csv('data/data_new/CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
pd.read_csv(os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip'))
# Windows-style absolute path: backslashes must be escaped (or use a raw string)
'D:\\dddldd\\ddddd\\vfvff\\'
# My absolute path:
pd.read_csv('/Users/guszti/data_science_course/data/CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
pd.read_csv(os.path.join('/Users', 'guszti', 'data_science_course', 'data',
'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip'))
# Why is this working ?
pd.read_csv(os.path.join('.','data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip'))
# And why is this working ?
pd.read_csv(os.path.join('..', 'data_science_course', 'data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip'))
###Output
_____no_output_____
###Markdown
Slicing and dicing in Python
###Code
# List of strings
lst = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October',
'November', 'December']
lst
# Indexing is zero based
lst[0:4]
###Output
_____no_output_____
###Markdown
So - write the input to generate the output `['March', 'April', 'May']`
###Code
lst[]
###Output
_____no_output_____
###Markdown
`['May', 'June', 'July', 'August']`
###Code
lst[]
###Output
_____no_output_____
###Markdown
`['July', 'August', 'September', 'October', 'November', 'December']`
###Code
lst[]
###Output
_____no_output_____
###Markdown
`['January', 'March', 'May', 'July', 'September']`
###Code
lst[]
###Output
_____no_output_____
###Markdown
`['March', 'April', 'May']` `['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']`
###Code
lst[]
###Output
_____no_output_____
###Markdown
`['April', 'June', 'August', 'October']`
###Code
lst[]
###Output
_____no_output_____
###Markdown
`['December', 'November', 'October', 'September', 'August']`
###Code
lst[]
###Output
_____no_output_____
###Markdown
`['November', 'September', 'July', 'May']`
###Code
lst[]
###Output
_____no_output_____
###Markdown
`['December', 'November', 'October', 'September', 'August', 'July', 'June', 'May', 'April', 'March', 'February', 'January']`
###Code
lst[]
###Output
_____no_output_____ |
OpenHPC-v2/notebooks/061-Singularityのロード.ipynb | ###Markdown
Loading Singularity---Configure the OpenHPC environment you have built so that [Singularity](https://sylabs.io/singularity/) is loaded by default. PrerequisitesConfirm that the prerequisites for running this notebook are met. The following are assumed: * An OpenHPC environment has already been built * Each node of the OpenHPC environment is set up so that it can be operated with Ansible To check the values specified when the VC nodes were created, display the list of `group_vars` file names.
###Code
!ls -1 group_vars/*.yml | sed -e 's/^group_vars\///' -e 's/\.yml//' | sort
###Output
_____no_output_____
###Markdown
Confirm that each node can be operated with Ansible. Specify the name of the UnitGroup to operate on.
###Code
# (example)
# ugroup_name = 'OpenHPC'
ugroup_name =
###Output
_____no_output_____
###Markdown
Check connectivity to the nodes.
###Code
!ansible {ugroup_name} -m ping
###Output
_____no_output_____
###Markdown
Changing the configurationTo make Singularity (version 3.7.1) available by default, modify the configuration files in `/etc/profile.d/`. In the OpenHPC environment, [Singularity](https://sylabs.io/singularity/) is not available by default. Here we modify the [Lmod](https://lmod.readthedocs.io/en/latest/index.html) configuration files in `/etc/profile.d/` so that Singularity can be used.> OpenHPC uses Lmod to select which of the many development tools and libraries are actually used. First, run the Lmod command `module` to display the list of modules available in the OpenHPC environment.
###Code
!ansible {ugroup_name} -m shell -a 'module avail'
###Output
_____no_output_____
###Markdown
Modules marked with `(L)` in the list are already loaded by Lmod and ready to use. Let's display the list of loaded modules.
###Code
!ansible {ugroup_name} -m shell -a 'module list'
###Output
_____no_output_____
###Markdown
The modules that Lmod loads by default are configured in `lmod.sh` and `lmod.csh` under `/etc/profile.d/`. Rewrite these configuration files so that Singularity is also loaded.
###Code
# Files to rewrite
files = ['/etc/profile.d/lmod.sh', '/etc/profile.d/lmod.csh']
# Line to add
line = 'module try-add singularity'
for file in files:
!ansible {ugroup_name} -m lineinfile -b -a \
'path={file} backup=yes line="{line}" regexp="module\s+try-add\s+singularity"'
###Output
_____no_output_____
###Markdown
To confirm that the configuration was changed, check the loaded modules again.
###Code
!ansible {ugroup_name} -m shell -a 'module list'
###Output
_____no_output_____
###Markdown
If `singularity/3.7.1` has been added to the loaded modules as shown below, the configuration change took effect.```Currently Loaded Modules: 1) autotools 2) prun/2.1 3) gnu9/9.3.0 4) ucx/1.9.0 5) libfabric/1.11.2 6) openmpi4/4.0.5 7) ohpc 8) singularity/3.7.1``` Confirm that Singularity is loaded.> If it is not loaded, running the next cell will result in an error.
###Code
!ansible {ugroup_name} -m shell -a 'module is-loaded singularity'
###Output
_____no_output_____
###Markdown
Confirm that the `singularity` command can be executed.
###Code
!ansible {ugroup_name} -m shell -a 'singularity version'
###Output
_____no_output_____ |
supp_ntbks_arxiv.2111.11761/tg_quant_sample.ipynb | ###Markdown
Cosmological constraints on quantum fluctuations in modified teleparallel gravity The Friedmann equations modified by quantum fluctuations can be written as\begin{equation}3 H^2=\cdots ,\end{equation}and\begin{equation}2 \dot{H}+3 H^2=\cdots ,\end{equation}whereas the modified Klein-Gordon equation can be written in the form\begin{equation}\dot{\rho} + 3 H \left( \rho + P \right) = \cdots\end{equation}where $H$ is the Hubble function, and $(\rho, P)$ are the fluid energy density and pressure. Dots over a variable denote differentiation with respect to the cosmic time $t$. The ellipses on the right hand sides represent the quantum corrections. See [arXiv:2108.04853](https://arxiv.org/abs/2108.04853) and [arXiv:2111.11761](https://arxiv.org/abs/2111.11761) for full details. This Jupyter notebook is devoted to constraining the quantum corrections using late-time compiled data sets from cosmic chronometers (CC), supernovae (SNe), and baryon acoustic oscillations (BAO). In other words, we shall numerically integrate the dynamical system and perform a Bayesian analysis to determine the best-fit theory parameters. We divide the discussion into three sections: (1) observation, (2) theory, and (3) data analysis.*References to the data and python packages can be found at the end of the notebook.*
###Code
import numpy as np
from scipy.integrate import solve_ivp, simps
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes # for the insets
from scipy.constants import c
from cobaya.run import run
from getdist.mcsamples import loadMCSamples
from gdplotter_rcb import plot_triangle, plot_1d
import os # requires *full path*
# for imposing likelihood time limit; otherwise, mcmc gets stuck
from multiprocessing import Process
###Output
_____no_output_____
###Markdown
1. Observation We import the cosmological data to be used for constraining the theory. We start with the CC + BAO data, which provide measurements of the Hubble function at various redshifts.
###Code
cc_data = np.loadtxt('Hdz_2020.txt')
z_cc = cc_data[:, 0]
Hz_cc = cc_data[:, 1]
sigHz_cc = cc_data[:, 2]
fig, ax = plt.subplots()
ax.errorbar(z_cc, Hz_cc, yerr = sigHz_cc,
fmt = 'rx', ecolor = 'k',
markersize = 7, capsize = 3)
ax.set_xlabel('$z$')
ax.set_ylabel('$H(z)$')
plt.show()
###Output
_____no_output_____
###Markdown
We also consider the 1048 supernovae observations in the form of the Pantheon compilation.
###Code
# load pantheon compressed m(z) data
loc_lcparam = 'https://raw.githubusercontent.com/dscolnic/Pantheon/master/Binned_data/lcparam_DS17f.txt'
loc_lcparam_sys = 'https://raw.githubusercontent.com/dscolnic/Pantheon/master/Binned_data/sys_DS17f.txt'
#loc_lcparam = 'https://raw.githubusercontent.com/dscolnic/Pantheon/master/lcparam_full_long_zhel.txt'
#loc_lcparam_sys = 'https://raw.githubusercontent.com/dscolnic/Pantheon/master/sys_full_long.txt'
lcparam = np.loadtxt(loc_lcparam, usecols = (1, 4, 5))
lcparam_sys = np.loadtxt(loc_lcparam_sys, skiprows = 1)
# setup pantheon samples
z_ps = lcparam[:, 0]
logz_ps = np.log(z_ps)
mz_ps = lcparam[:, 1]
sigmz_ps = lcparam[:, 2]
# pantheon samples systematics
covmz_ps_sys = lcparam_sys.reshape(40, 40)
#covmz_ps_sys = lcparam_sys.reshape(1048, 1048)
covmz_ps_tot = covmz_ps_sys + np.diag(sigmz_ps**2)
# plot data set
plt.errorbar(logz_ps, mz_ps, yerr = np.sqrt(np.diag(covmz_ps_tot)),
fmt = 'bx', markersize = 7, ecolor = 'k', capsize = 3)
plt.xlabel('$\ln(z)$')
plt.ylabel('$m(z)$')
plt.show()
###Output
_____no_output_____
###Markdown
The compiled CC, SNe, and BAO data sets above will be used to constrain the quantum corrections arising as teleparallel gravity terms in the Friedmann equations. 2. Theory We setup the Hubble function $H(z)$ by numerically integrating the field equations. This is in preparation for analysis later on where this observable as well as the supernovae apparent magnitudes are compared with the data.We start by coding the differential equation (in the form $y'(z) = f[y(z),z]$) and the density parameters and other relevant quantities in the next line.
###Code
def F(z, y, om0, eps):
'''returns the differential equation y' = f(y, z) for input to odeint
input: y = H(z)/H0
z = redshift
om0 = matter fraction at z = 0
eps = LambdaCDM deviation'''
lmd = 1 - om0 + eps
q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0))
num = 3*(-lmd + (1 - 24*q*lmd)*(y**2) + 36*q*(y**4)) \
*(1 + 18*q*(y**2)*(-1 + 4*q*(y**2)))
den = 2*(1 + z)*y*(1 - 18*q*lmd \
+ 6*q*(y**2)*(7 + 126*q*lmd \
+ 24*q*(y**2)*(-13 - 12*q*lmd + 45*q*(y**2))))
return num/den
def ol0(om0, eps):
'''returns the density parameter of lambda'''
return 1 - om0 + eps
def q_param(om0, eps):
'''returns the dimensionless quantum correction parameter'''
lmd = 1 - om0 + eps
q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0))
return q
def oq0(om0, eps):
'''returns the density parameter of the quantum corrections'''
lmd = 1 - om0 + eps
q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0))
return q*(24*lmd - 6*(6 + lmd))
###Output
_____no_output_____
###Markdown
Now that the equations are all set, we can proceed with the numerical integration. We test this out in the next line.
###Code
# late-time redshifts
z_min = 0
z_max = 2.5
n_div = 12500
z_late = np.linspace(z_min, z_max, n_div)
def nsol(om0, eps):
'''numerically integrates the master ode
returns: y(z) = H(z)/H0: rescaled Hubble function'''
nsol = solve_ivp(F, t_span = (z_min, z_max), y0 = [1], t_eval = z_late, args = (om0, eps))
return nsol
# pilot/test run, shown with the CC data
test_run = nsol(om0 = 0.3, eps = 0.01)
fig, ax = plt.subplots()
ax.errorbar(z_cc, Hz_cc, yerr = sigHz_cc,
fmt = 'kx', ecolor = 'k',
markersize = 7, capsize = 3)
ax.plot(test_run.t, 70*test_run.y[0], 'r-')
ax.set_xlim(z_min, z_max)
ax.set_xlabel('$z$')
ax.set_ylabel('$H(z)$')
plt.show()
###Output
_____no_output_____
###Markdown
We also set up the integral to obtain the SNe apparent magnitude. We assume a spatially-flat scenario in which the luminosity distance is given by\begin{equation}d_L \left( z \right) = \dfrac{c}{H_0} \left( 1 + z \right) \int_0^z \dfrac{dz'}{H\left(z'\right) /H_0} .\end{equation}*Note*: $H_0$ will be written as $h \times 100$ (km/s/Mpc) $= h \times 10^{-1}$ (m/s/pc). The factor $c/H_0$ will then be written as $c / \left(h \times 10^{-1}\right)$ parsecs, where $c$ is the speed of light in vacuum in m/s (the scipy.constants value).
###Code
def dl(om0, eps, z_rec):
'''returns the luminosity distance
input: z_rec = redshifts at prediction'''
E_sol = nsol(om0, eps).y[0]
E_inv = 1/E_sol
dL = []
for z_i in z_rec:
diff_list = list(abs(z_i - z_late))
idx = diff_list.index(min(diff_list))
dL.append((1 + z_i)*simps(E_inv[:idx + 1], z_late[:idx + 1]))
return np.array(dL)
def dm(H0, om0, eps, z_rec):
'''returns the distance modulus m - M
input: z_rec = redshifts at prediction'''
h = H0/100
return 5*np.log10((c/h)*dl(om0, eps, z_rec))
def m0(H0, om0, eps, M, z_rec):
'''returns the apparent magnitude m
input: z_rec = redshifts at prediction'''
return dm(H0, om0, eps, z_rec) + M
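# Quick sanity check of the unit bookkeeping described above (illustrative only;
# h_test = 0.7 is an arbitrary value): with H0 = h * 1e-1 (m/s)/pc, the Hubble
# distance c/H0 should come out near 4.3 Gpc.
from scipy.constants import c
h_test = 0.7
print(c/(h_test*1e-1)/1e9, 'Gpc')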
###Output
_____no_output_____
###Markdown
We can test out a prediction with the Pantheon data set. Here is an illustration for the same parameters used in the CC prediction earlier.
###Code
test_run = m0(H0 = 70, om0 = 0.3, eps = 0.01,
M = -19.3, z_rec = z_late[1:])
fig, ax = plt.subplots()
ax.plot(np.log(z_late[1:]), test_run, 'k-')
ax.errorbar(logz_ps, mz_ps, yerr = np.sqrt(np.diag(covmz_ps_tot)),
fmt = 'bo', markersize = 2, ecolor = 'k', capsize = 3)
ax.set_xlim(min(logz_ps) - 1, max(logz_ps))
ax.set_xlabel('$\ln(z)$')
ax.set_ylabel('$m(z)$')
plt.show()
###Output
_____no_output_____
###Markdown
With predictions of $H(z)$ and $m(z)$, we're now ready to study the data with the model. 3. Data analysis We set up the individual and joint log-likelihoods for the CC, SNe, and BAO data sets.
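Explicitly, the Gaussian log-likelihoods coded below are\begin{equation}\ln \mathcal{L}_{\text{CC+BAO}} = - \frac{1}{2} \sum_i \left[ \frac{H \left( z_i \right) - H_{\text{obs}} \left( z_i \right)}{\sigma_{H,i}} \right]^2 , \qquad \ln \mathcal{L}_{\text{SNe}} = - \frac{1}{2} \Delta m^{T} C^{-1} \Delta m ,\end{equation}where $\Delta m$ is the residual between the predicted and observed apparent magnitudes and $C$ is the total (statistical plus systematic) Pantheon covariance matrix. The joint log-likelihood is simply the sum of the two.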
###Code
def loglike_cc_bao(H0, om0, eps):
'''returns the log-likelihood for the CC data'''
if (om0 < 0) or (np.abs(oq0(om0, eps)) > 0.1):
return -np.inf
else:
H_sol = H0*nsol(om0, eps).y[0]
H_sol_cc = []
for z_i in z_cc:
diff_list = list(abs(z_i - z_late))
idx = diff_list.index(min(diff_list))
H_sol_cc.append(H_sol[idx])
H_sol_cc = np.array(H_sol_cc)
Delta_H = H_sol_cc - Hz_cc
ll_cc = -0.5*np.sum((Delta_H/sigHz_cc)**2)
if np.isnan(ll_cc) == True:
return -np.inf
else:
return ll_cc
C_inv = np.linalg.inv(covmz_ps_tot)
def loglike_sn(H0, om0, eps, M):
'''returns the log-likelihood for the SN data'''
if (om0 < 0) or (np.abs(oq0(om0, eps)) > 0.1):
return -np.inf
else:
m_sol_ps = m0(H0, om0, eps, M, z_ps)
Delta_m = m_sol_ps - mz_ps
ll_sn = -0.5*(Delta_m.T @ C_inv @ Delta_m)
if np.isnan(ll_sn) == True:
return -np.inf
else:
return ll_sn
def loglike_cc_bao_sn(H0, om0, eps, M):
'''returns the total CC + BAO + SNe likelihood for a theory prediction'''
return loglike_cc_bao(H0, om0, eps) + loglike_sn(H0, om0, eps, M)
###Output
_____no_output_____
###Markdown
Now, we must impose a time limit on the evaluation of the likelihood. Otherwise, the MCMC may not converge, particularly when using MPI, as some of the chains can get stuck in certain isolated regions of the parameter space.
###Code
# impose timeout, to avoid evaluations/chains getting stuck somewhere
def Loglike_cc_bao(H0, om0, eps):
'''same loglike but with timelimit of 10 secs per eval'''
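# Note: the subprocess call below only checks that the evaluation finishes within
# the 10-second limit; if it does, the likelihood is evaluated again in the parent
# process, so each accepted call costs roughly two evaluations.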
p = Process(target = loglike_cc_bao, args = (H0, om0, eps,))
p.start()
p.join(10)
if p.is_alive():
p.terminate()
p.join()
return -np.inf
else:
return loglike_cc_bao(H0, om0, eps)
def Loglike_cc_bao_sn(H0, om0, eps, M):
'''same loglike but with timelimit of 10 secs per eval'''
p = Process(target = loglike_cc_bao_sn, args = (H0, om0, eps, M,))
p.start()
p.join(10)
if p.is_alive():
p.terminate()
p.join()
return -np.inf
else:
return loglike_cc_bao_sn(H0, om0, eps, M)
###Output
_____no_output_____
###Markdown
The input to ``cobaya`` is preferably prepared as a ``.yaml`` file to run on a cluster. See the ones in the directory. This comprises the likelihood and the priors to be used for the sampling. Nonetheless, if one insists, the input can also be prepared as a python dictionary. We show an example below.
###Code
# SNe Mag prior, SH0ES taken from lit., cepheids calibrated
M_priors = {'SH0ES': {'ave': -19.22, 'std': 0.04}}
M_prior = M_priors['SH0ES']
# likelihood
#info = {"likelihood": {"loglike": Loglike_cc_bao}}
info = {"likelihood": {"loglike": Loglike_cc_bao_sn}}
# parameters to perform mcmc
info["params"] = {"H0": {"prior": {"min": 50, "max": 80},
"ref": {"min": 68, "max": 72},
"proposal": 0.05, "latex": r"H_0"},
"om0": {"prior": {"min": 0, "max": 1},
"ref": {"min": 0.25, "max": 0.35},
"proposal": 1e-3, "latex": r"\Omega_{m0}"},
"eps": {"prior": {"min": -1e-1, "max": 1e-1},
"ref": {"min": -1e-2, "max": 1e-2},
"proposal": 1e-3, "latex": r"\epsilon"}}
# uncomment info["params"]["M"] if SNe data is considered
info["params"]["M"] = {"prior": {"dist": "norm",
"loc": M_prior['ave'],
"scale": M_prior['std']},
"ref": M_prior['ave'],
"proposal": M_prior['std']/4, "latex": r"M"}
info["params"]["q"] = {"derived": q_param, "latex": r"q"}
info["params"]["ol0"] = {"derived": ol0, "latex": r"\Omega_{\Lambda}"}
info["params"]["oq0"] = {"derived": oq0, "latex": r"\Omega_{q0}"}
# mcmc, Rminus1_stop dictates covergence
info["sampler"] = {"mcmc":{"Rminus1_stop": 0.01, "max_tries": 1000}}
# output, uncomment to save output in the folder chains
#info["output"] = "chains_nonminmat_Hdz_Pantheon/tg_quantum_M_SH0ES_cc_bao"
info["output"] = "chains_nonminmat_Hdz_Pantheon/tg_quantum_M_SH0ES_cc_bao_sn"
# uncomment to overwrite existing files, be careful
#info["force"] = True
###Output
_____no_output_____
###Markdown
The sampling can now be performed. We suggest running this on a cluster using the command ``cobaya-run``, e.g., with $N$ processes: ``mpirun -n N cobaya-run -f __.yaml``. See also the sample yaml file in the same directory as this Jupyter notebook. In a python interpreter, the MCMC can be performed using the function ``run``, as shown below.
###Code
# uncomment next two lines if input is yaml file
#from cobaya.yaml import yaml_load_file
#info = yaml_load_file("tg_quantum_mcmc_Hdz_Pantheon_cc_bao_sn.yaml")
updated_info, sampler = run(info)
###Output
_____no_output_____
###Markdown
The results of the sampling can be viewed at any time once they are saved. We prepare the plots by defining generic plotting functions using ``getdist`` in ``gdplotter_rcb.py``. The posteriors for the density parameters given the (1) CC + BAO and (2) CC + BAO + SNe data sets are shown below.
###Code
# specify file location(s)
folder_filename_0 = "chains_nonminmat_Hdz_Pantheon/tg_quantum_cc_bao"
folder_filename_1 = "chains_nonminmat_Hdz_Pantheon/tg_quantum_M_SH0ES_cc_bao_sn"
# loading results from folder_filename
gdsamples_0 = loadMCSamples(os.path.abspath(folder_filename_0))
gdsamples_1 = loadMCSamples(os.path.abspath(folder_filename_1))
plot_triangle([gdsamples_0, gdsamples_1],
["H0", "om0", "oq0"],
['red', 'blue'],
['-', '--'],
[r"CC + BAO",
r"CC + BAO + SNe"],
thickness = 3, font_size = 15, title_fs = 15,
parlims = {'oq0': (-0.1, 0.1)}, lgd_font_size = 15)
###Output
_____no_output_____
###Markdown
This shows a slight preference for quantum corrections ($\Omega_{q0} < 0$). We shall look at the statistical significance of this later. Here is the corresponding plot for the other parameters.
###Code
plot_triangle([gdsamples_0, gdsamples_1],
["H0", "ol0", "eps"],
['red', 'blue'],
['-', '--'],
[r"CC + BAO",
r"CC + BAO + SNe"],
thickness = 3, font_size = 15, title_fs = 15,
parlims = {'eps': (-0.07, 0.07)}, lgd_font_size = 15)
plot_1d([gdsamples_1], ["M"], clrs = ['blue'], thickness = 3,
lsty = ['--'], font_size = 15, width_inch = 3.5, figs_per_row = 1)
###Output
WARNING:root:fine_bins not large enough to well sample smoothing scale - chi2
WARNING:root:fine_bins not large enough to well sample smoothing scale - chi2__loglike
###Markdown
It is also useful to look at the posteriors with the corresponding $\Lambda$CDM model ($\varepsilon = 0$).
###Code
# specify file location(s)
folder_filename_2 = "chains_lcdm_Hdz_Pantheon/lcdm_cc_bao"
folder_filename_3 = "chains_lcdm_Hdz_Pantheon/lcdm_M_SH0ES_cc_bao_sn"
# loading results from folder_filename
gdsamples_2 = loadMCSamples(os.path.abspath(folder_filename_2))
gdsamples_3 = loadMCSamples(os.path.abspath(folder_filename_3))
plot_triangle([gdsamples_0, gdsamples_2, gdsamples_1, gdsamples_3],
["H0", "om0"],
['red', 'green', 'blue', 'black'],
['-', '-.', '--', ':'],
[r"TG/quant: CC + BAO",
r"$\Lambda$CDM: CC + BAO",
r"TG/quant: CC + BAO + SNe",
r"$\Lambda$CDM: CC + BAO + SNe"],
thickness = 3, font_size = 15, title_fs = 15,
width_inch = 7, lgd_font_size = 12)
plot_1d([gdsamples_1, gdsamples_3], ["M"],
lbls = [r"TG/quant: CC + BAO + SNe",
r"$\Lambda$CDM: CC + BAO + SNe"],
clrs = ['blue', 'black'],
lsty = ['--', ':'], thickness = 3,
font_size = 15, lgd_font_size = 12,
width_inch = 3.5, figs_per_row = 1)
plot_1d([gdsamples_0, gdsamples_1], ["oq0", "q"],
lbls = [r"TG/quant: CC + BAO",
r"TG/quant: CC + BAO + SNe"],
clrs = ['red', 'blue'],
lsty = ['-', '--'], thickness = 3,
font_size = 15, lgd_font_size = 12,
width_inch = 7, figs_per_row = 2)
###Output
_____no_output_____
###Markdown
We can obtain the best estimates (marginalized statistics) of the constrained parameters $H_0$, $\Omega_{m0}$, $\Omega_\Lambda$, $\Omega_{q0}$, $\varepsilon$, and $M$ (SN absolute magnitude).
###Code
# uncomment next 3 lines to get more info on gdsamples_X
#print(gdsamples_x.getGelmanRubin())
#print(gdsamples_x.getConvergeTests())
#print(gdsamples_x.getLikeStats())
def get_bes(gdx, params_list):
'''get summary statistics for params_list and gdx,
params_list = list of parameter strings, e.g., ["H0", "om0"]
gdx = cobaya/getdist samples, e.g., gdsamples_1'''
stats = gdx.getMargeStats()
for p in params_list:
p_ave = stats.parWithName(p).mean
p_std = stats.parWithName(p).err
print()
print(p, '=', p_ave, '+/-', p_std)
def get_loglike_cc_bao(gdx):
'''returns the loglikelihood at the mean of the best fit'''
stats = gdx.getMargeStats()
return Loglike_cc_bao(stats.parWithName("H0").mean,
stats.parWithName("om0").mean,
stats.parWithName("eps").mean)
def get_loglike_cc_bao_sn(gdx):
'''returns the loglikelihood at the mean of the best fit'''
stats = gdx.getMargeStats()
return Loglike_cc_bao_sn(stats.parWithName("H0").mean,
stats.parWithName("om0").mean,
stats.parWithName("eps").mean,
stats.parWithName("M").mean)
print('CC + BAO : loglike = ', get_loglike_cc_bao(gdsamples_0))
get_bes(gdsamples_0, ["H0", "om0", "ol0", "oq0", "eps", "q"])
print()
print('CC + SNe + BAO : loglike = ', get_loglike_cc_bao_sn(gdsamples_1))
get_bes(gdsamples_1, ["H0", "om0", "ol0", "oq0", "eps", "q", "M"])
###Output
CC + BAO : loglike = -14.390450724779976
H0 = 67.8004534543283 +/- 1.4770558775736187
om0 = 0.34080578948060447 +/- 0.03776932283936424
ol0 = 0.6852167726443292 +/- 0.031097649142164077
oq0 = -0.028386252082125873 +/- 0.01623081432927593
eps = 0.026022562079968937 +/- 0.014889691748570664
q = 0.0011964420976620595 +/- 0.0006889664003658708
CC + SNe + BAO : loglike = -36.76284425721016
H0 = 70.05454742388778 +/- 0.8527931592535787
om0 = 0.2991032108061891 +/- 0.025414395633843102
ol0 = 0.7235945176419945 +/- 0.016742957793318802
oq0 = -0.025412627377857364 +/- 0.020899602813229563
eps = 0.02269772841245066 +/- 0.01862145927771875
q = 0.001106075763625953 +/- 0.0009143510073116634
M = -19.354843877575284 +/- 0.020811981993327823
###Markdown
We end the notebook by comparing the best fit results with $\Lambda$CDM. We also print out the $\chi^2$ statistics for the CC + BAO + SNe results.
###Code
# generic plotting function
def plot_best_fit_Hdz(gdxs, lbls, lsts, gdxs_lcdm, lbls_lcdm, lsts_lcdm,
save = False, fname = None, folder = None, fig_format = 'pdf'):
'''plots the best fit H(z) results compared with LambdaCDM'''
# cosmic chronometers
fig, ax = plt.subplots()
ix = inset_axes(ax, width = '45%', height = '30%', loc = 'upper left')
ax.errorbar(z_cc, Hz_cc, yerr = sigHz_cc, fmt = 'rx',
ecolor = 'k', markersize = 7, capsize = 3, zorder = 0)
ix.errorbar(z_cc, Hz_cc, yerr = sigHz_cc, fmt = 'rx',
ecolor = 'k', markersize = 7, capsize = 3, zorder = 0)
for i in np.arange(0, len(gdxs)):
stats = gdxs[i].getMargeStats()
H0 = stats.parWithName("H0").mean
om0 = stats.parWithName("om0").mean
eps = stats.parWithName("eps").mean
Hz = H0*nsol(om0 = om0, eps = eps).y[0]
ax.plot(z_late, Hz, lsts[i], label = lbls[i])
ix.plot(z_late, Hz, lsts[i])
for i in np.arange(0, len(gdxs_lcdm)):
stats = gdxs_lcdm[i].getMargeStats()
H0 = stats.parWithName("H0").mean
om0 = stats.parWithName("om0").mean
Hz = H0*nsol(om0 = om0, eps = 0).y[0]
ax.plot(z_late, Hz, lsts_lcdm[i], label = lbls_lcdm[i])
ix.plot(z_late, Hz, lsts_lcdm[i])
ax.set_xlim(z_min, z_max)
ax.set_xlabel('$z$')
ax.set_ylabel('$H(z)$')
ax.legend(loc = 'lower right', prop = {'size': 9.5})
ix.set_xlim(0, 0.2)
ix.set_ylim(66, 74)
ix.set_xticks([0.05, 0.1])
ix.yaxis.tick_right()
ix.set_yticks([68, 70, 72])
ix.xaxis.set_tick_params(labelsize = 10)
ix.yaxis.set_tick_params(labelsize = 10)
if save == True:
fig.savefig(folder + '/' + fname + '.' + fig_format)
def plot_best_fit_sne(gdxs, lbls, lsts, \
gdxs_lcdm, lbls_lcdm, lsts_lcdm,
save = False, fname = None, folder = None, fig_format = 'pdf'):
'''plots the best fit SNe results compared with LambdaCDM'''
# setup full pantheon samples
lcparam_full = np.loadtxt('../../datasets/pantheon/lcparam_full_long_zhel.txt',
usecols = (1, 4, 5))
lcparam_sys_full = np.loadtxt('../../datasets/pantheon/sys_full_long.txt',
skiprows = 1)
z_ps = lcparam_full[:, 0]
mz_ps = lcparam_full[:, 1]
sigmz_ps = lcparam_full[:, 2]
covmz_ps_sys = lcparam_sys_full.reshape(1048, 1048)
covmz_ps_tot = covmz_ps_sys + np.diag(sigmz_ps**2)
# supernovae
z_sne = np.logspace(-3, np.log10(2.5), 100)
fig, ax = plt.subplots()
ax.errorbar(z_ps, mz_ps, yerr = np.sqrt(np.diag(covmz_ps_tot)),
fmt = 'rx', markersize = 3, ecolor = 'k', capsize = 3, zorder = 0)
for i in np.arange(0, len(gdxs)):
stats = gdxs[i].getMargeStats()
H0 = stats.parWithName("H0").mean
om0 = stats.parWithName("om0").mean
eps = stats.parWithName("eps").mean
M = stats.parWithName("M").mean
mz = m0(H0 = H0, om0 = om0, eps = eps, M = M, z_rec = z_sne)
ax.plot(z_sne, mz, lsts[i], label = lbls[i])
for i in np.arange(0, len(gdxs_lcdm)):
stats = gdxs_lcdm[i].getMargeStats()
H0 = stats.parWithName("H0").mean
om0 = stats.parWithName("om0").mean
M = stats.parWithName("M").mean
mz = m0(H0 = H0, om0 = om0, eps = 0, M = M, z_rec = z_sne)
ax.plot(z_sne, mz, lsts_lcdm[i], label = lbls_lcdm[i])
ax.set_xlim(0, 2.5)
ax.set_ylim(11.5, 27.5)
ax.set_xlabel('$z$')
ax.set_ylabel('$m(z)$')
ax.legend(loc = 'lower right', prop = {'size': 9.5})
if save == True:
fig.savefig(folder + '/' + fname + '.' + fig_format)
plot_best_fit_Hdz([gdsamples_0, gdsamples_1],
['TG/quant: CC + BAO', 'TG/quant: CC + BAO + SNe'],
['r-', 'b--'],
[gdsamples_2, gdsamples_3],
[r'$\Lambda$CDM: CC + BAO', r'$\Lambda$CDM: CC + BAO + SNe'],
['g-.', 'k:'])
plot_best_fit_sne([gdsamples_1],
['TG/quant: CC + BAO + SNe'],
['b--'],
[gdsamples_3],
[r'$\Lambda$CDM: CC + BAO + SNe'],
['k:'])
###Output
WARNING:root:auto bandwidth for chi2 very small or failed (h=0.00033607191733809753,N_eff=5942.201957107961). Using fallback (h=0.0004234425331251951)
WARNING:root:fine_bins not large enough to well sample smoothing scale - chi2
WARNING:root:fine_bins not large enough to well sample smoothing scale - chi2__loglike
###Markdown
To objectively assess whether the results are significant, we calculate three statistical measures: the $\chi^2$, the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). We can easily compute the chi-squared from the loglikelihood as $\chi^2 = -2 \log \mathcal{L}$. Doing so leads to $\Delta \chi^2 = \chi^2_{\Lambda \text{CDM}} - \chi^2_{\text{TG}}$:
###Code
def get_bfloglike_cc_bao(gdx):
'''returns the best fit loglikelihood using like stats'''
stats = gdx.getLikeStats()
return Loglike_cc_bao(stats.parWithName("H0").bestfit_sample,
stats.parWithName("om0").bestfit_sample,
stats.parWithName("eps").bestfit_sample)
def get_bfloglike_cc_bao_sn(gdx):
'''returns the best fit loglikelihood using like stats'''
stats = gdx.getLikeStats()
return Loglike_cc_bao_sn(stats.parWithName("H0").bestfit_sample,
stats.parWithName("om0").bestfit_sample,
stats.parWithName("eps").bestfit_sample,
stats.parWithName("M").bestfit_sample)
# LambdaCDM CC + BAO like-stats
stats_lcdm_cc_bao = gdsamples_2.getLikeStats()
H0_lcdm_cc_bao = stats_lcdm_cc_bao.parWithName("H0").bestfit_sample
om0_lcdm_cc_bao = stats_lcdm_cc_bao.parWithName("om0").bestfit_sample
loglike_lcdm_cc_bao = Loglike_cc_bao(H0_lcdm_cc_bao, om0_lcdm_cc_bao, eps = 0)
# LambdaCDM CC + BAO + SNe like-stats
stats_lcdm_cc_bao_sn = gdsamples_3.getLikeStats()
H0_lcdm_cc_bao_sn = stats_lcdm_cc_bao_sn.parWithName("H0").bestfit_sample
om0_lcdm_cc_bao_sn = stats_lcdm_cc_bao_sn.parWithName("om0").bestfit_sample
M_lcdm_cc_bao_sn = stats_lcdm_cc_bao_sn.parWithName("M").bestfit_sample
loglike_lcdm_cc_bao_sn = Loglike_cc_bao_sn(H0_lcdm_cc_bao_sn, om0_lcdm_cc_bao_sn, \
eps = 0, M = M_lcdm_cc_bao_sn)
print('CC + BAO results')
print('LambdaCDM : chi-squared = ', -2*loglike_lcdm_cc_bao)
print('TG/quant : chi-squared = ', -2*get_bfloglike_cc_bao(gdsamples_0))
print('Delta chi-squared = ', \
-2*(loglike_lcdm_cc_bao - get_bfloglike_cc_bao(gdsamples_0)))
print()
print('CC + BAO + SNe results')
print('LambdaCDM : chi-squared = ', -2*loglike_lcdm_cc_bao_sn)
print('TG/quant : chi-squared = ', -2*get_bfloglike_cc_bao_sn(gdsamples_1))
print('Delta chi-squared = ', \
-2*(loglike_lcdm_cc_bao_sn - get_bfloglike_cc_bao_sn(gdsamples_1)))
###Output
CC + BAO results
LambdaCDM : chi-squared = 32.075673520122784
TG/quant : chi-squared = 28.618737106195127
Delta chi-squared = 3.4569364139276573
CC + BAO + SNe results
LambdaCDM : chi-squared = 75.24897103548676
TG/quant : chi-squared = 68.92102582823969
Delta chi-squared = 6.327945207247069
###Markdown
This shows that in both cases $\Delta \chi^2 > 0$, which corresponds to a (very) slight preference for the inclusion of the quantum corrections. Moving on, the AIC can be computed using\begin{equation}\text{AIC} = 2 k - 2 \log(\mathcal{L})\end{equation}where $\log(\mathcal{L})$ is the log-likelihood and $k$ is the number of parameters estimated by the model. The results for the AIC are printed in the next line with $\Delta \text{AIC} = \text{AIC}_{\Lambda\text{CDM}} - \text{AIC}_{\text{TG}}$.
###Code
print('CC + BAO results')
aic_lcdm_cc_bao = 2*2 - 2*loglike_lcdm_cc_bao # estimated H0, om0
aic_tg_quantum_cc_bao = 2*3 - 2*get_bfloglike_cc_bao(gdsamples_0) # estimated H0, om0, eps
print('LambdaCDM : AIC = ', aic_lcdm_cc_bao)
print('TG/quant : AIC = ', aic_tg_quantum_cc_bao)
print('Delta AIC = ', \
aic_lcdm_cc_bao - aic_tg_quantum_cc_bao)
print()
aic_lcdm_cc_bao_sn = 2*3 - 2*loglike_lcdm_cc_bao_sn # estimated ... + M
aic_tg_quantum_cc_bao_sn = 2*4 - 2*get_bfloglike_cc_bao_sn(gdsamples_1)
print('CC + BAO + SNe results')
print('LambdaCDM : AIC = ', aic_lcdm_cc_bao_sn)
print('TGquantum : AIC = ', aic_tg_quantum_cc_bao_sn)
print('Delta AIC = ', \
aic_lcdm_cc_bao_sn - aic_tg_quantum_cc_bao_sn)
###Output
CC + BAO results
LambdaCDM : AIC = 36.075673520122784
TG/quant : AIC = 34.61873710619513
Delta AIC = 1.4569364139276573
CC + BAO + SNe results
LambdaCDM : AIC = 81.24897103548676
TGquantum : AIC = 76.92102582823969
Delta AIC = 4.327945207247069
###Markdown
In both cases the inclusion of the TG/quantum corrections is preferred by the AIC, as $\Delta \text{AIC} > 0$; the preference is mild for CC + BAO and somewhat stronger for CC + BAO + SNe. The BIC can be computed using\begin{equation}\text{BIC} = k \log(n) - 2 \log(\mathcal{L})\end{equation}where $\log(\mathcal{L})$ is the log-likelihood, $n$ is the number of data points, and $k$ is the number of parameters estimated by the model. We can again easily compute this together with $\Delta \text{BIC} = \text{BIC}_{\Lambda\text{CDM}} - \text{BIC}_{\text{TG}}$. The results are printed below.
###Code
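# For reuse, both information criteria can be wrapped in small helpers (an
# illustrative sketch; loglike is the best-fit log-likelihood, k the number of
# estimated parameters, and n the number of data points):
def aic(loglike, k):
    '''Akaike information criterion, AIC = 2k - 2 ln(L).'''
    return 2*k - 2*loglike
def bic(loglike, k, n):
    '''Bayesian information criterion, BIC = k ln(n) - 2 ln(L).'''
    return k*np.log(n) - 2*loglike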
print('CC + BAO results')
n_cc_bao = len(z_cc)
bic_lcdm_cc_bao = 2*np.log(n_cc_bao) - 2*loglike_lcdm_cc_bao # estimated H0, om0
bic_tg_quantum_cc_bao = 3*np.log(n_cc_bao) - 2*get_bfloglike_cc_bao(gdsamples_0) # estimated H0, om0, eps
print('LambdaCDM : BIC = ', bic_lcdm_cc_bao)
print('TG/quant : BIC = ', bic_tg_quantum_cc_bao)
print('Delta BIC = ', \
bic_lcdm_cc_bao - bic_tg_quantum_cc_bao)
print()
n_cc_bao_sn = len(z_cc) + len(z_ps)
bic_lcdm_cc_bao_sn = 3*np.log(n_cc_bao_sn) - 2*loglike_lcdm_cc_bao_sn # estimated ... + M
bic_tg_quantum_cc_bao_sn = 4*np.log(n_cc_bao_sn) - 2*get_bfloglike_cc_bao_sn(gdsamples_1)
print('CC + BAO + SNe results')
print('LambdaCDM : BIC = ', bic_lcdm_cc_bao_sn)
print('TG/quant : BIC = ', bic_tg_quantum_cc_bao_sn)
print('Delta BIC = ', \
bic_lcdm_cc_bao_sn - bic_tg_quantum_cc_bao_sn)
###Output
CC + BAO results
LambdaCDM : BIC = 40.16177605579188
TG/quant : BIC = 40.74789090969878
Delta BIC = -0.5861148539068992
CC + BAO + SNe results
LambdaCDM : BIC = 88.9731039709969
TG/quant : BIC = 87.21986974225322
Delta BIC = 1.753234228743679
###Markdown
We find here that CC + BAO prefers the $\Lambda$CDM model $\left( \Delta \text{BIC} < 0 \right)$ over the inclusion of quantum corrections, whereas CC + BAO + SNe retains a mild preference for the quantum-corrected model $\left( \Delta \text{BIC} > 0 \right)$. Appendix: A quantum corrected DE EoS It is additionally insightful to look at the dark energy equation of state. This is computed below considering the contributions sourcing an accelerated expansion phase through the modified Friedmann equations.
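Concretely, the dark energy equation of state used in the plot below (shown as $1 + w$) combines the cosmological constant and higher-order (HO) contributions as\begin{equation}w \left( z \right) = \frac{P_{\Lambda} + P_{\text{HO}}}{\rho_{\Lambda} + \rho_{\text{HO}}} ,\end{equation}where the individual energy densities and pressures are the ones implemented in the next cell.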
###Code
def rhoLambda(H0, om0, eps):
lmd = 1 - om0 + eps
Lmd = lmd*(3*(H0**2))
q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0))
alpha = q/(H0**2)
Hz = H0*nsol(om0 = om0, eps = eps).y[0]
return Lmd + 24*alpha*Lmd*Hz**2
def preLambda(H0, om0, eps):
lmd = 1 - om0 + eps
Lmd = lmd*(3*(H0**2))
q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0))
alpha = q/(H0**2)
Hz = H0*nsol(om0 = om0, eps = eps).y[0]
z = z_late
Hpz = H0*F(z, Hz/H0, om0, eps)
return -Lmd*(1 + 24*alpha*(Hz**2) \
- 16*(1 + z)*alpha*Hz*Hpz)
def wLambda(H0, om0, eps):
return preLambda(H0, om0, eps)/rhoLambda(H0, om0, eps)
def rhoHO(H0, om0, eps):
lmd = 1 - om0 + eps
Lmd = lmd*(3*(H0**2))
q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0))
alpha = q/(H0**2)
Hz = H0*nsol(om0 = om0, eps = eps).y[0]
return -108*alpha*(Hz**4)
def preHO(H0, om0, eps):
lmd = 1 - om0 + eps
Lmd = lmd*(3*(H0**2))
q = -(-1 + lmd + om0)/(6*(-6 + 4*lmd - om0))
alpha = q/(H0**2)
Hz = H0*nsol(om0 = om0, eps = eps).y[0]
z = z_late
Hpz = H0*F(z, Hz/H0, om0, eps)
return 36*alpha*(Hz**3)*(3*Hz - 4*(1 + z)*Hpz)
def wHO(H0, om0, eps):
return preHO(H0, om0, eps)/rhoHO(H0, om0, eps)
def wLambdaPlusHO(H0, om0, eps):
preTot = preLambda(H0, om0, eps) + preHO(H0, om0, eps)
rhoTot = rhoLambda(H0, om0, eps) + rhoHO(H0, om0, eps)
return preTot/rhoTot
def plot_best_fit_wz(gdxs, lbls, lsts, save = False, fname = None, folder = None, fig_format = 'pdf'):
'''plots the best fit DE EoS including quantum corrections'''
fig, ax = plt.subplots()
for i in np.arange(0, len(gdxs)):
stats = gdxs[i].getMargeStats()
H0 = stats.parWithName("H0").mean
om0 = stats.parWithName("om0").mean
eps = stats.parWithName("eps").mean
wz = wLambdaPlusHO(H0 = H0, om0 = om0, eps = eps)
ax.plot(z_late, 1 + wz, lsts[i], label = lbls[i])
ax.plot(z_late, np.array([0]*len(z_late)), "k:", label = r"$\Lambda$CDM")
ax.set_xlim(0, 1)
ax.set_ylim(-1.1, 1.1)
ax.set_xlabel('$z$')
ax.set_ylabel('$1 + w(z)$')
ax.legend(loc = 'upper right', prop = {'size': 9.5})
if save == True:
fig.savefig(folder + '/' + fname + '.' + fig_format, bbox_inches = 'tight')
###Output
_____no_output_____
###Markdown
Here we go with the plot.
###Code
plot_best_fit_wz([gdsamples_0, gdsamples_1],
['TG/quant: CC + BAO', 'TG/quant: CC + BAO + SNe'],
['r-', 'b--'])
###Output
_____no_output_____ |
2017_2018_BGMP/Summer_2017/PS4.ipynb | ###Markdown
Bi 621 – Problem Set 4 Due before class, Thursday, July 13. Our goal with this assignment is to assess the overall quality of a lane of Illumina sequencing data. We will be working with (part of) a lane of 101bp long Illumina sequence data. Our first goal is to calculate the average quality score along each of the 101bp of data we have. Once that is complete, we will extend our calculations to include variance and standard deviation, and in part 3, the median. Finally, you will calculate and then plot the distribution of quality scores at two particular basepair positions. This assignment will take you through the algorithms step-wise. In the real world, you would update the original code to add functionality. However, since this is auto-graded, please "write new code" for each step instead of simply updating the code you have written previously. Part 1 1. Copy the file ```lane1_NoIndex_L001_R1_003.fastq.gz``` from HPC to the folder **directly above** the PS4 folder and reference it with the relative file path (the automated grading will fail if you do not follow this direction). This file contains 4 million Illumina reads from a stickleback experiment. 2. Create an array called ```mean_scores``` that contains 101 elements, each a float value initialized to ```0.0```.
###Code
mean_scores = []
for i in range(101):
mean_scores.append(0.0)
#print (mean_scores)
#int((4/4)-1)
print (mean_scores[100])
###Output
0.0
###Markdown
The next cell contains tests for the autograder. Do not change!
###Code
assert len(mean_scores) == 101
assert mean_scores[54] == 0.0
###Output
_____no_output_____
###Markdown
3. Open the FASTQ file and loop through every record (*recommend testing your code with a smaller subsample of the file*). Convert the Phred quality score from a letter to its corresponding number and add it to an ongoing sum of the quality scores for each base pair. So, the quality score of the first nucleotide of every read will be summed together in position 0 of the array you create. Likewise, the quality scores of the 101st nucleotide will be stored in position 100 of the array. 4. Keep a counter called ```NR``` to keep track of the total number of lines in the file.
###Code
my_file=open("../lane1_NoIndex_L001_R1_003.fastq")
quality_lines = []
NR = 0
#To convert phred scores to numbers
def convert_phred(letter):
"""Converts a single character into a phred score"""
QScore = ord(letter) - 33
return QScore
#Populate the mean scores
for lines in my_file:
NR = NR + 1
if NR%500000==0:
print (NR)
if NR%4 == 0:
quality_lines.append(lines)
for i in range((len(lines)-1)):
mean_scores[i] = mean_scores[i] + convert_phred(lines[i])
#Some quality assurance
print(NR)
print (mean_scores)
my_file.close()
###Output
500000
1000000
1500000
2000000
2500000
3000000
3500000
4000000
4500000
5000000
5500000
6000000
6500000
7000000
7500000
8000000
8500000
9000000
9500000
10000000
10500000
11000000
11500000
12000000
12500000
13000000
13500000
14000000
14500000
15000000
15500000
16000000
16000000
[128569832.0, 129955000.0, 130152756.0, 143193097.0, 142560058.0, 142735776.0, 144147154.0, 144874873.0, 152909739.0, 152231586.0, 152684471.0, 152606648.0, 148525137.0, 153908471.0, 153802650.0, 153048642.0, 153073269.0, 152871222.0, 152541135.0, 152340346.0, 152328419.0, 151711954.0, 151722802.0, 151667570.0, 151069689.0, 151104371.0, 150687769.0, 150403702.0, 150064290.0, 149549938.0, 149350552.0, 149843901.0, 150173777.0, 149896596.0, 149709729.0, 149494051.0, 149042583.0, 148731300.0, 148612738.0, 148081339.0, 148018035.0, 147751809.0, 147325106.0, 146737857.0, 146142046.0, 145433175.0, 145113295.0, 144383287.0, 143416343.0, 142914506.0, 142617221.0, 141531748.0, 141050749.0, 140439299.0, 139813864.0, 139021898.0, 138256669.0, 136994387.0, 136278288.0, 135401719.0, 134460102.0, 133890330.0, 134192586.0, 133624491.0, 133333175.0, 132959275.0, 132219574.0, 131669343.0, 130951785.0, 130275619.0, 129508774.0, 128657293.0, 128101852.0, 127473235.0, 126783748.0, 121149285.0, 123769805.0, 125132601.0, 125361116.0, 125361992.0, 124941773.0, 124250119.0, 123893821.0, 123444591.0, 122797790.0, 122260403.0, 121654215.0, 121481266.0, 120993976.0, 120488393.0, 120037609.0, 119753727.0, 118964379.0, 118204055.0, 117765212.0, 117200803.0, 116059168.0, 115393184.0, 114431431.0, 113099031.0, 101933304.0]
###Markdown
The next cell contains tests for the autograder. Do not change!
###Code
assert len(mean_scores) == 101
assert mean_scores[54] == 139813864.0
assert mean_scores[70] == 129508774.0
###Output
_____no_output_____
###Markdown
1. Once you have completed summing the quality values, you will calculate the mean quality value at each base and store it back in the array at the appropriate position. 2. Finally, you will print to the terminal the mean quality scores from the Illumina reads, like this:
```
Base Pair    Mean Quality Score
0    33.8
1    27.2
2    31.9
...
```
3. Plot these results any way you know how: using Excel, R, gnuplot... **Challenge** - plot these results inline in your Jupyter Notebook.> *Hint* - if tackling the challenge, look into matplotlib and the "magic" command ```%matplotlib inline```
###Code
for i in range(101):
mean_scores[i]=mean_scores[i]/4000000
print ("# Base Pair","\t","Mean Quality Score")
for i in range(101):
print(i,"\t",mean_scores[i])
#Use this cell to generate your plot in Jupyter.
#Otherwise, issue a print statement so that the name of your plot file is printed.
print("PS4_Mean_QS.pdf")
###Output
PS4_Mean_QS.pdf
###Markdown
Part 2 – Extended statistics Calculate the variance and the standard deviation for each position in the Illumina reads. Do not use any statistics packages for this assignment. 1. You will now create two additional arrays of the same size, one to hold the variance (called ```var```) and one to hold the standard deviation (called ```stdev```) for each position in the read. 2. Modify your print code to include two additional columns to print the variance and standard deviation along with the mean. Include the standard deviation in a new plot.
###Code
my_file=open("../lane1_NoIndex_L001_R1_003.fastq")
NR = 0
var=[]
stdev=[]
from math import sqrt
#Find mean for whole position data set
mean=0
mean = sum(mean_scores) / len(mean_scores)
print("The mean is",mean)
#Make the variance and stdev lists
for i in range(101):
var.append(0)
stdev.append(0)
#Calculate the variance of the data set
for i in range(101):
qualitylist=[]
for lines in quality_lines:
qualitylist.append(convert_phred(lines[i]))
var[i] = sum([(scores-mean_scores[i])**2 for scores in qualitylist])/((len(qualitylist)))
#Calculate the stdev of data set
for i in range(101):
stdev[i]=sqrt(var[i])
#Print my tables
print ("# Base Pair","\t","Mean Quality Score","\t","Variance","\t","Standard Deviation")
for i in range(101):
print(i,"\t\t",mean_scores[i],"\t",var[i],"\t",stdev[i])
my_file.close()
#Print the name of the stdev plot
print("PS4_stdev_plot.pdf")
###Output
The mean is 34.34788233663367
# Base Pair Mean Quality Score Variance Standard Deviation
0 32.142458 12.952700218371163 3.598985998635055
1 32.48875 10.880134937695138 3.2985049549296024
2 32.538189 10.222612600400359 3.197282064566772
3 35.79827425 13.607210971709382 3.688795327977602
4 35.6400145 16.405214938805866 4.050335163761867
5 35.683944 16.74728160598006 4.092344267773676
6 36.0367885 7.815591605858491 2.7956379604409602
7 36.21871825 8.036954576788029 2.8349523059106354
8 38.22743475 10.666837684751082 3.266012505296188
9 38.0578965 12.03730649490885 3.469482165238618
10 38.17111775 11.333607464793001 3.36654236046318
11 38.151662 11.807116137183005 3.4361484451610944
12 37.13128425 27.840401696013693 5.276400448792121
13 38.47711775 36.09485990234785 6.0078997913037675
14 38.4506625 36.93192431221588 6.077164166962735
15 38.2621605 40.84198987312479 6.390773808634193
16 38.26831725 40.58089160197402 6.370313304851969
17 38.2178055 42.27536176509091 6.5019506123232675
18 38.13528375 45.08207455767075 6.714318621995143
19 38.0850865 45.99902178624699 6.782257867867234
20 38.08210475 46.249774060966125 6.800718642979294
21 37.9279885 48.20084434560245 6.94268279165932
22 37.9307005 48.31301707988857 6.950756583271246
23 37.9168925 48.833109142263325 6.988069056775507
24 37.76742225 50.55906534086687 7.11048981019359
25 37.77609275 50.86549229522403 7.1320047879417485
26 37.67194225 52.221783860847424 7.226464132675636
27 37.6009255 52.88080504253976 7.271918938116662
28 37.5160725 54.21834917557843 7.36331101988626
29 37.3874845 55.56459226179312 7.454166101033242
30 37.337638 54.408547081506704 7.376214956297485
31 37.46097525 54.87582706777441 7.407822019175029
32 37.54344425 54.08653559836713 7.354354873023679
33 37.474149 55.31477922640143 7.437390619457972
34 37.42743225 56.669778921773094 7.527933243711257
35 37.37351275 57.458895475277906 7.5801646073998885
36 37.26064575 59.012464541591044 7.681957077567606
37 37.182825 59.92894851806383 7.741378980392565
38 37.1531845 60.535323008091574 7.780444910677767
39 37.02033475 62.15532125022775 7.883864613895127
40 37.00450875 62.3257979232098 7.894668955897378
41 36.93795225 63.299527326200845 7.956100007302626
42 36.8312765 64.61587687810368 8.038400144189369
43 36.68446425 66.24383593911183 8.139031633991346
44 36.5355115 67.60667593153266 8.222327890052346
45 36.35829375 69.4599953398949 8.334266334830852
46 36.27832375 70.01837464096309 8.367698288117413
47 36.09582175 72.08625644082186 8.490362562389302
48 35.85408575 74.03334728124689 8.604263320078418
49 35.7286265 74.56712342392059 8.63522573091871
50 35.65430525 74.57443889468446 8.635649303595212
51 35.382937 76.88448275567981 8.768379710965979
52 35.26268725 77.48673315815286 8.802654892596486
53 35.10982475 77.83423777471161 8.82237143713138
54 34.953466 78.39560158546303 8.854129069844364
55 34.7554745 79.45757927987385 8.91389809678537
56 34.56416725 80.47486306462852 8.970778286449205
57 34.24859675 82.7672419058345 9.097650350823255
58 34.069572 83.22041923570298 9.122522635527027
59 33.85042975 83.85830998899961 9.157418303703267
60 33.6150255 84.40649813311019 9.187300916651756
61 33.4725825 84.76138728121794 9.2065947712071
62 33.5481465 83.8294079149816 9.155840098810245
63 33.40612275 84.60224106175872 9.19794765487164
64 33.33329375 84.05424602616455 9.168110275632845
65 33.23981875 83.33322271863912 9.128703233134436
66 33.0548935 83.65975470320365 9.146570652610936
67 32.91733575 83.23365537301177 9.1232480714388
68 32.73794625 83.4656185826031 9.135951980095074
69 32.56890475 83.44974413646531 9.135083148853399
70 32.3771935 83.34844456367834 9.129536930407715
71 32.16432325 84.16248561962686 9.174011424650988
72 32.025463 84.00785463149336 9.165579885173297
73 31.86830875 84.20107566565068 9.176114410013133
74 31.695937 84.65350469116184 9.20073392133268
75 30.28732125 81.24797724983777 9.013765985970446
76 30.94245125 82.57412639004744 9.087030669588797
77 31.28315025 83.31254268479213 9.127570469998691
78 31.340279 83.80572070013976 9.154546449723206
79 31.340498 83.93234461676349 9.161459742680938
80 31.23544325 84.81540122323243 9.209527741596332
81 31.06252975 86.1588792785196 9.282180739380138
82 30.97345525 86.90894362743315 9.322496641320562
83 30.86114775 87.95285980180901 9.378318602063432
84 30.6994475 89.5636141962187 9.46380548174035
85 30.56510075 90.94680189360346 9.53660326812453
86 30.41355375 92.73932204403393 9.630125754320861
87 30.3703165 93.52122769189366 9.670637398429003
88 30.248494 95.23140223483622 9.758657809086055
89 30.12209825 96.93030726598515 9.84531905353936
90 30.00940225 98.67662734771123 9.93361099236885
91 29.93843175 100.34008860213656 10.016989997106744
92 29.74109475 103.5947328196479 10.17814977388562
93 29.55101375 106.56350009583424 10.322959851507427
94 29.441303 109.06364366384379 10.443354042827611
95 29.30020075 112.46938026079094 10.605158191219541
96 29.014792 117.48335769622412 10.838974014925219
97 28.848296 122.0477213980118 11.047521052164228
98 28.60785775 128.20698820419807 11.322852476483039
99 28.27475775 135.97499343040755 11.66083159257553
100 25.483326 138.51283297838611 11.769147504317639
###Markdown
The next cell contains tests for the autograder. Do not change!
###Code
assert len(var) == 101
assert len(stdev) == 101
assert type(var[54]) == float
assert type(stdev[54]) == float
assert var[54] == 78.39560158546303
assert stdev[54] == 8.854129069844364
assert var[89] == 96.93030726598515
assert stdev[89] == 9.84531905353936
###Output
_____no_output_____
###Markdown
Part 3 You will now calculate the median value of each position in the read as well as the full distribution for nucleotide positions 6 and 95.**Warning - be aware this portion is computationally intense. It will take some time to finish running. You may want to incorporate some print statements to assure yourself your program is progressing.**1. You will need to create a two dimensional array (called ```all_qscores```). Instead of initializing each position in your one dimensional array to 0.0, you will instead initialize each position to another empty array. 2. Instead of simply summing each value at a particular position in the read, now you are going to store all values for that nucleotide position in the array. So, position 5 of your first array will contain an array with all the quality scores for that position from all the Illumina reads in the data set. 3. Sort the values at each position in the array and determine the median value (store in an array called ```median```). Your program should now print for each position in the read: Base Pair, Mean, Variance, Stdev, and Median. 4. Open a new file for writing called ```p6.tsv``` - for position 6 in the array, sum the number of occurrences of each quality score and print it out to the file. >*Summing the number of occurrences will give you a distribution of quality scores that occurred in position 6 across all the reads in our data set.* 5. Repeat this process for position 95 in the array in a file called ```p95.tsv```. 6. Plot the distribution of quality scores for these two positions. If you would like to challenge yourself, plot inline with matplotlib.
###Code
from collections import Counter
all_qscores = []
median = []
for i in range(101):
all_qscores.append([])
median.append(0)
NR=0
CF=-1
def convert_phred(letter):
"""Converts a single character into a phred score"""
QScore = ord(letter) - 33
return QScore
#Redundant reassignment since I didn't want to run file many times
with open ("../lane1_NoIndex_L001_R1_003.fastq") as fh:
for line in fh:
NR+=1
if NR%500000==0:
print (NR)
if NR%4 == 0:
for i in range(101):
all_qscores[i].append(convert_phred(line[i]))
#Populate the median
for i in range(101):
if i%20 == 0:
print(i)
all_qscores[i]=sorted(all_qscores[i])
median[i]=all_qscores[i][2000000]
#Make the file for position 6
pos6=Counter(all_qscores[5])
p6=open("P6.tsv","w+")
for key, value in pos6.items() :
p6.write(str(key) + "\t" + str(value) + "\n")
p6.close()
#Make the file for position 95
pos95 = Counter(all_qscores[94])
p95=open("P95.tsv","w+")
for key, value in pos95.items() :
p95.write(str(key) + "\t" + str(value) + "\n")
p95.close()
#MY PRINT STATEMENT
print ("# Base Pair","\t","Mean Quality Score","\t","Variance","\t","Standard Deviation","\t","Median")
for i in range(101):
print(i,"\t\t",mean_scores[i],"\t",var[i],"\t",stdev[i],"\t",median[i])
#Just some quality assurance statements
print(len(all_qscores))
print(len(all_qscores[1]))
print(all_qscores[100][826109])
print(all_qscores[0][826109])
print(median[54])
print(median[99])
#Use this cell to generate your position 6 plot in Jupyter.
#Otherwise, issue a print statement so that the name of your plot file is printed.
#Print position 6 plot file name
print("POS_6.pdf")
#Use this cell to generate your position 95 plot in Jupyter.
#Otherwise, issue a print statement so that the name of your plot file is printed.
#Print position 95 plot file name
print("POS_95.pdf")
assert len(all_qscores) == 101
assert len(all_qscores[47]) == 4000000
assert all_qscores[57][826109] == 33
assert median[54] == 76
assert median[99] == 68
assert all_qscores[1][500000:500200] == [31]*200
assert all_qscores[97][3999900:4000000] == [39]*85 + [40]*5 + [41]*10
###Output
_____no_output_____ |
Heart_Disease_&_Violent_Crime.ipynb | ###Markdown
Prepping Coronary Heart Disease Data
###Code
import pandas as pd
import numpy as np
heart = pd.read_csv('500_Cities__Coronary_heart_disease_among_adults_aged___18_years.csv')
heart.head()
print(heart.shape)
print(heart.isnull().sum())
heart= heart.dropna(how='any',axis=0)
print(heart.shape)
print(heart.isnull().sum())
heart['CityName_ST'] = heart['CityName'] + '_' + heart['StateAbbr']
heart.head()
##Averaging together the different instances of reported data per city
heart_gr = heart.groupby(['CityName_ST'])
heart_average = heart_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
heart_average.head()
###Output
_____no_output_____
###Markdown
Prepping Crime Data
###Code
# fbi = pd.read_csv('FBI_data_project_2.csv')
# fbi = fbi.drop('Metropolitan Statistical Area', 1)
fbi = pd.read_csv('FBI data_edit.csv')
print(fbi.shape)
fbi[fbi['Counties/principal cities']=='City of Chicago']
city_crime = fbi[fbi['Counties/principal cities'].str.contains('City of ', na=False)].reset_index()
city_crime.shape
city_crime= city_crime.dropna(how='any',axis=0)
city_crime.isnull().sum()
city_crime.shape
x = 'City of Chicago'
# x.replace(x[:8], '')
def drop_extras(mylist):
"This changes a passed list into this function"
return mylist.replace(x[:8], '')
## Scraping off "City of" so I can match dataframes
city_crime['Counties/principal cities'] = city_crime['Counties/principal cities'].apply(drop_extras)
# city_crime['Counties/principal cities'] = city_crime['Counties/principal cities'].str.strip('City of')
city_crime["Population"] = pd.to_numeric(city_crime["Population"], downcast="float")
city_crime["Violent\rcrime"] = pd.to_numeric(city_crime["Violent\rcrime"], downcast="float")
city_crime['Violent Crime Per 100'] = (city_crime['Violent\rcrime']/city_crime['Population'])*100
city_crime.head(5)
#city_crime[city_crime['Counties/principal cities']=='Chicago']
city_crime.head()
city_crime['CityName_ST'] = city_crime['Counties/principal cities'] + '_' + city_crime['STATE']
city_crime.head()
###Output
_____no_output_____
###Markdown
Merging and Analyzing Crime + Heart Disease
###Code
heart_crime = pd.merge(heart_average, city_crime, on='CityName_ST')
heart_crime.shape
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.title("Heart Disease vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
ax1 = sns.regplot(x = heart_crime['% of Disease Prevalence'], y = heart_crime['Violent Crime Per 100'], data = heart_crime, ci=None)
plt.style.use('seaborn-darkgrid')
plt.title("Heart Disease vs. Population Size", fontsize=15, fontweight=5, color='orange')
ax1 = sns.regplot(x = heart_crime['% of Disease Prevalence'], y = heart_crime['Population'], data = heart_crime, ci=None)
import numpy as np
from scipy.stats import pearsonr
corr, _ = pearsonr(heart_crime['Violent Crime Per 100'], heart_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
corr, _ = pearsonr(heart_crime['Violent Crime Per 100'], heart_crime['Population'])
print('Pearsons correlation: %.3f' % corr)
heart_crime.nlargest(10, '% of Disease Prevalence')
###Output
_____no_output_____
###Markdown
There is a moderate correlation between crime and heart disease. Generally Sick for More Than 14 Days & Violent Crime
###Code
gen_health = pd.read_csv('Cdc_General_health.csv')
gen_health= gen_health.dropna(how='any',axis=0)
gen_health['CityName_ST'] = gen_health['CityName'] + '_' + gen_health['StateAbbr']
gen_health.shape
health_gr = gen_health.groupby(['CityName_ST'])
health_average = health_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
health_average.shape
health_crime = pd.merge(health_average, city_crime, on='CityName_ST')
health_crime.shape
import seaborn as sns
plt.style.use('seaborn-darkgrid')
plt.title("Sick for More than 14 Days vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
ax1 = sns.regplot(x = health_crime['% of Disease Prevalence'], y = health_crime['Violent Crime Per 100'], data = health_crime, ci=None)
corr, _ = pearsonr(health_crime['Violent Crime Per 100'], health_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
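# The prepare -> merge -> correlate steps above repeat for every condition below,
# so they could be factored into a helper like this (an illustrative sketch; it
# assumes a 500 Cities CSV with CityName, StateAbbr, and Data_Value columns and
# reuses the pandas/seaborn/pearsonr imports from the earlier cells):
def disease_vs_crime(csv_path, crime_df, label):
    '''Average Data_Value per city, merge with the crime table, plot the
    relationship, and return the merged frame and Pearson correlation.'''
    df = pd.read_csv(csv_path).dropna(how='any', axis=0)
    df['CityName_ST'] = df['CityName'] + '_' + df['StateAbbr']
    avg = df.groupby('CityName_ST')['Data_Value'].mean().to_frame(
        name='% of Disease Prevalence').reset_index()
    merged = pd.merge(avg, crime_df, on='CityName_ST')
    sns.regplot(x=merged['% of Disease Prevalence'],
                y=merged['Violent Crime Per 100'], data=merged, ci=None)
    r, _ = pearsonr(merged['% of Disease Prevalence'],
                    merged['Violent Crime Per 100'])
    print(label, 'Pearsons correlation: %.3f' % r)
    return merged, r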
###Output
Pearsons correlation: 0.565
###Markdown
There is a moderate correlation between being ill for more than 14 days and living in a higher crime area. Asthma vs Crime
###Code
athsma = pd.read_csv('500_Cities__Current_asthma.csv')
athsma.head()
athsma= athsma.dropna(how='any',axis=0)
athsma['CityName_ST'] = athsma['CityName'] + '_' + athsma['StateAbbr']
athsma_gr = athsma.groupby(['CityName_ST'])
athsma_average = athsma_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
athsma_average.head()
athsma_crime = pd.merge(athsma_average, city_crime, how='inner', on = 'CityName_ST')
athsma_crime.head()
import seaborn as sns
plt.style.use('seaborn-darkgrid')
plt.title("Athsma vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
ax1 = sns.regplot(x = athsma_crime['% of Disease Prevalence'], y = athsma_crime['Violent Crime Per 100'], data = athsma_crime, ci=None)
corr, _ = pearsonr(athsma_crime['Violent Crime Per 100'], athsma_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
athsma_crime.nlargest(10, '% of Disease Prevalence')
###Output
_____no_output_____
###Markdown
There's a moderate correlation between asthma & violent crime in towns. Mental Health & Violent Crime
###Code
mental = pd.read_csv('500_Cities__Mental_health_not_good_for___14_days_among_adults_aged___18_years.csv')
mental.head()
mental= mental.dropna(how='any',axis=0)
mental['CityName_ST'] = mental['CityName'] + '_' + mental['StateAbbr']
mental_gr = mental.groupby(['CityName_ST'])
mental_average = mental_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
mental_average.head()
mental_crime = pd.merge(mental_average, city_crime, how='inner', on ='CityName_ST')
import seaborn as sns
plt.style.use('seaborn-darkgrid')
plt.title("Mental Health vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
ax1 = sns.regplot(x = mental_crime['% of Disease Prevalence'], y = mental_crime['Violent Crime Per 100'], data = mental_crime, ci=None)
corr, _ = pearsonr(mental_crime['Violent Crime Per 100'], mental_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
###Output
Pearsons correlation: 0.590
###Markdown
Cancer(Excluding Skin Cancer) vs. Violent Crime
###Code
cancer = pd.read_csv('500_Cities__Cancer__excluding_skin_cancer__among_adults_aged___18_years.csv')
cancer.head()
cancer = cancer.drop(['Unnamed: 7', 'Unnamed: 8','Unnamed: 9'], 1)
cancer.head()
cancer= cancer.dropna(how='any',axis=0)
cancer['CityName_ST'] = cancer['CityName'] + '_' + cancer['StateAbbr']
cancer_gr = cancer.groupby(['CityName_ST'])
cancer_average = cancer_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
cancer_average.head()
cancer_crime = pd.merge(cancer_average, city_crime, how='inner', on = 'CityName_ST')
cancer_crime.head()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.title("Cancer(Excluding Skin Cancer) vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
import seaborn as sns
ax1 = sns.regplot(x = cancer_crime['% of Disease Prevalence'], y = cancer_crime['Violent Crime Per 100'], data = cancer_crime, ci=None)
corr, _ = pearsonr(cancer_crime['Violent Crime Per 100'], cancer_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
cancer_crime.nlargest(10, '% of Disease Prevalence')
###Output
_____no_output_____
###Markdown
Diabetes vs. Violent Crime
###Code
diabetes = pd.read_csv('500_Cities__Diagnosed_diabetes_among_adults_aged___18_years.csv')
diabetes = diabetes.drop(['Unnamed: 7', 'Unnamed: 8','Unnamed: 9'], 1)
diabetes.head()
diabetes= diabetes.dropna(how='any',axis=0)
diabetes['CityName_ST'] = diabetes['CityName'] + '_' + diabetes['StateAbbr']
diabetes_gr = diabetes.groupby(['CityName_ST'])
diabetes_average = diabetes_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
diabetes_average.head()
diabetes_crime = pd.merge(diabetes_average, city_crime, how='inner', on ='CityName_ST')
diabetes_crime.head()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.title("Diabetes vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
import seaborn as sns
ax1 = sns.regplot(x = diabetes_crime['% of Disease Prevalence'], y = diabetes_crime['Violent Crime Per 100'], data = diabetes_crime, ci=None)
corr, _ = pearsonr(diabetes_crime['Violent Crime Per 100'], diabetes_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
diabetes_crime.nlargest(10, '% of Disease Prevalence')
###Output
_____no_output_____
###Markdown
Chronic Kidney Disease vs. Violent Crime
###Code
kidney = pd.read_csv('500_Cities__Chronic_kidney_disease_among_adults_aged___18_years.csv')
kidney = kidney.drop(['Unnamed: 15', 'Unnamed: 16','Unnamed: 17'], 1)
kidney.head()
kidney= kidney.dropna(how='any',axis=0)
kidney['CityName_ST'] = kidney['CityName'] + '_' + kidney['StateAbbr']
kidney_gr = kidney.groupby(['CityName_ST'])
kidney_average = kidney_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
kidney_average.head()
kidney_crime = pd.merge(kidney_average, city_crime, how='inner', on = 'CityName_ST')
kidney_crime.head()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.title("Kidney Disease vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
import seaborn as sns
ax1 = sns.regplot(x = kidney_crime['% of Disease Prevalence'], y = kidney_crime['Violent Crime Per 100'], data = kidney_crime, ci=None)
corr, _ = pearsonr(kidney_crime['Violent Crime Per 100'], kidney_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
kidney_crime.nlargest(10, '% of Disease Prevalence')
###Output
_____no_output_____
###Markdown
Alcohol Consumption vs. Violent Crime
###Code
alcohol = pd.read_csv('500_Cities__Binge_drinking_among_adults_aged_new.csv')
alcohol.shape
alcohol= alcohol.dropna(how='any',axis=0)
alcohol['CityName_ST'] = alcohol['CityName'] + '_' + alcohol['StateAbbr']
alcohol_gr = alcohol.groupby(['CityName_ST'])
alcohol_average = alcohol_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
alcohol_average.head()
alcohol_crime = pd.merge(alcohol_average, city_crime, how='inner', on = 'CityName_ST')
alcohol_crime.head()
###Output
_____no_output_____
###Markdown
###Code
alcohol_crime.nsmallest(20, '% of Disease Prevalence')
###Output
_____no_output_____
###Markdown
Appleton, Wisconsin has earned the reputation for being the drunkest city in America ([source](https://www.fox6now.com/news/appleton-named-drunkest-city-in-america-seven-wisconsin-cities-in-the-top-10)). The top cities are around colleges, which could explain part of the binge drinking statistics. The binge drinking study also does not offer any indication as to how long someone engages in binge drinking behaviors, so it may not be the best indicator of the health of a city.
###Code
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.title("Binge Drinking vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
import seaborn as sns
ax1 = sns.regplot(x = alcohol_crime['% of Disease Prevalence'], y = alcohol_crime['Violent Crime Per 100'], data = alcohol_crime, ci=None)
corr, _ = pearsonr(alcohol_crime['Violent Crime Per 100'], alcohol_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
###Output
Pearsons correlation: -0.221
###Markdown
Obesity & Violent Crime
###Code
obesity = pd.read_csv('500_Cities__Obesity_among_adults_aged___18_years.csv')
obesity.head()
# alcohol = alcohol.drop(['Unnamed: 15', 'Unnamed: 16','Unnamed: 17'], 1)
obesity['CityName_ST'] = obesity['CityName'] + '_' + obesity['StateAbbr']
obesity= obesity.dropna(how='any',axis=0)
obesity_gr = obesity.groupby(['CityName_ST'])
obesity_average = obesity_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
obesity_average.head()
obesity_crime = pd.merge(obesity_average, city_crime, how='inner', on = 'CityName_ST')
obesity_crime.head()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.title("Obesity vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
import seaborn as sns
ax1 = sns.regplot(x = obesity_crime['% of Disease Prevalence'], y = obesity_crime['Violent Crime Per 100'], data = obesity_crime, ci=None)
corr, _ = pearsonr(obesity_crime['Violent Crime Per 100'], obesity_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
print(obesity_crime['CityName_ST'])
obesity_crime.nlargest(10, '% of Disease Prevalence')
###Output
_____no_output_____
###Markdown
Smoking and Violent Crime
###Code
smoke = pd.read_csv('500_Cities__Current_smoking_among_adults_aged___18_years.csv')
smoke.head()
smoke = smoke.drop(['Data_Value_Unit', 'DataValueTypeID','Data_Value_Type','Data_Value_Footnote_Symbol', 'Data_Value_Footnote', 'PopulationCount','CategoryID','CityFIPS', 'MeasureId', 'TractFIPS', 'Short_Question_Text'], 1)
smoke['CityName_ST'] = smoke['CityName'] + '_' + smoke['StateAbbr']
smoke= smoke.dropna(how='any',axis=0)
smoke_gr = smoke.groupby(['CityName_ST'])
smoke_average = smoke_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
smoke_average.head()
smoke_crime = pd.merge(smoke_average, city_crime, how='inner', on = 'CityName_ST')
smoke.head()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.title("Smoking vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
import seaborn as sns
ax1 = sns.regplot(x = smoke_crime['% of Disease Prevalence'], y = smoke_crime['Violent Crime Per 100'], data = smoke_crime, ci=None)
corr, _ = pearsonr(smoke_crime['Violent Crime Per 100'], smoke_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
###Output
Pearsons correlation: 0.605
###Markdown
Stroke vs. Violent Crime
###Code
stroke = pd.read_csv('500_Cities__Stroke_among_adults_aged___18_years.csv')
stroke.head()
stroke = stroke.drop(['Data_Value_Unit', 'DataValueTypeID','Data_Value_Type','Data_Value_Footnote_Symbol', 'Data_Value_Footnote', 'PopulationCount','CategoryID','CityFIPS', 'MeasureId', 'TractFIPS', 'Short_Question_Text'], 1)
stroke['CityName_ST'] = stroke['CityName'] + '_' + stroke['StateAbbr']
stroke= stroke.dropna(how='any',axis=0)
stroke_gr = stroke.groupby(['CityName_ST'])
stroke_average = stroke_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
stroke_average.head()
stroke_crime = pd.merge(stroke_average, city_crime, how='inner', on = 'CityName_ST')
stroke.head()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.title("Stroke vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
import seaborn as sns
ax1 = sns.regplot(x = stroke_crime['% of Disease Prevalence'], y = stroke_crime['Violent Crime Per 100'], data = stroke_crime, ci=None)
corr, _ = pearsonr(stroke_crime['Violent Crime Per 100'], stroke_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
# Repeat the plot and correlation for coronary heart disease so the two conditions can be compared
plt.title("Coronary Heart Disease vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
ax1 = sns.regplot(x = heart_crime['% of Disease Prevalence'], y = heart_crime['Violent Crime Per 100'], data = heart_crime, ci=None)
corr, _ = pearsonr(heart_crime['Violent Crime Per 100'], heart_crime['% of Disease Prevalence'])
print('Pearsons correlation: %.3f' % corr)
stroke_crime.nlargest(20, '% of Disease Prevalence')
heart_crime.nlargest(10, '% of Disease Prevalence')
###Output
_____no_output_____
###Markdown
This was just me trying to understand why stroke rates have a higher correlation than coronary heart disease.
###Code
test = pd.merge(stroke_crime, heart_crime, how='inner', on = 'CityName_ST')
test.head()
ax1 = sns.regplot(x = test['% of Disease Prevalence_x'], y = test['% of Disease Prevalence_y'], data = test, ci=None)
corr, _ = pearsonr(test['% of Disease Prevalence_x'], test['% of Disease Prevalence_y'])
print('Pearsons correlation: %.3f' % corr)
###Output
Pearsons correlation: 0.921
###Markdown
Cholesterol Screenings
###Code
cholesterol = pd.read_csv('500_Cities__Cholesterol_screening_among_adults_aged___18_years.csv')
cholesterol.head()
cholesterol = cholesterol.drop(['Data_Value_Unit', 'DataValueTypeID','Data_Value_Type','Data_Value_Footnote_Symbol', 'Data_Value_Footnote', 'PopulationCount','CategoryID','CityFIPS', 'MeasureId', 'TractFIPS', 'Short_Question_Text'], axis=1)
cholesterol['CityName_ST'] = cholesterol['CityName'] + '_' + cholesterol['StateAbbr']
cholesterol= cholesterol.dropna(how='any',axis=0)
cholesterol_gr = cholesterol.groupby(['CityName_ST'])
cholesterol_average = cholesterol_gr['Data_Value'].mean().to_frame(name='% of Disease Prevalence').reset_index()
cholesterol_average.head()
cholesterol_crime = pd.merge(cholesterol_average, city_crime, how='inner', on = 'CityName_ST')
cholesterol_crime.head()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.title("Cholesterol Screenings vs. Violent Crime", fontsize=15, fontweight=5, color='orange')
import seaborn as sns
ax1 = sns.regplot(x = cholesterol_crime['% of Disease Prevalence'], y = cholesterol_crime['Violent Crime Per 100'], data = cholesterol_crime, ci=None)
corr, _ = pearsonr(cholesterol_crime['% of Disease Prevalence'], cholesterol_crime['Violent Crime Per 100'])
print('Pearsons correlation: %.3f' % corr)
cholesterol_crime[cholesterol_crime['CityName_ST']=='Detroit_MI']
###Output
_____no_output_____ |
Data_Science_Batch_1_Assignment_4.ipynb | ###Markdown
###Code
# Questions 1:
# How to import pandas and check the version?
import pandas as pd
print(pd.__version__)
# Questions 2:
# How to create a series from a numpy array?
import pandas as pd
import numpy as np
# numpy array
data = np.array(['S', 'N', 'E', 'H', 'A', 'L'])
# creating series
s = pd.Series(data)
print(s)
# Questions 3:
# How to convert the index of a series into a column of a dataframe?
import pandas as pd
# Creating the dataframe df
df = pd.DataFrame({'Roll Number': ['1', '2', '3', '4'],
'Name': ['Ayesha', 'Sonali', 'Aarti', 'Snehal'],
'Marks In Percentage': [97, 90, 70, 82],
'Grade': ['A', 'A', 'C', 'B'],
'Subject': ['Physics', 'Physics', 'Physics', 'Physics']})
# Printing the dataframe
df
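# Question 3 asks how to convert the index of a Series into a column of a DataFrame; the cell
# above builds a DataFrame directly, so here is a minimal sketch of the actual conversion
# (the example Series below is hypothetical, not part of the original assignment):
marks = pd.Series([97, 90, 70, 82], index=['Ayesha', 'Sonali', 'Aarti', 'Snehal'], name='Marks In Percentage')
marks_df = marks.reset_index()                        # the index becomes an ordinary column
marks_df = marks_df.rename(columns={'index': 'Name'}) # give the new column a clearer name
print(marks_df)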
# Questions 4:
# Write the code to list all the datasets available in seaborn library.
import seaborn as sns
print(sns.get_dataset_names())
# Load the tips and mpg datasets
tips = sns.load_dataset('tips')
tips.head()
mpg = sns.load_dataset('mpg')
mpg.head()
print(mpg)
# Questions 5:
# Which country origin cars are a part of this dataset?
import pandas as pd
import seaborn as sns
mpg = sns.load_dataset('mpg')
df = pd.DataFrame(mpg)
df.origin.unique()
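# Optional follow-up (not asked in the assignment): how many cars come from each origin
print(df['origin'].value_counts())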
# Questions 6:
# Extract the part of the dataframe which contains cars belonging to usa
import pandas as pd
import seaborn as sns
mpg = sns.load_dataset('mpg')
df = pd.DataFrame(mpg)
df[df['origin'].str.contains('usa')]
###Output
_____no_output_____ |
Panda/DataFrame.ipynb | ###Markdown
Introduction to Pandas DataFrame A DataFrame is the main object in pandas. It is used to represent data with rows and columns, similar to a table in an Excel spreadsheet. Creating a DataFrame:
###Code
import pandas as pd
df = pd.read_csv('covid19.csv') #read covid19.csv data -> csv : comma separated values
df
"""
Above data has 8 columns and 306429 rows.
pandas reads the csv file using the read_csv function.
"""
covid_data = [('4/1/2021','Delhi','India','4/1/2021 , 17:00',10.0,10.0,0.0),
('5/1/2021','Delhi','India','5/1/2021 , 17:00',15.0,15.0,0.0),
('6/1/2021','Delhi','India','6/1/2021 , 17:00',20.0,20.0,0.0),
('7/1/2021','Delhi','India','7/1/2021 , 17:00',200.0,100.0,100.0),
('8/1/2021','Delhi','India','8/1/2021 , 17:00',1000.0,900.0,100.0),
('9/1/2021','Delhi','India','9/1/2021 , 17:00',4000.0,3900.0,100.0),
]
df = pd.DataFrame(covid_data, columns = ['Date', 'State', 'Country', 'Last update','Confirmed','Deaths','Recovered'])
df
#get dimensions of the table
df.shape #total number of rows and columns
#if you want to see the first few rows then use the head command (DEFAULT: 5 ROWS)
df.head()
#if you want to see the last few rows then use the tail command (default : last 5 rows)
df.tail()
#slicing
df[3:6] # if i wish to see latest data
df.Date #print particular column data
#another way of accessing column
df['Confirmed'] #df.Confirmed (both are same)
#get 2 or more data
df[['Confirmed','Deaths','Recovered']]
#get all Deaths
df['Deaths']
#print max Death
df['Deaths'].max()
#print min Deaths
df['Deaths'].min()
#print summary statistics of Deaths
df['Deaths'].describe()
#select the row having maximum Deaths
df[df.Deaths == df.Deaths.max()]
#select only the date which has maximum Deaths
df.Date[df.Deaths == df.Deaths.max()]
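#extra illustration on the same data: boolean conditions can be combined with & and |
df[(df.Deaths == 100.0) & (df.Confirmed > 500.0)] #dates where deaths stayed at 100 but confirmed cases kept rising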
###Output
_____no_output_____ |
tutorials/asr/ASR_with_Transducers.ipynb | ###Markdown
Automatic Speech Recognition with Transducer ModelsThis notebook is a basic tutorial for creating a Transducer ASR model and then training it on a small dataset (AN4). It includes discussion relevant to reducing memory issues when training such models and demonstrates how to change the decoding strategy after training. Finally, it also provides a brief glimpse of extracting alignment information from a trained Transducer model.As we will see in this tutorial, apart from the differences in the config and the class used to instantiate the model, nearly all steps are precisely similar to any CTC-based model training. Many concepts such as data loader setup, optimization setup, pre-trained checkpoint weight loading will be nearly identical between CTC and Transducer models.In essence, NeMo makes it seamless to take a config for a CTC ASR model, add in a few components related to Transducers (often without any modifications) and use a different class to instantiate a Transducer model!--------**Note**: It is assumed that the previous tutorial - "Intro-to-Transducers" has been reviewed, and there is some familiarity with the config components of transducer models. Preparing the datasetIn this tutorial, we will be utilizing the `AN4`dataset - also known as the Alphanumeric dataset, which was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly. Let's first download the preparation script from NeMo's scripts directory -
###Code
import os
if not os.path.exists("scripts/"):
os.makedirs("scripts")
if not os.path.exists("scripts/process_an4_data.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_an4_data.py
###Output
_____no_output_____
###Markdown
------Download and prepare the two subsets of `AN 4`
###Code
import wget
import tarfile
import subprocess
import glob
data_dir = "datasets"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
if os.path.exists(f"{data_dir}/an4"):
print("Preparing AN4 dataset ...")
an4_path = f"{data_dir}/"
!python scripts/process_an4_data.py \
--data_root=$an4_path
print("AN4 prepared !")
# Manifest filepaths
TRAIN_MANIFEST = os.path.join(data_dir, "an4", "train_manifest.json")
TEST_MANIFEST = os.path.join(data_dir, "an4", "test_manifest.json")
###Output
_____no_output_____
###Markdown
Preparing the tokenizerNow that we have a dataset ready, we need to decide whether to use a character-based model or a sub-word-based model. For completeness' sake, we will use a tokenizer based model so that we can leverage a modern encoder architecture like ContextNet or Conformer-T.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
-----Since the dataset is tiny, we can use a small SentencePiece based tokenizer. We always delete the tokenizer directory so any changes to the manifest files are always replicated in the tokenizer.
###Code
VOCAB_SIZE = 32 # can be any value above 29
TOKENIZER_TYPE = "spe" # can be wpe or spe
SPE_TYPE = "unigram" # can be bpe or unigram
# ------------------------------------------------------------------- #
!rm -r tokenizers/
if not os.path.exists("tokenizers"):
os.makedirs("tokenizers")
!python scripts/process_asr_text_tokenizer.py \
--manifest=$TRAIN_MANIFEST \
--data_root="tokenizers" \
--tokenizer=$TOKENIZER_TYPE \
--spe_type=$SPE_TYPE \
--no_lower_case \
--log \
--vocab_size=$VOCAB_SIZE
# Tokenizer path
if TOKENIZER_TYPE == 'spe':
TOKENIZER = os.path.join("tokenizers", f"tokenizer_spe_{SPE_TYPE}_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "bpe"
else:
TOKENIZER = os.path.join("tokenizers", f"tokenizer_wpe_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "wpe"
###Output
_____no_output_____
###Markdown
Preparing a Transducer ModelNow that we have the dataset and tokenizer prepared, let us begin by setting up the config of the Transducer model! In this tutorial, we will build a slightly modified ContextNet architecture (which is obtained from the paper [ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context](https://arxiv.org/abs/2005.03191)).We can note that many of the steps here are identical to the setup of a CTC model! Prepare the configFor a dataset such as AN4, we do not need such a deep model. In fact, the depth of this model will cause much slower convergence on a small dataset, which would require far too long to train on Colab.In order to speed up training for this demo, we will take only the first five blocks of ContextNet, and discard the rest - and we can do this directly from the config.**Note**: On any realistic dataset (say Librispeech) this step would hurt the model's accuracy significantly. It is being done only to reduce the time spent waiting for training to finish on Colab.
###Code
from omegaconf import OmegaConf, open_dict
config = OmegaConf.load("configs/contextnet_rnnt.yaml")
###Output
_____no_output_____
###Markdown
-----Here, we will slice off the first five blocks from the Jasper block (used to build ContextNet). Setting the config with this subset will create a stride 2x model with just five blocks.We will also explicitly state that the last block dimension must be obtained from `model.model_defaults.enc_hidden` inside the config.
###Code
config.model.encoder.jasper = config.model.encoder.jasper[:5]
config.model.encoder.jasper[-1].filters = '${model.model_defaults.enc_hidden}'
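# Optional sanity check: confirm that only five blocks remain after slicing
print("Number of encoder blocks :", len(config.model.encoder.jasper))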
###Output
_____no_output_____
###Markdown
-------Next, set up the data loaders of the config for the ContextNet model.
###Code
# print out the train and validation configs to know what needs to be changed
print(OmegaConf.to_yaml(config.model.train_ds))
###Output
_____no_output_____
###Markdown
-------We can note that the config here is nearly identical to the CTC ASR model configs! So let us take the same steps here to update the configs.
###Code
config.model.train_ds.manifest_filepath = TRAIN_MANIFEST
config.model.validation_ds.manifest_filepath = TEST_MANIFEST
config.model.test_ds.manifest_filepath = TEST_MANIFEST
###Output
_____no_output_____
###Markdown
------Next, we need to setup the tokenizer section of the config
###Code
print(OmegaConf.to_yaml(config.model.tokenizer))
config.model.tokenizer.dir = TOKENIZER
config.model.tokenizer.type = TOKENIZER_TYPE_CFG
###Output
_____no_output_____
###Markdown
------Now, we can update the optimization and augmentation for this dataset in order to converge to some reasonable score within a short training run.
###Code
print(OmegaConf.to_yaml(config.model.optim))
# Finally, let's remove logging of samples and the warmup since the dataset is small (similar to CTC models)
config.model.log_prediction = False
config.model.optim.sched.warmup_steps = None
###Output
_____no_output_____
###Markdown
------Next, we remove the spec augment that is provided by default for ContextNet. While additional augmentation would surely help training, it would require longer training to see significant benefits.
###Code
print(OmegaConf.to_yaml(config.model.spec_augment))
config.model.spec_augment.freq_masks = 0
config.model.spec_augment.time_masks = 0
###Output
_____no_output_____
###Markdown
------... We are now almost done! Most of the updates to a Transducer config are nearly the same as any CTC model. Fused Batch during training and evaluationWe discussed in the previous tutorial (Intro-to-Transducers) the significant memory cost of the Transducer Joint calculation during training. We also discussed that NeMo provides a simple yet effective method to nearly sidestep this limitation. We can now dive deeper into understanding what precisely NeMo's Transducer framework will do to alleviate this memory consumption issue.The following sub-cells are **voluntary** and valuable for understanding the cause, effect, and resolution of memory issues in Transducer models. The content can be skipped if one is familiar with the topic, and it is only required to use the `fused batch step`. Transducer Memory reduction with Fused Batch stepThe following few cells explain why memory is an issue when training Transducer models and how NeMo tackles the issue with its Fused Batch step.The material can be read for a thorough understanding, otherwise, it can be skipped. Diving deeper into the memory costs of Transducer Joint-------One of the significant limitations of Transducers is the exorbitant memory cost of computing the Joint module. The Joint module is comprised of two steps. 1) Projecting the Acoustic and Transcription feature dimensions to some standard hidden dimension (specified by `model.model_defaults.joint_hidden`)2) Projecting this intermediate hidden dimension to the final vocabulary space to obtain the transcription.Take the following example.**BS**=32 ; **T** (after **2x** stride) = 800, **U** (with character encoding) = 400-450 tokens, Vocabulary size **V** = 28 (26 alphabet chars, space and apostrophe). Let the hidden dimension of the Joint model be 640 (Most Google Transducer papers use hidden dimension of 640).$ Memory \, (Hidden, \, gb) = 32 \times 800 \times 450 \times 640 \times 4 = 29.49 $ gigabytes (4 bytes per float). $ Memory \, (Joint, \, gb) = 32 \times 800 \times 450 \times 28 \times 4 = 1.290 $ gigabytes (4 bytes per float)-----**NOTE**: This is just for the forward pass! We need to double this memory to store gradients! This much memory is also just for the Joint model **alone**. Far more memory is required for the Prediction model as well as the large Acoustic model itself and its gradients!Even with mixed precision, that's $\sim 30$ GB of GPU RAM for just 1 part of the network + its gradients.--------- Simple methods to reduce memory consumption------The easiest way to reduce memory consumption is to perform more downsampling in the acoustic model and use sub-word tokenization of the text to reduce the length of the target sequence.**BS**=32 ; **T** (after **8x** stride) = 200, **U** (with sub-word encoding) = 100-180 tokens, Vocabulary size **V** = 1024.$ Memory \, (Hidden, \, gb) = 32 \times 200 \times 150 \times 640 \times 4 = 2.45 $ gigabytes (4 bytes per float).$ Memory \, (Joint, \, gb) = 32 \times 200 \times 150 \times 1024 \times 4 = 3.93 $ gigabytes (4 bytes per float)-----Using Automatic Mixed Precision, we expend just around 6-7 GB of GPU RAM on the Joint + its gradient.The above memory cost is much more tractable - but we generally want larger and larger acoustic models. It is consistently the easiest way to improve transcription accuracy. So that means on a limited 32 GB GPU, we have to partition 7 GB just for the Joint and remaining memory allocated between Transcription + Acoustic Models. 
Fused Transcription-Joint-Loss-WER (also called Batch Splitting)----------The fundamental problem is that the joint tensor grows in size when `[T x U]` grows in size. This growth in memory cost is due to many reasons - either by model construction (downsampling) or the choice of dataset preprocessing (character tokenization vs. sub-word tokenization).Another dimension that NeMo can control is **batch**. Due to how we batch our samples, small and large samples all get clumped together into a single batch. So even though the individual samples are not all as long as the maximum length of T and U in that batch, when a batch of such samples is constructed, it will consume a significant amount of memory for the sake of compute efficiency.So as is always the case - **trade-off compute speed for memory savings**.------The fused operation goes as follows : 1) Forward the entire acoustic model in a single pass. (Use global batch size here for acoustic model - found in `model.*_ds.batch_size`)2) Split the Acoustic Model's logits by `fused_batch_size` and loop over these sub-batches.3) Construct a sub-batch of same `fused_batch_size` for the Prediction model. Now the target sequence length is $U_{sub-batch} < U$. 4) Feed this $U_{sub-batch}$ into the Joint model, along with a sub-batch from the Acoustic model (with $T_{sub-batch} < T$). Remember, we only have to slice off a part of the acoustic model here since we have the full batch of samples $(B, T, D)$ from the acoustic model.5) Performing steps (3) and (4) yields $T_{sub-batch}$ and $U_{sub-batch}$. Perform sub-batch joint step - costing an intermediate $(B, T_{sub-batch}, U_{sub-batch}, V)$ in memory.6) Compute loss on sub-batch and preserve in a list to be later concatenated. 7) Compute sub-batch metrics (such as Character / Word Error Rate) using the above Joint tensor and sub-batch of ground truth labels. Preserve the scores to be averaged across the entire batch later.8) Delete the sub-batch joint matrix $(B, T_{sub-batch}, U_{sub-batch}, V)$. Only gradients from .backward() are preserved now in the computation graph.9) Repeat steps (3) - (8) until all sub-batches are consumed.10) Cleanup step. Compute full batch WER and log. Concatenate loss list and pass to PTL to compute the equivalent of the original (full batch) Joint step. Delete ancillary objects necessary for sub-batching. Setting up Fused Batch step in a Transducer ConfigAfter all that discussion above, let us look at how to enable that entire pipeline in NeMo.As we can note below, it takes precisely two changes in the config to enable the fused batch step:
###Code
print(OmegaConf.to_yaml(config.model.joint))
# Two lines to enable the fused batch step
config.model.joint.fuse_loss_wer = True
config.model.joint.fused_batch_size = 16 # this can be any value (preferably less than model.*_ds.batch_size)
# We will also reduce the hidden dimension of the joint and the prediction networks to preserve some memory
config.model.model_defaults.pred_hidden = 64
config.model.model_defaults.joint_hidden = 64
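# Illustrative arithmetic only (not required for training): approximate size of the joint
# activation, using the example numbers from the memory discussion above
# (T=200 encoder frames, U=150 target tokens, V=1024 sub-word vocabulary, 4 bytes per float).
T, U, V, BYTES = 200, 150, 1024, 4
full_joint_gb = config.model.train_ds.batch_size * T * U * V * BYTES / 1e9
fused_joint_gb = config.model.joint.fused_batch_size * T * U * V * BYTES / 1e9
print(f"Joint activation without fusion : ~{full_joint_gb:.2f} GB")
print(f"Joint activation with fused sub-batches: ~{fused_joint_gb:.2f} GB")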
###Output
_____no_output_____
###Markdown
--------Finally, since the dataset is tiny, we do not need an enormous model (the default is roughly 40 M parameters!).
###Code
# Use just 128 filters across the model to speed up training and reduce parameter count
config.model.model_defaults.filters = 128
###Output
_____no_output_____
###Markdown
Initialize a Transducer ASR ModelFinally, let us create a Transducer model, which is as easy as changing a line of import if you already have a script to create CTC models. We will use a small model since the dataset is just 5 hours of speech. ------Setup a Pytorch Lightning Trainer:
###Code
import torch
from pytorch_lightning import Trainer
if torch.cuda.is_available():
gpus = 1
else:
gpus = 0
EPOCHS = 50
# Initialize a Trainer for the Transducer model
trainer = Trainer(gpus=gpus, max_epochs=EPOCHS,
checkpoint_callback=False, logger=False,
log_every_n_steps=5, check_val_every_n_epoch=10)
# Import the Transducer Model
import nemo.collections.asr as nemo_asr
# Build the model
model = nemo_asr.models.EncDecRNNTBPEModel(cfg=config.model, trainer=trainer)
model.summarize();
###Output
_____no_output_____
###Markdown
------We now have a Transducer model ready to be trained! (Optional) Partially loading pre-trained weights from another modelAn interesting point to note about Transducer models - the Acoustic model config (and therefore the Acoustic model itself) can be shared between CTC and Transducer models.This means that we can initialize the weights of a Transducer's Acoustic model with weights from a pre-trained CTC encoder model.------**Note**: This step is optional and not necessary at all to train a Transducer model. Below, we show the steps that we would take if we wanted to do this, however as the loaded model has different kernel sizes compared to the current model, the checkpoint cannot be loaded.
###Code
# Load a small CTC model
# ctc_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_256", map_location='cpu')
###Output
_____no_output_____
###Markdown
------Then load the state dict of the CTC model's encoder into the Transducer model's encoder.
###Code
# <<< NOTE: This is only for demonstration ! >>>
# Below cell will fail because the two models have incompatible kernel sizes in their Conv layers.
# <<< NOTE: Below cell is only shown to illustrate the method >>>
# model.encoder.load_state_dict(ctc_model.encoder.state_dict(), strict=True)
###Output
_____no_output_____
###Markdown
Training on AN4Now that the model is ready, we can finally train it!
###Code
# Prepare NeMo's Experiment manager to handle checkpoint saving and logging for us
from nemo.utils import exp_manager
# Environment variable generally used for multi-node multi-gpu training.
# In notebook environments, this flag is unnecessary and can cause logs of multiple training runs to overwrite each other.
os.environ.pop('NEMO_EXPM_VERSION', None)
exp_config = exp_manager.ExpManagerConfig(
exp_dir=f'experiments/',
name=f"Transducer-Model",
checkpoint_callback_params=exp_manager.CallbackParams(
monitor="val_wer",
mode="min",
always_save_nemo=True,
save_best_model=True,
),
)
exp_config = OmegaConf.structured(exp_config)
logdir = exp_manager.exp_manager(trainer, exp_config)
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir /content/experiments/Transducer-Model/
else:
print("To use TensorBoard, please use this notebook in a Google Colab environment.")
# Release resources prior to training
import gc
gc.collect()
if gpus > 0:
torch.cuda.empty_cache()
# Train the model
trainer.fit(model)
###Output
_____no_output_____
###Markdown
-------Let's check the final performance on the test set.
###Code
trainer.test(model)
###Output
_____no_output_____
###Markdown
------The model should obtain some score between 10-12% WER after 50 epochs of training. Quite a good score for just 50 epochs of training a tiny model! Note that these are greedy scores, yet they are pretty strong for such a short training run.We can further improve these scores by using the internal Prediction network to calculate beam scores. Changing the Decoding StrategyDuring training, for the sake of efficiency, we were using the `greedy_batch` decoding strategy. However, we might want to perform inference with another method - say, beam search.NeMo allows changing the decoding strategy easily after the model has been trained.
###Code
import copy
decoding_config = copy.deepcopy(config.model.decoding)
print(OmegaConf.to_yaml(decoding_config))
# Update the config for the decoding strategy
decoding_config.strategy = "alsd" # Options are `greedy`, `greedy_batch`, `beam`, `tsd` and `alsd`
decoding_config.beam.beam_size = 4 # Increase beam size for better scores, but it will take much longer for transcription !
# Finally update the model's decoding strategy !
model.change_decoding_strategy(decoding_config)
trainer.test(model)
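# Optional sketch: repeat the test loop for the other strategies listed above to compare the
# speed / accuracy trade-off (each run re-decodes the full test set, so this can take a while).
# for strategy in ["greedy_batch", "tsd", "alsd"]:
#     decoding_config.strategy = strategy
#     model.change_decoding_strategy(decoding_config)
#     trainer.test(model)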
###Output
_____no_output_____
###Markdown
------Here, we improved our scores significantly by using the `Alignment-Length Synchronous Decoding` beam search. Feel free to try the other algorithms and compare the speed-accuracy tradeoff! (Extra) Extracting Transducer Model Alignments Transducers are unique in the sense that for each timestep $t \le T$, they can emit multiple target tokens $u_t$. During training, this is represented as the $T \times U$ joint that maps to the vocabulary $V$. During inference, there is no need to compute the full joint $T \times U$. Instead, after the model predicts the `Transducer Blank` token at the current timestep $t$ while predicting the target token $u_t$, the model will move onto the next acoustic timestep $t + 1$. As such, we can obtain the diagonal alignment of the Transducer model per sample relatively simply.------**Note**: While alignments can be calculated for both greedy and beam search - it is non-trivial to incorporate this alignment information for beam decoding. Therefore NeMo only supports extracting alignments during greedy decoding. -----Restore model to greedy decoding for alignment calculation
###Code
decoding_config.strategy = "greedy_batch"
# Special flag which is generally disabled
# Instruct Greedy Decoders to preserve alignment information during autoregressive decoding
with open_dict(decoding_config):
decoding_config.preserve_alignments = True
model.change_decoding_strategy(decoding_config)
###Output
_____no_output_____
###Markdown
-------Set up a test data loader that we will use to obtain the alignments for a single batch.
###Code
test_dl = model.test_dataloader()
test_dl = iter(test_dl)
batch = next(test_dl)
device = torch.device('cuda' if gpus > 0 else 'cpu')
def rnnt_alignments(model, batch):
model = model.to(device)
encoded, encoded_len = model.forward(
input_signal=batch[0].to(device), input_signal_length=batch[1].to(device)
)
current_hypotheses = model.decoding.rnnt_decoder_predictions_tensor(
encoded, encoded_len, return_hypotheses=True
)
del encoded, encoded_len
# current hypothesis is a tuple of
# 1) best hypothesis
# 2) Sorted list of hypothesis (if using beam search); None otherwise
return current_hypotheses
# Get a batch of hypotheses, as well as a batch of all obtained hypotheses (if beam search is used)
hypotheses, all_hypotheses = rnnt_alignments(model, batch)
###Output
_____no_output_____
###Markdown
------Select a sample ID from within the batch to observe the alignment information contained in the Hypothesis.
###Code
# Select the sample ID from within the batch
SAMPLE_ID = 0
# Obtain the hypothesis for this sample, as well as some ground truth information about this sample
hypothesis = hypotheses[SAMPLE_ID]
original_sample_len = batch[1][SAMPLE_ID]
ground_truth = batch[2][SAMPLE_ID]
# The Hypothesis object contains a lot of useful information regarding the decoding step.
print(hypothesis)
###Output
_____no_output_____
###Markdown
-------Now, decode the hypothesis and compare it against the ground truth text. Note - this decoded hypothesis is at *sub-word* level for this model. Therefore sub-word tokens such as `_` may be seen here.
###Code
decoded_text = hypothesis.text
decoded_hypothesis = model.decoding.decode_ids_to_tokens(hypothesis.y_sequence.cpu().numpy().tolist())
decoded_ground_truth = model.decoding.tokenizer.ids_to_text(ground_truth.cpu().numpy().tolist())
print("Decoded ground truth :", decoded_ground_truth)
print("Decoded hypothesis :", decoded_text)
print("Decoded hyp tokens :", decoded_hypothesis)
###Output
_____no_output_____
###Markdown
---------Next we print out the 2-d alignment grid of the RNNT model:
###Code
alignments = hypothesis.alignments
# These two values should normally always match
print("Length of alignments (T): ", len(alignments))
print("Length of padded acoustic model after striding : ", int(hypothesis.length))
###Output
_____no_output_____
###Markdown
------Finally, let us calculate the alignment grid. We will de-tokenize the sub-word token if it is a valid index in the vocabulary and use `''` as a placeholder for the `Transducer Blank` token.Note that each `timestep` here is (roughly) $timestep * total\_stride\_of\_model * preprocessor.window\_stride$ seconds timestamp. **Note**: You can modify the value of `config.model.loss.warprnnt_numba_kwargs.fastemit_lambda` prior to training and see an impact on final alignment latency!
###Code
# Compute the alignment grid
for ti in range(len(alignments)):
t_u = []
for uj in range(len(alignments[ti])):
token = alignments[ti][uj]
token = token.to('cpu').numpy().tolist()
decoded_token = model.decoding.decode_ids_to_tokens([token])[0] if token != model.decoding.blank_id else '' # token at index len(vocab) == RNNT blank token
t_u.append(decoded_token)
print(f"Tokens at timestep {ti} = {t_u}")
###Output
_____no_output_____
###Markdown
------Download and prepare the two subsets of `AN4`
###Code
import wget
import tarfile
import subprocess
import glob
data_dir = "datasets"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
if os.path.exists(f"{data_dir}/an4"):
print("Preparing AN4 dataset ...")
an4_path = f"{data_dir}/"
!python scripts/process_an4_data.py \
--data_root=$an4_path
print("AN4 prepared !")
# Manifest filepaths
TRAIN_MANIFEST = os.path.join(data_dir, "an4", "train_manifest.json")
TEST_MANIFEST = os.path.join(data_dir, "an4", "test_manifest.json")
###Output
_____no_output_____
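###Markdown
------Before moving on, it can help to sanity-check the prepared manifests. The sketch below simply reads the first entry of the training manifest; it assumes the standard NeMo manifest format, where each line is a JSON object with `audio_filepath`, `duration` and `text` fields.
###Code
# Peek at the first entry of the training manifest (format assumed, see note above)
import json
with open(TRAIN_MANIFEST, 'r') as f:
    first_entry = json.loads(f.readline())
print(first_entry)
###Output
_____no_output_____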
###Markdown
Preparing the tokenizerNow that we have a dataset ready, we need to decide whether to use a character-based model or a sub-word-based model. For completeness' sake, we will use a tokenizer based model so that we can leverage a modern encoder architecture like ContextNet or Conformer-T.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
-----Since the dataset is tiny, we can use a small SentencePiece based tokenizer. We always delete the tokenizer directory so any changes to the manifest files are always replicated in the tokenizer.
###Code
VOCAB_SIZE = 32 # can be any value above 29
TOKENIZER_TYPE = "spe" # can be wpe or spe
SPE_TYPE = "unigram" # can be bpe or unigram
# ------------------------------------------------------------------- #
!rm -r tokenizers/
if not os.path.exists("tokenizers"):
os.makedirs("tokenizers")
!python scripts/process_asr_text_tokenizer.py \
--manifest=$TRAIN_MANIFEST \
--data_root="tokenizers" \
--tokenizer=$TOKENIZER_TYPE \
--spe_type=$SPE_TYPE \
--no_lower_case \
--log \
--vocab_size=$VOCAB_SIZE
# Tokenizer path
if TOKENIZER_TYPE == 'spe':
TOKENIZER = os.path.join("tokenizers", f"tokenizer_spe_{SPE_TYPE}_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "bpe"
else:
TOKENIZER = os.path.join("tokenizers", f"tokenizer_wpe_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "wpe"
###Output
_____no_output_____
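###Markdown
------If you would like to inspect the sub-word vocabulary that was just built, the tokenizer can be loaded directly with the `sentencepiece` library. This is an optional sketch and assumes the tokenizer script wrote a `tokenizer.model` file into the tokenizer directory.
###Code
# Optional: inspect the trained SentencePiece tokenizer (file name is an assumption)
import sentencepiece as spm
spm_model_path = os.path.join(TOKENIZER, "tokenizer.model")
if os.path.exists(spm_model_path):
    sp = spm.SentencePieceProcessor(model_file=spm_model_path)
    print("Vocabulary size :", sp.get_piece_size())
    print("Sample encoding :", sp.encode("HELLO WORLD", out_type=str))
else:
    print(f"Tokenizer model not found at {spm_model_path}")
###Output
_____no_output_____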
###Markdown
Preparing a Transducer ModelNow that we have the dataset and tokenizer prepared, let us begin by setting up the config of the Transducer model! In this tutorial, we will build a slightly modified ContextNet architecture (which is obtained from the paper [ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context](https://arxiv.org/abs/2005.03191)).We can note that many of the steps here are identical to the setup of a CTC model! Prepare the configFor a dataset such as AN4, we do not need such a deep model. In fact, the depth of this model will cause much slower convergence on a small dataset, which would require far too long to train on Colab.In order to speed up training for this demo, we will take only the first five blocks of ContextNet, and discard the rest - and we can do this directly from the config.**Note**: On any realistic dataset (say Librispeech) this step would hurt the model's accuracy significantly. It is being done only to reduce the time spent waiting for training to finish on Colab.
###Code
from omegaconf import OmegaConf, open_dict
config = OmegaConf.load("configs/contextnet_rnnt.yaml")
###Output
_____no_output_____
###Markdown
-----Here, we will slice off the first five blocks from the Jasper block (used to build ContextNet). Setting the config with this subset will create a stride 2x model with just five blocks.We will also explicitly state that the last block dimension must be obtained from `model.model_defaults.enc_hidden` inside the config.
###Code
config.model.encoder.jasper = config.model.encoder.jasper[:5]
config.model.encoder.jasper[-1].filters = '${model.model_defaults.enc_hidden}'
###Output
_____no_output_____
###Markdown
-------Next, set up the data loaders of the config for the ContextNet model.
###Code
# print out the train and validation configs to know what needs to be changed
print(OmegaConf.to_yaml(config.model.train_ds))
###Output
_____no_output_____
###Markdown
-------We can note that the config here is nearly identical to the CTC ASR model configs! So let us take the same steps here to update the configs.
###Code
config.model.train_ds.manifest_filepath = TRAIN_MANIFEST
config.model.validation_ds.manifest_filepath = TEST_MANIFEST
config.model.test_ds.manifest_filepath = TEST_MANIFEST
###Output
_____no_output_____
###Markdown
------Next, we need to setup the tokenizer section of the config
###Code
print(OmegaConf.to_yaml(config.model.tokenizer))
config.model.tokenizer.dir = TOKENIZER
config.model.tokenizer.type = TOKENIZER_TYPE_CFG
###Output
_____no_output_____
###Markdown
------Now, we can update the optimization and augmentation for this dataset in order to converge to some reasonable score within a short training run.
###Code
print(OmegaConf.to_yaml(config.model.optim))
# Finally, let's remove logging of samples and the warmup since the dataset is small (similar to CTC models)
config.model.log_prediction = False
config.model.optim.sched.warmup_steps = None
###Output
_____no_output_____
###Markdown
------Next, we remove the spec augment that is provided by default for ContextNet. While additional augmentation would surely help training, it would require longer training to see significant benefits.
###Code
print(OmegaConf.to_yaml(config.model.spec_augment))
config.model.spec_augment.freq_masks = 0
config.model.spec_augment.time_masks = 0
###Output
_____no_output_____
###Markdown
------... We are now almost done! Most of the updates to a Transducer config are nearly the same as any CTC model. Fused Batch during training and evaluationWe discussed in the previous tutorial (Intro-to-Transducers) the significant memory cost of the Transducer Joint calculation during training. We also discussed that NeMo provides a simple yet effective method to nearly sidestep this limitation. We can now dive deeper into understanding what precisely NeMo's Transducer framework will do to alleviate this memory consumption issue.The following sub-cells are **voluntary** and valuable for understanding the cause, effect, and resolution of memory issues in Transducer models. The content can be skipped if one is familiar with the topic, and it is only required to use the `fused batch step`. Transducer Memory reduction with Fused Batch stepThe following few cells explain why memory is an issue when training Transducer models and how NeMo tackles the issue with its Fused Batch step.The material can be read for a thorough understanding, otherwise, it can be skipped. Diving deeper into the memory costs of Transducer Joint-------One of the significant limitations of Transducers is the exorbitant memory cost of computing the Joint module. The Joint module is comprised of two steps. 1) Projecting the Acoustic and Transcription feature dimensions to some standard hidden dimension (specified by `model.model_defaults.joint_hidden`)2) Projecting this intermediate hidden dimension to the final vocabulary space to obtain the transcription.Take the following example.**BS**=32 ; **T** (after **2x** stride) = 800, **U** (with character encoding) = 400-450 tokens, Vocabulary size **V** = 28 (26 alphabet chars, space and apostrophe). Let the hidden dimension of the Joint model be 640 (Most Google Transducer papers use hidden dimension of 640).$ Memory \, (Hidden, \, gb) = 32 \times 800 \times 450 \times 640 \times 4 = 29.49 $ gigabytes (4 bytes per float). $ Memory \, (Joint, \, gb) = 32 \times 800 \times 450 \times 28 \times 4 = 1.290 $ gigabytes (4 bytes per float)-----**NOTE**: This is just for the forward pass! We need to double this memory to store gradients! This much memory is also just for the Joint model **alone**. Far more memory is required for the Prediction model as well as the large Acoustic model itself and its gradients!Even with mixed precision, that's $\sim 30$ GB of GPU RAM for just 1 part of the network + its gradients.--------- Simple methods to reduce memory consumption------The easiest way to reduce memory consumption is to perform more downsampling in the acoustic model and use sub-word tokenization of the text to reduce the length of the target sequence.**BS**=32 ; **T** (after **8x** stride) = 200, **U** (with sub-word encoding) = 100-180 tokens, Vocabulary size **V** = 1024.$ Memory \, (Hidden, \, gb) = 32 \times 200 \times 150 \times 640 \times 4 = 2.45 $ gigabytes (4 bytes per float).$ Memory \, (Joint, \, gb) = 32 \times 200 \times 150 \times 1024 \times 4 = 3.93 $ gigabytes (4 bytes per float)-----Using Automatic Mixed Precision, we expend just around 6-7 GB of GPU RAM on the Joint + its gradient.The above memory cost is much more tractable - but we generally want larger and larger acoustic models. It is consistently the easiest way to improve transcription accuracy. So that means on a limited 32 GB GPU, we have to partition 7 GB just for the Joint and remaining memory allocated between Transcription + Acoustic Models. 
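------The memory figures above can be reproduced with a few lines of arithmetic. The helper below only illustrates the formula $B \times T \times U \times H \times 4$ bytes; the numbers plugged in are the ones from the worked example above, not measured values.
###Code
# Rough estimate of the Joint activation memory (forward pass only, fp32)
def joint_memory_gb(batch, T, U, hidden, bytes_per_float=4):
    return batch * T * U * hidden * bytes_per_float / 1e9

# Character-level example from above: 2x stride, hidden dim 640
print("Hidden projection :", round(joint_memory_gb(32, 800, 450, 640), 2), "GB")
print("Vocab projection  :", round(joint_memory_gb(32, 800, 450, 28), 2), "GB")

# Sub-word example from above: 8x stride, vocab 1024
print("Hidden projection :", round(joint_memory_gb(32, 200, 150, 640), 2), "GB")
print("Vocab projection  :", round(joint_memory_gb(32, 200, 150, 1024), 2), "GB")
###Output
_____no_output_____
###Markdown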
Fused Transcription-Joint-Loss-WER (also called Batch Splitting)----------The fundamental problem is that the joint tensor grows in size when `[T x U]` grows in size. This growth in memory cost is due to many reasons - either by model construction (downsampling) or the choice of dataset preprocessing (character tokenization vs. sub-word tokenization).Another dimension that NeMo can control is **batch**. Due to how we batch our samples, small and large samples all get clumped together into a single batch. So even though the individual samples are not all as long as the maximum length of T and U in that batch, when a batch of such samples is constructed, it will consume a significant amount of memory for the sake of compute efficiency.So as is always the case - **trade-off compute speed for memory savings**.------The fused operation goes as follows : 1) Forward the entire acoustic model in a single pass. (Use global batch size here for acoustic model - found in `model.*_ds.batch_size`)2) Split the Acoustic Model's logits by `fused_batch_size` and loop over these sub-batches.3) Construct a sub-batch of same `fused_batch_size` for the Prediction model. Now the target sequence length is $U_{sub-batch} < U$. 4) Feed this $U_{sub-batch}$ into the Joint model, along with a sub-batch from the Acoustic model (with $T_{sub-batch} < T$). Remember, we only have to slice off a part of the acoustic model here since we have the full batch of samples $(B, T, D)$ from the acoustic model.5) Performing steps (3) and (4) yields $T_{sub-batch}$ and $U_{sub-batch}$. Perform sub-batch joint step - costing an intermediate $(B, T_{sub-batch}, U_{sub-batch}, V)$ in memory.6) Compute loss on sub-batch and preserve in a list to be later concatenated. 7) Compute sub-batch metrics (such as Character / Word Error Rate) using the above Joint tensor and sub-batch of ground truth labels. Preserve the scores to be averaged across the entire batch later.8) Delete the sub-batch joint matrix $(B, T_{sub-batch}, U_{sub-batch}, V)$. Only gradients from .backward() are preserved now in the computation graph.9) Repeat steps (3) - (8) until all sub-batches are consumed.10) Cleanup step. Compute full batch WER and log. Concatenate loss list and pass to PTL to compute the equivalent of the original (full batch) Joint step. Delete ancillary objects necessary for sub-batching. Setting up Fused Batch step in a Transducer ConfigAfter all that discussion above, let us look at how to enable that entire pipeline in NeMo.As we can note below, it takes precisely two changes in the config to enable the fused batch step:
###Code
print(OmegaConf.to_yaml(config.model.joint))
# Two lines to enable the fused batch step
config.model.joint.fuse_loss_wer = True
config.model.joint.fused_batch_size = 16 # this can be any value (preferably less than model.*_ds.batch_size)
# We will also reduce the hidden dimension of the joint and the prediction networks to preserve some memory
config.model.model_defaults.pred_hidden = 64
config.model.model_defaults.joint_hidden = 64
###Output
_____no_output_____
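###Markdown
--------Conceptually, the fused step described above amounts to looping over slices of the encoder output and target tensors. The snippet below is **not** NeMo's implementation - it is a simplified, framework-agnostic sketch of the batch-splitting idea, with hypothetical `joint()`, `loss()` and `wer()` callables standing in for the real modules.
###Code
# Illustrative sketch of batch splitting (hypothetical helpers, not NeMo internals)
def fused_joint_step(encoder_out, targets, fused_batch_size, joint, loss, wer):
    losses, wers = [], []
    batch_size = encoder_out.shape[0]
    for start in range(0, batch_size, fused_batch_size):
        end = start + fused_batch_size
        # Slice a sub-batch of acoustic features and targets
        sub_enc = encoder_out[start:end]
        sub_tgt = targets[start:end]
        # The joint tensor only ever holds (sub_batch, T, U, V) at a time
        sub_joint = joint(sub_enc, sub_tgt)
        losses.append(loss(sub_joint, sub_tgt))
        wers.append(wer(sub_joint, sub_tgt))
        del sub_joint  # free the large intermediate before the next sub-batch
    return losses, wers
###Output
_____no_output_____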
###Markdown
--------Finally, since the dataset is tiny, we do not need an enormous model (the default is roughly 40 M parameters!).
###Code
# Use just 128 filters across the model to speed up training and reduce parameter count
config.model.model_defaults.filters = 128
###Output
_____no_output_____
###Markdown
Initialize a Transducer ASR ModelFinally, let us create a Transducer model, which is as easy as changing a line of import if you already have a script to create CTC models. We will use a small model since the dataset is just 5 hours of speech. ------Setup a Pytorch Lightning Trainer:
###Code
import torch
from pytorch_lightning import Trainer
if torch.cuda.is_available():
    accelerator = 'gpu'
else:
    accelerator = 'cpu'
EPOCHS = 50
# Initialize a Trainer for the Transducer model
trainer = Trainer(devices=1, accelerator=accelerator, max_epochs=EPOCHS,
enable_checkpointing=False, logger=False,
log_every_n_steps=5, check_val_every_n_epoch=10)
# Import the Transducer Model
import nemo.collections.asr as nemo_asr
# Build the model
model = nemo_asr.models.EncDecRNNTBPEModel(cfg=config.model, trainer=trainer)
model.summarize();
###Output
_____no_output_____
###Markdown
------We now have a Transducer model ready to be trained! (Optional) Partially loading pre-trained weights from another modelAn interesting point to note about Transducer models - the Acoustic model config (and therefore the Acoustic model itself) can be shared between CTC and Transducer models.This means that we can initialize the weights of a Transducer's Acoustic model with weights from a pre-trained CTC encoder model.------**Note**: This step is optional and not necessary at all to train a Transducer model. Below, we show the steps that we would take if we wanted to do this, however as the loaded model has different kernel sizes compared to the current model, the checkpoint cannot be loaded.
###Code
# Load a small CTC model
# ctc_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_256", map_location='cpu')
###Output
_____no_output_____
###Markdown
------Then load the state dict of the CTC model's encoder into the Transducer model's encoder.
###Code
# <<< NOTE: This is only for demonstration ! >>>
# Below cell will fail because the two models have incompatible kernel sizes in their Conv layers.
# <<< NOTE: Below cell is only shown to illustrate the method >>>
# model.encoder.load_state_dict(ctc_model.encoder.state_dict(), strict=True)
###Output
_____no_output_____
###Markdown
Training on AN4Now that the model is ready, we can finally train it!
###Code
# Prepare NeMo's Experiment manager to handle checkpoint saving and logging for us
from nemo.utils import exp_manager
# Environment variable generally used for multi-node multi-gpu training.
# In notebook environments, this flag is unnecessary and can cause logs of multiple training runs to overwrite each other.
os.environ.pop('NEMO_EXPM_VERSION', None)
exp_config = exp_manager.ExpManagerConfig(
exp_dir=f'experiments/',
name=f"Transducer-Model",
checkpoint_callback_params=exp_manager.CallbackParams(
monitor="val_wer",
mode="min",
always_save_nemo=True,
save_best_model=True,
),
)
exp_config = OmegaConf.structured(exp_config)
logdir = exp_manager.exp_manager(trainer, exp_config)
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir /content/experiments/Transducer-Model/
else:
print("To use TensorBoard, please use this notebook in a Google Colab environment.")
# Release resources prior to training
import gc
gc.collect()
if accelerator == 'gpu':
torch.cuda.empty_cache()
# Train the model
trainer.fit(model)
###Output
_____no_output_____
###Markdown
-------Let's check the final performance on the test set.
###Code
trainer.test(model)
###Output
_____no_output_____
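###Markdown
-------The test step above reports the word error rate (WER). As a reminder of what that metric measures, here is a minimal, self-contained implementation of word-level edit distance - it is purely illustrative and is not the metric implementation NeMo uses internally.
###Code
# Minimal WER sketch: word-level Levenshtein distance / number of reference words
def simple_wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[len(ref)][len(hyp)] / max(1, len(ref))

print(simple_wer("one two three four", "one too three"))  # 0.5 -> one substitution + one deletion
###Output
_____no_output_____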
###Markdown
------The model should obtain some score between 10-12% WER after 50 epochs of training. Quite a good score for just 50 epochs of training a tiny model! Note that these are greedy scores, yet they are pretty strong for such a short training run.We can further improve these scores by using the internal Prediction network to calculate beam scores. Changing the Decoding StrategyDuring training, for the sake of efficiency, we were using the `greedy_batch` decoding strategy. However, we might want to perform inference with another method - say, beam search.NeMo allows changing the decoding strategy easily after the model has been trained.
###Code
import copy
decoding_config = copy.deepcopy(config.model.decoding)
print(OmegaConf.to_yaml(decoding_config))
# Update the config for the decoding strategy
decoding_config.strategy = "alsd" # Options are `greedy`, `greedy_batch`, `beam`, `tsd` and `alsd`
decoding_config.beam.beam_size = 4 # Increase beam size for better scores, but it will take much longer for transcription !
# Finally update the model's decoding strategy !
model.change_decoding_strategy(decoding_config)
trainer.test(model)
###Output
_____no_output_____
###Markdown
------Here, we improved our scores significantly by using the `Alignment-Length Synchronous Decoding` beam search. Feel free to try the other algorithms and compare the speed-accuracy tradeoff! (Extra) Extracting Transducer Model Alignments Transducers are unique in the sense that for each timestep $t \le T$, they can emit multiple target tokens $u_t$. During training, this is represented as the $T \times U$ joint that maps to the vocabulary $V$. During inference, there is no need to compute the full joint $T \times U$. Instead, after the model predicts the `Transducer Blank` token at the current timestep $t$ while predicting the target token $u_t$, the model will move onto the next acoustic timestep $t + 1$. As such, we can obtain the diagonal alignment of the Transducer model per sample relatively simply.------**Note**: While alignments can be calculated for both greedy and beam search - it is non-trivial to incorporate this alignment information for beam decoding. Therefore NeMo only supports extracting alignments during greedy decoding. -----Restore model to greedy decoding for alignment calculation
###Code
decoding_config.strategy = "greedy_batch"
# Special flag which is generally disabled
# Instruct Greedy Decoders to preserve alignment information during autoregressive decoding
with open_dict(decoding_config):
decoding_config.preserve_alignments = True
model.change_decoding_strategy(decoding_config)
###Output
_____no_output_____
###Markdown
-------Set up a test data loader that we will use to obtain the alignments for a single batch.
###Code
test_dl = model.test_dataloader()
test_dl = iter(test_dl)
batch = next(test_dl)
device = torch.device('cuda' if accelerator == 'gpu' else 'cpu')
def rnnt_alignments(model, batch):
model = model.to(device)
encoded, encoded_len = model.forward(
input_signal=batch[0].to(device), input_signal_length=batch[1].to(device)
)
current_hypotheses = model.decoding.rnnt_decoder_predictions_tensor(
encoded, encoded_len, return_hypotheses=True
)
del encoded, encoded_len
    # current_hypotheses is a tuple of:
    # 1) the best hypothesis per sample
    # 2) a sorted list of all hypotheses per sample (if using beam search); None otherwise
return current_hypotheses
# Get a batch of hypotheses, as well as a batch of all obtained hypotheses (if beam search is used)
hypotheses, all_hypotheses = rnnt_alignments(model, batch)
###Output
_____no_output_____
###Markdown
------Select a sample ID from within the batch to observe the alignment information contained in the Hypothesis.
###Code
# Select the sample ID from within the batch
SAMPLE_ID = 0
# Obtain the hypothesis for this sample, as well as some ground truth information about this sample
hypothesis = hypotheses[SAMPLE_ID]
original_sample_len = batch[1][SAMPLE_ID]
ground_truth = batch[2][SAMPLE_ID]
# The Hypothesis object contains a lot of useful information regarding the decoding step.
print(hypothesis)
###Output
_____no_output_____
###Markdown
-------Now, decode the hypothesis and compare it against the ground truth text. Note - this decoded hypothesis is at *sub-word* level for this model. Therefore sub-word tokens such as `_` may be seen here.
###Code
decoded_text = hypothesis.text
decoded_hypothesis = model.decoding.decode_ids_to_tokens(hypothesis.y_sequence.cpu().numpy().tolist())
decoded_ground_truth = model.decoding.tokenizer.ids_to_text(ground_truth.cpu().numpy().tolist())
print("Decoded ground truth :", decoded_ground_truth)
print("Decoded hypothesis :", decoded_text)
print("Decoded hyp tokens :", decoded_hypothesis)
###Output
_____no_output_____
###Markdown
---------Next we print out the 2-d alignment grid of the RNNT model:
###Code
alignments = hypothesis.alignments
# These two values should normally always match
print("Length of alignments (T): ", len(alignments))
print("Length of padded acoustic model after striding : ", int(hypothesis.length))
###Output
_____no_output_____
###Markdown
------Finally, let us calculate the alignment grid. We will de-tokenize the sub-word token if it is a valid index in the vocabulary and use `''` as a placeholder for the `Transducer Blank` token. Note that each `timestep` here corresponds (roughly) to a timestamp of $timestep \times total\_stride\_of\_model \times preprocessor.window\_stride$ seconds. **Note**: You can modify the value of `config.model.loss.warprnnt_numba_kwargs.fastemit_lambda` prior to training and see an impact on final alignment latency!
###Code
# Compute the alignment grid
for ti in range(len(alignments)):
t_u = []
for uj in range(len(alignments[ti])):
token = alignments[ti][uj]
token = token.to('cpu').numpy().tolist()
decoded_token = model.decoding.decode_ids_to_tokens([token])[0] if token != model.decoding.blank_id else '' # token at index len(vocab) == RNNT blank token
t_u.append(decoded_token)
print(f"Tokens at timestep {ti} = {t_u}")
###Output
_____no_output_____
###Markdown
Automatic Speech Recognition with Transducer ModelsThis notebook is a basic tutorial for creating a Transducer ASR model and then training it on a small dataset (AN4). It includes discussion relevant to reducing memory issues when training such models and demonstrates how to change the decoding strategy after training. Finally, it also provides a brief glimpse of extracting alignment information from a trained Transducer model.As we will see in this tutorial, apart from the differences in the config and the class used to instantiate the model, nearly all steps are precisely similar to any CTC-based model training. Many concepts such as data loader setup, optimization setup, pre-trained checkpoint weight loading will be nearly identical between CTC and Transducer models.In essence, NeMo makes it seamless to take a config for a CTC ASR model, add in a few components related to Transducers (often without any modifications) and use a different class to instantiate a Transducer model!--------**Note**: It is assumed that the previous tutorial - "Intro-to-Transducers" has been reviewed, and there is some familiarity with the config components of transducer models. Preparing the datasetIn this tutorial, we will be utilizing the `AN4`dataset - also known as the Alphanumeric dataset, which was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly. Let's first download the preparation script from NeMo's scripts directory -
###Code
import os
if not os.path.exists("scripts/"):
os.makedirs("scripts")
if not os.path.exists("scripts/process_an4_data.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_an4_data.py
###Output
_____no_output_____
###Markdown
------Download and prepare the two subsets of `AN4`
###Code
import wget
import tarfile
import subprocess
import glob
data_dir = "datasets"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'https://dldata-public.s3.us-east-2.amazonaws.com/an4_sphere.tar.gz' # for the original source, please visit http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
if os.path.exists(f"{data_dir}/an4"):
print("Preparing AN4 dataset ...")
an4_path = f"{data_dir}/"
!python scripts/process_an4_data.py \
--data_root=$an4_path
print("AN4 prepared !")
# Manifest filepaths
TRAIN_MANIFEST = os.path.join(data_dir, "an4", "train_manifest.json")
TEST_MANIFEST = os.path.join(data_dir, "an4", "test_manifest.json")
###Output
_____no_output_____
###Markdown
Preparing the tokenizerNow that we have a dataset ready, we need to decide whether to use a character-based model or a sub-word-based model. For completeness' sake, we will use a tokenizer based model so that we can leverage a modern encoder architecture like ContextNet or Conformer-T.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
-----Since the dataset is tiny, we can use a small SentencePiece based tokenizer. We always delete the tokenizer directory so any changes to the manifest files are always replicated in the tokenizer.
###Code
VOCAB_SIZE = 32 # can be any value above 29
TOKENIZER_TYPE = "spe" # can be wpe or spe
SPE_TYPE = "unigram" # can be bpe or unigram
# ------------------------------------------------------------------- #
!rm -r tokenizers/
if not os.path.exists("tokenizers"):
os.makedirs("tokenizers")
!python scripts/process_asr_text_tokenizer.py \
--manifest=$TRAIN_MANIFEST \
--data_root="tokenizers" \
--tokenizer=$TOKENIZER_TYPE \
--spe_type=$SPE_TYPE \
--no_lower_case \
--log \
--vocab_size=$VOCAB_SIZE
# Tokenizer path
if TOKENIZER_TYPE == 'spe':
TOKENIZER = os.path.join("tokenizers", f"tokenizer_spe_{SPE_TYPE}_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "bpe"
else:
TOKENIZER = os.path.join("tokenizers", f"tokenizer_wpe_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "wpe"
###Output
_____no_output_____
###Markdown
Preparing a Transducer ModelNow that we have the dataset and tokenizer prepared, let us begin by setting up the config of the Transducer model! In this tutorial, we will build a slightly modified ContextNet architecture (which is obtained from the paper [ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context](https://arxiv.org/abs/2005.03191)).We can note that many of the steps here are identical to the setup of a CTC model! Prepare the configFor a dataset such as AN4, we do not need such a deep model. In fact, the depth of this model will cause much slower convergence on a small dataset, which would require far too long to train on Colab.In order to speed up training for this demo, we will take only the first five blocks of ContextNet, and discard the rest - and we can do this directly from the config.**Note**: On any realistic dataset (say Librispeech) this step would hurt the model's accuracy significantly. It is being done only to reduce the time spent waiting for training to finish on Colab.
###Code
from omegaconf import OmegaConf, open_dict
config = OmegaConf.load("configs/contextnet_rnnt.yaml")
###Output
_____no_output_____
###Markdown
-----Here, we will slice off the first five blocks from the Jasper block (used to build ContextNet). Setting the config with this subset will create a stride 2x model with just five blocks.We will also explicitly state that the last block dimension must be obtained from `model.model_defaults.enc_hidden` inside the config.
###Code
config.model.encoder.jasper = config.model.encoder.jasper[:5]
config.model.encoder.jasper[-1].filters = '${model.model_defaults.enc_hidden}'
###Output
_____no_output_____
###Markdown
-------Next, set up the data loaders of the config for the ContextNet model.
###Code
# print out the train and validation configs to know what needs to be changed
print(OmegaConf.to_yaml(config.model.train_ds))
###Output
_____no_output_____
###Markdown
-------We can note that the config here is nearly identical to the CTC ASR model configs! So let us take the same steps here to update the configs.
###Code
config.model.train_ds.manifest_filepath = TRAIN_MANIFEST
config.model.validation_ds.manifest_filepath = TEST_MANIFEST
config.model.test_ds.manifest_filepath = TEST_MANIFEST
###Output
_____no_output_____
###Markdown
------Next, we need to setup the tokenizer section of the config
###Code
print(OmegaConf.to_yaml(config.model.tokenizer))
config.model.tokenizer.dir = TOKENIZER
config.model.tokenizer.type = TOKENIZER_TYPE_CFG
###Output
_____no_output_____
###Markdown
------Now, we can update the optimization and augmentation for this dataset in order to converge to some reasonable score within a short training run.
###Code
print(OmegaConf.to_yaml(config.model.optim))
# Finally, let's remove logging of samples and the warmup since the dataset is small (similar to CTC models)
config.model.log_prediction = False
config.model.optim.sched.warmup_steps = None
###Output
_____no_output_____
###Markdown
------Next, we remove the spec augment that is provided by default for ContextNet. While additional augmentation would surely help training, it would require longer training to see significant benefits.
###Code
print(OmegaConf.to_yaml(config.model.spec_augment))
config.model.spec_augment.freq_masks = 0
config.model.spec_augment.time_masks = 0
###Output
_____no_output_____
###Markdown
------... We are now almost done! Most of the updates to a Transducer config are nearly the same as any CTC model. Fused Batch during training and evaluationWe discussed in the previous tutorial (Intro-to-Transducers) the significant memory cost of the Transducer Joint calculation during training. We also discussed that NeMo provides a simple yet effective method to nearly sidestep this limitation. We can now dive deeper into understanding what precisely NeMo's Transducer framework will do to alleviate this memory consumption issue.The following sub-cells are **voluntary** and valuable for understanding the cause, effect, and resolution of memory issues in Transducer models. The content can be skipped if one is familiar with the topic, and it is only required to use the `fused batch step`. Transducer Memory reduction with Fused Batch stepThe following few cells explain why memory is an issue when training Transducer models and how NeMo tackles the issue with its Fused Batch step.The material can be read for a thorough understanding, otherwise, it can be skipped. Diving deeper into the memory costs of Transducer Joint-------One of the significant limitations of Transducers is the exorbitant memory cost of computing the Joint module. The Joint module is comprised of two steps. 1) Projecting the Acoustic and Transcription feature dimensions to some standard hidden dimension (specified by `model.model_defaults.joint_hidden`)2) Projecting this intermediate hidden dimension to the final vocabulary space to obtain the transcription.Take the following example.**BS**=32 ; **T** (after **2x** stride) = 800, **U** (with character encoding) = 400-450 tokens, Vocabulary size **V** = 28 (26 alphabet chars, space and apostrophe). Let the hidden dimension of the Joint model be 640 (Most Google Transducer papers use hidden dimension of 640).$ Memory \, (Hidden, \, gb) = 32 \times 800 \times 450 \times 640 \times 4 = 29.49 $ gigabytes (4 bytes per float). $ Memory \, (Joint, \, gb) = 32 \times 800 \times 450 \times 28 \times 4 = 1.290 $ gigabytes (4 bytes per float)-----**NOTE**: This is just for the forward pass! We need to double this memory to store gradients! This much memory is also just for the Joint model **alone**. Far more memory is required for the Prediction model as well as the large Acoustic model itself and its gradients!Even with mixed precision, that's $\sim 30$ GB of GPU RAM for just 1 part of the network + its gradients.--------- Simple methods to reduce memory consumption------The easiest way to reduce memory consumption is to perform more downsampling in the acoustic model and use sub-word tokenization of the text to reduce the length of the target sequence.**BS**=32 ; **T** (after **8x** stride) = 200, **U** (with sub-word encoding) = 100-180 tokens, Vocabulary size **V** = 1024.$ Memory \, (Hidden, \, gb) = 32 \times 200 \times 150 \times 640 \times 4 = 2.45 $ gigabytes (4 bytes per float).$ Memory \, (Joint, \, gb) = 32 \times 200 \times 150 \times 1024 \times 4 = 3.93 $ gigabytes (4 bytes per float)-----Using Automatic Mixed Precision, we expend just around 6-7 GB of GPU RAM on the Joint + its gradient.The above memory cost is much more tractable - but we generally want larger and larger acoustic models. It is consistently the easiest way to improve transcription accuracy. So that means on a limited 32 GB GPU, we have to partition 7 GB just for the Joint and remaining memory allocated between Transcription + Acoustic Models. 
Fused Transcription-Joint-Loss-WER (also called Batch Splitting)----------The fundamental problem is that the joint tensor grows in size when `[T x U]` grows in size. This growth in memory cost is due to many reasons - either by model construction (downsampling) or the choice of dataset preprocessing (character tokenization vs. sub-word tokenization).Another dimension that NeMo can control is **batch**. Due to how we batch our samples, small and large samples all get clumped together into a single batch. So even though the individual samples are not all as long as the maximum length of T and U in that batch, when a batch of such samples is constructed, it will consume a significant amount of memory for the sake of compute efficiency.So as is always the case - **trade-off compute speed for memory savings**.------The fused operation goes as follows : 1) Forward the entire acoustic model in a single pass. (Use global batch size here for acoustic model - found in `model.*_ds.batch_size`)2) Split the Acoustic Model's logits by `fused_batch_size` and loop over these sub-batches.3) Construct a sub-batch of same `fused_batch_size` for the Prediction model. Now the target sequence length is $U_{sub-batch} < U$. 4) Feed this $U_{sub-batch}$ into the Joint model, along with a sub-batch from the Acoustic model (with $T_{sub-batch} < T$). Remember, we only have to slice off a part of the acoustic model here since we have the full batch of samples $(B, T, D)$ from the acoustic model.5) Performing steps (3) and (4) yields $T_{sub-batch}$ and $U_{sub-batch}$. Perform sub-batch joint step - costing an intermediate $(B, T_{sub-batch}, U_{sub-batch}, V)$ in memory.6) Compute loss on sub-batch and preserve in a list to be later concatenated. 7) Compute sub-batch metrics (such as Character / Word Error Rate) using the above Joint tensor and sub-batch of ground truth labels. Preserve the scores to be averaged across the entire batch later.8) Delete the sub-batch joint matrix $(B, T_{sub-batch}, U_{sub-batch}, V)$. Only gradients from .backward() are preserved now in the computation graph.9) Repeat steps (3) - (8) until all sub-batches are consumed.10) Cleanup step. Compute full batch WER and log. Concatenate loss list and pass to PTL to compute the equivalent of the original (full batch) Joint step. Delete ancillary objects necessary for sub-batching. Setting up Fused Batch step in a Transducer ConfigAfter all that discussion above, let us look at how to enable that entire pipeline in NeMo.As we can note below, it takes precisely two changes in the config to enable the fused batch step:
###Code
print(OmegaConf.to_yaml(config.model.joint))
# Two lines to enable the fused batch step
config.model.joint.fuse_loss_wer = True
config.model.joint.fused_batch_size = 16 # this can be any value (preferably less than model.*_ds.batch_size)
# We will also reduce the hidden dimension of the joint and the prediction networks to preserve some memory
config.model.model_defaults.pred_hidden = 64
config.model.model_defaults.joint_hidden = 64
###Output
_____no_output_____
###Markdown
--------Finally, since the dataset is tiny, we do not need an enormous model (the default is roughly 40 M parameters!).
###Code
# Use just 128 filters across the model to speed up training and reduce parameter count
config.model.model_defaults.filters = 128
###Output
_____no_output_____
###Markdown
Initialize a Transducer ASR ModelFinally, let us create a Transducer model, which is as easy as changing a line of import if you already have a script to create CTC models. We will use a small model since the dataset is just 5 hours of speech. ------Setup a Pytorch Lightning Trainer:
###Code
import torch
from pytorch_lightning import Trainer
if torch.cuda.is_available():
    accelerator = 'gpu'
else:
    accelerator = 'cpu'
EPOCHS = 50
# Initialize a Trainer for the Transducer model
trainer = Trainer(devices=1, accelerator=accelerator, max_epochs=EPOCHS,
enable_checkpointing=False, logger=False,
log_every_n_steps=5, check_val_every_n_epoch=10)
# Import the Transducer Model
import nemo.collections.asr as nemo_asr
# Build the model
model = nemo_asr.models.EncDecRNNTBPEModel(cfg=config.model, trainer=trainer)
model.summarize();
###Output
_____no_output_____
###Markdown
------We now have a Transducer model ready to be trained! (Optional) Partially loading pre-trained weights from another modelAn interesting point to note about Transducer models - the Acoustic model config (and therefore the Acoustic model itself) can be shared between CTC and Transducer models.This means that we can initialize the weights of a Transducer's Acoustic model with weights from a pre-trained CTC encoder model.------**Note**: This step is optional and not necessary at all to train a Transducer model. Below, we show the steps that we would take if we wanted to do this, however as the loaded model has different kernel sizes compared to the current model, the checkpoint cannot be loaded.
###Code
# Load a small CTC model
# ctc_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_256", map_location='cpu')
###Output
_____no_output_____
###Markdown
------Then load the state dict of the CTC model's encoder into the Transducer model's encoder.
###Code
# <<< NOTE: This is only for demonstration ! >>>
# Below cell will fail because the two models have incompatible kernel sizes in their Conv layers.
# <<< NOTE: Below cell is only shown to illustrate the method >>>
# model.encoder.load_state_dict(ctc_model.encoder.state_dict(), strict=True)
###Output
_____no_output_____
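###Markdown
------If the two encoders did share compatible layer shapes, one common pattern is to copy only the parameters whose names and shapes match, and skip the rest. The cell below is a generic PyTorch sketch of that idea; like the cells above, it is left commented out because the CTC model is not actually downloaded in this run.
###Code
# Generic partial weight loading sketch - copies only matching parameter tensors
# def partial_load(target_module, source_state_dict):
#     target_state = target_module.state_dict()
#     compatible = {
#         name: tensor
#         for name, tensor in source_state_dict.items()
#         if name in target_state and target_state[name].shape == tensor.shape
#     }
#     target_state.update(compatible)
#     target_module.load_state_dict(target_state)
#     print(f"Copied {len(compatible)} / {len(target_state)} tensors")
# partial_load(model.encoder, ctc_model.encoder.state_dict())
###Output
_____no_output_____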
###Markdown
Training on AN4Now that the model is ready, we can finally train it!
###Code
# Prepare NeMo's Experiment manager to handle checkpoint saving and logging for us
from nemo.utils import exp_manager
# Environment variable generally used for multi-node multi-gpu training.
# In notebook environments, this flag is unnecessary and can cause logs of multiple training runs to overwrite each other.
os.environ.pop('NEMO_EXPM_VERSION', None)
exp_config = exp_manager.ExpManagerConfig(
exp_dir=f'experiments/',
name=f"Transducer-Model",
checkpoint_callback_params=exp_manager.CallbackParams(
monitor="val_wer",
mode="min",
always_save_nemo=True,
save_best_model=True,
),
)
exp_config = OmegaConf.structured(exp_config)
logdir = exp_manager.exp_manager(trainer, exp_config)
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir /content/experiments/Transducer-Model/
else:
print("To use TensorBoard, please use this notebook in a Google Colab environment.")
# Release resources prior to training
import gc
gc.collect()
if accelerator == 'gpu':
torch.cuda.empty_cache()
# Train the model
trainer.fit(model)
###Output
_____no_output_____
###Markdown
-------Let's check the final performance on the test set.
###Code
trainer.test(model)
###Output
_____no_output_____
###Markdown
------The model should obtain some score between 10-12% WER after 50 epochs of training. Quite a good score for just 50 epochs of training a tiny model! Note that these are greedy scores, yet they are pretty strong for such a short training run.We can further improve these scores by using the internal Prediction network to calculate beam scores. Changing the Decoding StrategyDuring training, for the sake of efficiency, we were using the `greedy_batch` decoding strategy. However, we might want to perform inference with another method - say, beam search.NeMo allows changing the decoding strategy easily after the model has been trained.
###Code
import copy
decoding_config = copy.deepcopy(config.model.decoding)
print(OmegaConf.to_yaml(decoding_config))
# Update the config for the decoding strategy
decoding_config.strategy = "alsd" # Options are `greedy`, `greedy_batch`, `beam`, `tsd` and `alsd`
decoding_config.beam.beam_size = 4 # Increase beam size for better scores, but it will take much longer for transcription !
# Finally update the model's decoding strategy !
model.change_decoding_strategy(decoding_config)
trainer.test(model)
###Output
_____no_output_____
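###Markdown
------The cell above evaluated the model with ALSD decoding. If you would like to compare the available strategies side by side, a simple loop works well. The sketch below times each evaluation with Python's `time` module; it re-uses `decoding_config`, `model` and `trainer` from the cells above and is only meant as a starting point for your own comparison.
###Code
# Optional: compare decoding strategies (rough wall-clock timing only)
import time

for strategy in ["greedy_batch", "beam", "tsd", "alsd"]:
    decoding_config.strategy = strategy
    model.change_decoding_strategy(decoding_config)
    start = time.time()
    results = trainer.test(model)
    print(f"Strategy {strategy:12s} took {time.time() - start:.1f} s -> {results}")
###Output
_____no_output_____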
###Markdown
------Here, we improved our scores significantly by using the `Alignment-Length Synchronous Decoding` beam search. Feel free to try the other algorithms and compare the speed-accuracy tradeoff! (Extra) Extracting Transducer Model Alignments Transducers are unique in the sense that for each timestep $t \le T$, they can emit multiple target tokens $u_t$. During training, this is represented as the $T \times U$ joint that maps to the vocabulary $V$. During inference, there is no need to compute the full joint $T \times U$. Instead, after the model predicts the `Transducer Blank` token at the current timestep $t$ while predicting the target token $u_t$, the model will move onto the next acoustic timestep $t + 1$. As such, we can obtain the diagonal alignment of the Transducer model per sample relatively simply.------**Note**: While alignments can be calculated for both greedy and beam search - it is non-trivial to incorporate this alignment information for beam decoding. Therefore NeMo only supports extracting alignments during greedy decoding. -----Restore model to greedy decoding for alignment calculation
###Code
decoding_config.strategy = "greedy_batch"
# Special flag which is generally disabled
# Instruct Greedy Decoders to preserve alignment information during autoregressive decoding
with open_dict(decoding_config):
decoding_config.preserve_alignments = True
decoding_config.fused_batch_size = -1 # temporarily stop fused batch during inference.
model.change_decoding_strategy(decoding_config)
###Output
_____no_output_____
###Markdown
-------Set up a test data loader that we will use to obtain the alignments for a single batch.
###Code
test_dl = model.test_dataloader()
test_dl = iter(test_dl)
batch = next(test_dl)
device = torch.device('cuda' if accelerator == 'gpu' else 'cpu')
def rnnt_alignments(model, batch):
model = model.to(device)
encoded, encoded_len = model.forward(
input_signal=batch[0].to(device), input_signal_length=batch[1].to(device)
)
current_hypotheses = model.decoding.rnnt_decoder_predictions_tensor(
encoded, encoded_len, return_hypotheses=True
)
del encoded, encoded_len
    # current_hypotheses is a tuple of:
    # 1) the best hypothesis per sample
    # 2) a sorted list of all hypotheses per sample (if using beam search); None otherwise
return current_hypotheses
# Get a batch of hypotheses, as well as a batch of all obtained hypotheses (if beam search is used)
hypotheses, all_hypotheses = rnnt_alignments(model, batch)
###Output
_____no_output_____
###Markdown
------Select a sample ID from within the batch to observe the alignment information contained in the Hypothesis.
###Code
# Select the sample ID from within the batch
SAMPLE_ID = 0
# Obtain the hypothesis for this sample, as well as some ground truth information about this sample
hypothesis = hypotheses[SAMPLE_ID]
original_sample_len = batch[1][SAMPLE_ID]
ground_truth = batch[2][SAMPLE_ID]
# The Hypothesis object contains a lot of useful information regarding the decoding step.
print(hypothesis)
###Output
_____no_output_____
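###Markdown
------Rather than printing the entire object, we can also pull out just the fields used in the rest of this section. The attribute names below (`text`, `y_sequence`, `alignments`, `length`) are the ones accessed later in this tutorial.
###Code
# Summarize the fields of the hypothesis that we will use below
print("Predicted text          :", hypothesis.text)
print("Number of target tokens :", len(hypothesis.y_sequence))
print("Alignment timesteps (T) :", len(hypothesis.alignments))
print("Encoded length          :", int(hypothesis.length))
###Output
_____no_output_____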
###Markdown
-------Now, decode the hypothesis and compare it against the ground truth text. Note - this decoded hypothesis is at *sub-word* level for this model. Therefore sub-word tokens such as `_` may be seen here.
###Code
decoded_text = hypothesis.text
decoded_hypothesis = model.decoding.decode_ids_to_tokens(hypothesis.y_sequence.cpu().numpy().tolist())
decoded_ground_truth = model.decoding.tokenizer.ids_to_text(ground_truth.cpu().numpy().tolist())
print("Decoded ground truth :", decoded_ground_truth)
print("Decoded hypothesis :", decoded_text)
print("Decoded hyp tokens :", decoded_hypothesis)
###Output
_____no_output_____
###Markdown
---------Next we print out the 2-d alignment grid of the RNNT model:
###Code
alignments = hypothesis.alignments
# These two values should normally always match
print("Length of alignments (T): ", len(alignments))
print("Length of padded acoustic model after striding : ", int(hypothesis.length))
###Output
_____no_output_____
###Markdown
------Finally, let us calculate the alignment grid. We will de-tokenize the sub-word token if it is a valid index in the vocabulary and use `''` as a placeholder for the `Transducer Blank` token. Note that each `timestep` here corresponds (roughly) to a timestamp of $timestep \times total\_stride\_of\_model \times preprocessor.window\_stride$ seconds. **Note**: You can modify the value of `config.model.loss.warprnnt_numba_kwargs.fastemit_lambda` prior to training and see an impact on final alignment latency!
###Code
# Compute the alignment grid
for ti in range(len(alignments)):
t_u = []
for uj in range(len(alignments[ti])):
token = alignments[ti][uj]
token = token.to('cpu').numpy().tolist()
decoded_token = model.decoding.decode_ids_to_tokens([token])[0] if token != model.decoding.blank_id else '' # token at index len(vocab) == RNNT blank token
t_u.append(decoded_token)
print(f"Tokens at timestep {ti} = {t_u}")
###Output
_____no_output_____
###Markdown
------Download and prepare the two subsets of `AN4`
###Code
import wget
import tarfile
import subprocess
import glob
data_dir = "datasets"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
if os.path.exists(f"{data_dir}/an4"):
print("Preparing AN4 dataset ...")
an4_path = f"{data_dir}/"
!python scripts/process_an4_data.py \
--data_root=$an4_path
print("AN4 prepared !")
# Manifest filepaths
TRAIN_MANIFEST = os.path.join(data_dir, "an4", "train_manifest.json")
TEST_MANIFEST = os.path.join(data_dir, "an4", "test_manifest.json")
###Output
_____no_output_____
###Markdown
Preparing the tokenizerNow that we have a dataset ready, we need to decide whether to use a character-based model or a sub-word-based model. For completeness' sake, we will use a tokenizer based model so that we can leverage a modern encoder architecture like ContextNet or Conformer-T.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
-----Since the dataset is tiny, we can use a small SentencePiece based tokenizer. We always delete the tokenizer directory so any changes to the manifest files are always replicated in the tokenizer.
###Code
VOCAB_SIZE = 32 # can be any value above 29
TOKENIZER_TYPE = "spe" # can be wpe or spe
SPE_TYPE = "unigram" # can be bpe or unigram
# ------------------------------------------------------------------- #
!rm -r tokenizers/
if not os.path.exists("tokenizers"):
os.makedirs("tokenizers")
!python scripts/process_asr_text_tokenizer.py \
--manifest=$TRAIN_MANIFEST \
--data_root="tokenizers" \
--tokenizer=$TOKENIZER_TYPE \
--spe_type=$SPE_TYPE \
--no_lower_case \
--log \
--vocab_size=$VOCAB_SIZE
# Tokenizer path
if TOKENIZER_TYPE == 'spe':
TOKENIZER = os.path.join("tokenizers", f"tokenizer_spe_{SPE_TYPE}_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "bpe"
else:
TOKENIZER = os.path.join("tokenizers", f"tokenizer_wpe_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "wpe"
###Output
_____no_output_____
###Markdown
Preparing a Transducer ModelNow that we have the dataset and tokenizer prepared, let us begin by setting up the config of the Transducer model! In this tutorial, we will build a slightly modified ContextNet architecture (which is obtained from the paper [ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context](https://arxiv.org/abs/2005.03191)).We can note that many of the steps here are identical to the setup of a CTC model! Prepare the configFor a dataset such as AN4, we do not need such a deep model. In fact, the depth of this model will cause much slower convergence on a small dataset, which would require far too long to train on Colab.In order to speed up training for this demo, we will take only the first five blocks of ContextNet, and discard the rest - and we can do this directly from the config.**Note**: On any realistic dataset (say Librispeech) this step would hurt the model's accuracy significantly. It is being done only to reduce the time spent waiting for training to finish on Colab.
###Code
from omegaconf import OmegaConf, open_dict
config = OmegaConf.load("configs/contextnet_rnnt.yaml")
###Output
_____no_output_____
###Markdown
-----Here, we will slice off the first five blocks from the Jasper block (used to build ContextNet). Setting the config with this subset will create a stride 2x model with just five blocks.We will also explicitly state that the last block dimension must be obtained from `model.model_defaults.enc_hidden` inside the config.
###Code
config.model.encoder.jasper = config.model.encoder.jasper[:5]
config.model.encoder.jasper[-1].filters = '${model.model_defaults.enc_hidden}'
###Output
_____no_output_____
###Markdown
-------Next, set up the data loaders of the config for the ContextNet model.
###Code
# print out the train and validation configs to know what needs to be changed
print(OmegaConf.to_yaml(config.model.train_ds))
###Output
_____no_output_____
###Markdown
-------We can note that the config here is nearly identical to the CTC ASR model configs! So let us take the same steps here to update the configs.
###Code
config.model.train_ds.manifest_filepath = TRAIN_MANIFEST
config.model.validation_ds.manifest_filepath = TEST_MANIFEST
config.model.test_ds.manifest_filepath = TEST_MANIFEST
###Output
_____no_output_____
###Markdown
------Next, we need to setup the tokenizer section of the config
###Code
print(OmegaConf.to_yaml(config.model.tokenizer))
config.model.tokenizer.dir = TOKENIZER
config.model.tokenizer.type = TOKENIZER_TYPE_CFG
###Output
_____no_output_____
###Markdown
------Now, we can update the optimization and augmentation for this dataset in order to converge to some reasonable score within a short training run.
###Code
print(OmegaConf.to_yaml(config.model.optim))
# Finally, let's remove logging of samples and the warmup since the dataset is small (similar to CTC models)
config.model.log_prediction = False
config.model.optim.sched.warmup_steps = None
###Output
_____no_output_____
###Markdown
------Next, we remove the spec augment that is provided by default for ContextNet. While additional augmentation would surely help training, it would require longer training to see significant benefits.
###Code
print(OmegaConf.to_yaml(config.model.spec_augment))
config.model.spec_augment.freq_masks = 0
config.model.spec_augment.time_masks = 0
###Output
_____no_output_____
###Markdown
------... We are now almost done! Most of the updates to a Transducer config are nearly the same as any CTC model. Fused Batch during training and evaluationWe discussed in the previous tutorial (Intro-to-Transducers) the significant memory cost of the Transducer Joint calculation during training. We also discussed that NeMo provides a simple yet effective method to nearly sidestep this limitation. We can now dive deeper into understanding what precisely NeMo's Transducer framework will do to alleviate this memory consumption issue.The following sub-cells are **voluntary** and valuable for understanding the cause, effect, and resolution of memory issues in Transducer models. The content can be skipped if one is familiar with the topic, and it is only required to use the `fused batch step`. Transducer Memory reduction with Fused Batch stepThe following few cells explain why memory is an issue when training Transducer models and how NeMo tackles the issue with its Fused Batch step.The material can be read for a thorough understanding, otherwise, it can be skipped. Diving deeper into the memory costs of Transducer Joint-------One of the significant limitations of Transducers is the exorbitant memory cost of computing the Joint module. The Joint module is comprised of two steps. 1) Projecting the Acoustic and Transcription feature dimensions to some standard hidden dimension (specified by `model.model_defaults.joint_hidden`)2) Projecting this intermediate hidden dimension to the final vocabulary space to obtain the transcription.Take the following example.**BS**=32 ; **T** (after **2x** stride) = 800, **U** (with character encoding) = 400-450 tokens, Vocabulary size **V** = 28 (26 alphabet chars, space and apostrophe). Let the hidden dimension of the Joint model be 640 (Most Google Transducer papers use hidden dimension of 640).$ Memory \, (Hidden, \, gb) = 32 \times 800 \times 450 \times 640 \times 4 = 29.49 $ gigabytes (4 bytes per float). $ Memory \, (Joint, \, gb) = 32 \times 800 \times 450 \times 28 \times 4 = 1.290 $ gigabytes (4 bytes per float)-----**NOTE**: This is just for the forward pass! We need to double this memory to store gradients! This much memory is also just for the Joint model **alone**. Far more memory is required for the Prediction model as well as the large Acoustic model itself and its gradients!Even with mixed precision, that's $\sim 30$ GB of GPU RAM for just 1 part of the network + its gradients.--------- Simple methods to reduce memory consumption------The easiest way to reduce memory consumption is to perform more downsampling in the acoustic model and use sub-word tokenization of the text to reduce the length of the target sequence.**BS**=32 ; **T** (after **8x** stride) = 200, **U** (with sub-word encoding) = 100-180 tokens, Vocabulary size **V** = 1024.$ Memory \, (Hidden, \, gb) = 32 \times 200 \times 150 \times 640 \times 4 = 2.45 $ gigabytes (4 bytes per float).$ Memory \, (Joint, \, gb) = 32 \times 200 \times 150 \times 1024 \times 4 = 3.93 $ gigabytes (4 bytes per float)-----Using Automatic Mixed Precision, we expend just around 6-7 GB of GPU RAM on the Joint + its gradient.The above memory cost is much more tractable - but we generally want larger and larger acoustic models. It is consistently the easiest way to improve transcription accuracy. So that means on a limited 32 GB GPU, we have to partition 7 GB just for the Joint and remaining memory allocated between Transcription + Acoustic Models. 
Fused Transcription-Joint-Loss-WER (also called Batch Splitting)----------The fundamental problem is that the joint tensor grows in size when `[T x U]` grows in size. This growth in memory cost is due to many reasons - either by model construction (downsampling) or the choice of dataset preprocessing (character tokenization vs. sub-word tokenization).Another dimension that NeMo can control is **batch**. Due to how we batch our samples, small and large samples all get clumped together into a single batch. So even though the individual samples are not all as long as the maximum length of T and U in that batch, when a batch of such samples is constructed, it will consume a significant amount of memory for the sake of compute efficiency.So as is always the case - **trade-off compute speed for memory savings**.------The fused operation goes as follows : 1) Forward the entire acoustic model in a single pass. (Use global batch size here for acoustic model - found in `model.*_ds.batch_size`)2) Split the Acoustic Model's logits by `fused_batch_size` and loop over these sub-batches.3) Construct a sub-batch of same `fused_batch_size` for the Prediction model. Now the target sequence length is $U_{sub-batch} < U$. 4) Feed this $U_{sub-batch}$ into the Joint model, along with a sub-batch from the Acoustic model (with $T_{sub-batch} < T$). Remember, we only have to slice off a part of the acoustic model here since we have the full batch of samples $(B, T, D)$ from the acoustic model.5) Performing steps (3) and (4) yields $T_{sub-batch}$ and $U_{sub-batch}$. Perform sub-batch joint step - costing an intermediate $(B, T_{sub-batch}, U_{sub-batch}, V)$ in memory.6) Compute loss on sub-batch and preserve in a list to be later concatenated. 7) Compute sub-batch metrics (such as Character / Word Error Rate) using the above Joint tensor and sub-batch of ground truth labels. Preserve the scores to be averaged across the entire batch later.8) Delete the sub-batch joint matrix $(B, T_{sub-batch}, U_{sub-batch}, V)$. Only gradients from .backward() are preserved now in the computation graph.9) Repeat steps (3) - (8) until all sub-batches are consumed.10) Cleanup step. Compute full batch WER and log. Concatenate loss list and pass to PTL to compute the equivalent of the original (full batch) Joint step. Delete ancillary objects necessary for sub-batching. Setting up Fused Batch step in a Transducer ConfigAfter all that discussion above, let us look at how to enable that entire pipeline in NeMo.As we can note below, it takes precisely two changes in the config to enable the fused batch step:
###Code
print(OmegaConf.to_yaml(config.model.joint))
# Two lines to enable the fused batch step
config.model.joint.experimental_fuse_loss_wer = True
config.model.joint.fused_batch_size = 16 # this can be any value (preferably less than model.*_ds.batch_size)
# We will also reduce the hidden dimension of the joint and the prediction networks to preserve some memory
config.model.model_defaults.pred_hidden = 64
config.model.model_defaults.joint_hidden = 64
###Output
_____no_output_____
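###Markdown
To make the mechanism above more concrete, the cell below is a purely schematic, hypothetical sketch of the batch-splitting idea in plain PyTorch - it is **not** NeMo's actual implementation, and `toy_fused_joint_loss`, `joint_fn` and `loss_fn` are illustrative placeholders. The point is simply that the large $(B, T, U, V)$ tensor is never materialized for the full batch at once.
###Code
import torch

def toy_fused_joint_loss(enc, dec, joint_fn, loss_fn, fused_batch_size):
    # enc: (B, T, D_enc) acoustic features for the *full* batch (computed in a single pass)
    # dec: (B, U, D_dec) prediction network features for the *full* batch
    losses = []
    for enc_sub, dec_sub in zip(enc.split(fused_batch_size), dec.split(fused_batch_size)):
        joint_sub = joint_fn(enc_sub, dec_sub)   # (B_sub, T, U, V) - only this sub-batch lives in memory
        losses.append(loss_fn(joint_sub))        # per-sub-batch loss, averaged over the batch afterwards
        del joint_sub                            # drop the large intermediate tensor before the next chunk
    return torch.stack(losses).mean()
###Output
_____no_output_____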
###Markdown
--------Finally, since the dataset is tiny, we do not need an enormous model (the default is roughly 40 M parameters!).
###Code
# Use just 128 filters across the model to speed up training and reduce parameter count
config.model.model_defaults.filters = 128
###Output
_____no_output_____
###Markdown
Initialize a Transducer ASR ModelFinally, let us create a Transducer model, which is as easy as changing a line of import if you already have a script to create CTC models. We will use a small model since the dataset is just 5 hours of speech. ------Setup a Pytorch Lightning Trainer:
###Code
import torch
from pytorch_lightning import Trainer
if torch.cuda.is_available():
gpus = 1
else:
gpus = 0
EPOCHS = 50
# Initialize a Trainer for the Transducer model
trainer = Trainer(gpus=gpus, max_epochs=EPOCHS,
checkpoint_callback=False, logger=False,
log_every_n_steps=5, check_val_every_n_epoch=10)
# Import the Transducer Model
import nemo.collections.asr as nemo_asr
# Build the model
model = nemo_asr.models.EncDecRNNTBPEModel(cfg=config.model, trainer=trainer)
model.summarize();
###Output
_____no_output_____
###Markdown
------We now have a Transducer model ready to be trained! (Optional) Partially loading pre-trained weights from another modelAn interesting point to note about Transducer models - the Acoustic model config (and therefore the Acoustic model itself) can be shared between CTC and Transducer models.This means that we can initialize the weights of a Transducer's Acoustic model with weights from a pre-trained CTC encoder model.------**Note**: This step is optional and not necessary at all to train a Transducer model. Below, we show the steps that we would take if we wanted to do this, however as the loaded model has different kernel sizes compared to the current model, the checkpoint cannot be loaded.
###Code
# Load a small CTC model
# ctc_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_256", map_location='cpu')
###Output
_____no_output_____
###Markdown
------Then load the state dict of the CTC model's encoder into the Transducer model's encoder.
###Code
# <<< NOTE: This is only for demonstration ! >>>
# The cell below will fail because the two models have incompatible kernel sizes in their Conv layers.
# <<< NOTE: Below cell is only shown to illustrate the method >>>
# model.encoder.load_state_dict(ctc_model.encoder.state_dict(), strict=True)
###Output
_____no_output_____
###Markdown
Training on AN4Now that the model is ready, we can finally train it!
###Code
# Prepare NeMo's Experiment manager to handle checkpoint saving and logging for us
from nemo.utils import exp_manager
# Environment variable generally used for multi-node multi-gpu training.
# In notebook environments, this flag is unnecessary and can cause logs of multiple training runs to overwrite each other.
os.environ.pop('NEMO_EXPM_VERSION', None)
exp_config = exp_manager.ExpManagerConfig(
exp_dir=f'experiments/',
name=f"Transducer-Model",
checkpoint_callback_params=exp_manager.CallbackParams(
monitor="val_wer",
mode="min",
always_save_nemo=True,
save_best_model=True,
),
)
exp_config = OmegaConf.structured(exp_config)
logdir = exp_manager.exp_manager(trainer, exp_config)
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir /content/experiments/Transducer-Model/
else:
print("To use TensorBoard, please use this notebook in a Google Colab environment.")
# Release resources prior to training
import gc
gc.collect()
if gpus > 0:
torch.cuda.empty_cache()
# Train the model
%%time
trainer.fit(model)
###Output
_____no_output_____
###Markdown
-------Let's check the final performance on the test set.
###Code
%%time
trainer.test(model)
###Output
_____no_output_____
###Markdown
------The model should obtain some score between 10-12% WER after 50 epochs of training. Quite a good score for just 50 epochs of training a tiny model! Note that these are greedy scores, yet they are pretty strong for such a short training run.We can further improve these scores by using the internal Prediction network to calculate beam scores. Changing the Decoding StrategyDuring training, for the sake of efficiency, we were using the `greedy_batch` decoding strategy. However, we might want to perform inference with another method - say, beam search.NeMo allows changing the decoding strategy easily after the model has been trained.
###Code
import copy
decoding_config = copy.deepcopy(config.model.decoding)
print(OmegaConf.to_yaml(decoding_config))
# Update the config for the decoding strategy
decoding_config.strategy = "alsd" # Options are `greedy`, `greedy_batch`, `beam`, `tsd` and `alsd`
decoding_config.beam.beam_size = 4 # Increase beam size for better scores, but it will take much longer for transcription !
# Finally update the model's decoding strategy !
model.change_decoding_strategy(decoding_config)
%%time
trainer.test(model)
###Output
_____no_output_____
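###Markdown
As an optional aside, a small loop like the hypothetical sketch below can be used to measure the speed / accuracy trade-off of several decoding strategies on the same test set (it simply re-uses `change_decoding_strategy` and `trainer.test` from above).
###Code
import time

# Compare a few decoding strategies; each iteration re-runs the (small) test set.
for strategy in ["greedy_batch", "tsd", "alsd"]:
    decoding_config.strategy = strategy
    model.change_decoding_strategy(decoding_config)
    start = time.time()
    trainer.test(model)
    print(f"{strategy}: {time.time() - start:.1f} s")
###Output
_____no_output_____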
###Markdown
------Here, we improved our scores significantly by using the `Alignment-Length Synchronous Decoding` beam search. Feel free to try the other algorithms and compare the speed-accuracy tradeoff! (Extra) Extracting Transducer Model Alignments Transducers are unique in the sense that for each timestep $t \le T$, they can emit multiple target tokens $u_t$. During training, this is represented as the $T \times U$ joint that maps to the vocabulary $V$. During inference, there is no need to compute the full joint $T \times U$. Instead, after the model predicts the `Transducer Blank` token at the current timestep $t$ while predicting the target token $u_t$, the model will move onto the next acoustic timestep $t + 1$. As such, we can obtain the diagonal alignment of the Transducer model per sample relatively simply.------**Note**: While alignments can be calculated for both greedy and beam search - it is non-trivial to incorporate this alignment information for beam decoding. Therefore NeMo only supports extracting alignments during greedy decoding. -----Restore model to greedy decoding for alignment calculation
###Code
decoding_config.strategy = "greedy_batch"
# Special flag which is generally disabled
# Instruct Greedy Decoders to preserve alignment information during autoregressive decoding
with open_dict(decoding_config):
decoding_config.preserve_alignments = True
model.change_decoding_strategy(decoding_config)
###Output
_____no_output_____
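###Markdown
For intuition, the greedy decoding loop that produces these per-timestep alignments can be sketched conceptually as below. This is **not** NeMo's implementation; `joint` and `prediction` are hypothetical placeholders for the Joint and Prediction networks.
###Code
# Conceptual sketch of greedy Transducer decoding for a single utterance.
def greedy_rnnt_decode(encoder_out, joint, prediction, blank_id, max_symbols_per_step=10):
    # encoder_out: acoustic features of shape (T, D_enc) for one utterance
    hypothesis = []
    last_token, state = blank_id, None
    for t in range(encoder_out.shape[0]):          # loop over acoustic timesteps t <= T
        for _ in range(max_symbols_per_step):      # allow multiple emissions per timestep
            dec_out, new_state = prediction(last_token, state)
            token = int(joint(encoder_out[t], dec_out).argmax())
            if token == blank_id:                  # blank -> advance to timestep t + 1
                break
            hypothesis.append(token)               # non-blank -> emit and stay at timestep t
            last_token, state = token, new_state
    return hypothesis
###Output
_____no_output_____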
###Markdown
-------Set up a test data loader that we will use to obtain the alignments for a single batch.
###Code
test_dl = model.test_dataloader()
test_dl = iter(test_dl)
batch = next(test_dl)
device = torch.device('cuda' if gpus > 0 else 'cpu')
def rnnt_alignments(model, batch):
model = model.to(device)
encoded, encoded_len = model.forward(
input_signal=batch[0].to(device), input_signal_length=batch[1].to(device)
)
current_hypotheses = model.decoding.rnnt_decoder_predictions_tensor(
encoded, encoded_len, return_hypotheses=True
)
del encoded, encoded_len
# current_hypotheses is a tuple of:
# 1) best hypothesis for each sample in the batch
# 2) sorted list of hypotheses (if using beam search); None otherwise
return current_hypotheses
# Get a batch of hypotheses, as well as a batch of all obtained hypotheses (if beam search is used)
hypotheses, all_hypotheses = rnnt_alignments(model, batch)
###Output
_____no_output_____
###Markdown
------Select a sample ID from within the batch to observe the alignment information contained in the Hypothesis.
###Code
# Select the sample ID from within the batch
SAMPLE_ID = 0
# Obtain the hypothesis for this sample, as well as some ground truth information about this sample
hypothesis = hypotheses[SAMPLE_ID]
original_sample_len = batch[1][SAMPLE_ID]
ground_truth = batch[2][SAMPLE_ID]
# The Hypothesis object contains a lot of useful information regarding the decoding step.
print(hypothesis)
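# Fields of the Hypothesis object used later in this tutorial:
# hypothesis.text, hypothesis.y_sequence, hypothesis.alignments and hypothesis.length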
###Output
_____no_output_____
###Markdown
-------Now, decode the hypothesis and compare it against the ground truth text. Note - this decoded hypothesis is at *sub-word* level for this model. Therefore sub-word tokens such as `_` may be seen here.
###Code
decoded_text = hypothesis.text
decoded_hypothesis = model.decoding.decode_ids_to_tokens(hypothesis.y_sequence.cpu().numpy().tolist())
decoded_ground_truth = model.decoding.tokenizer.ids_to_text(ground_truth.cpu().numpy().tolist())
print("Decoded ground truth :", decoded_ground_truth)
print("Decoded hypothesis :", decoded_text)
print("Decoded hyp tokens :", decoded_hypothesis)
###Output
_____no_output_____
###Markdown
---------Next we print out the 2-d alignment grid of the RNNT model:
###Code
alignments = hypothesis.alignments
# These two values should normally always match
print("Length of alignments (T): ", len(alignments))
print("Length of padded acoustic model after striding : ", int(hypothesis.length))
###Output
_____no_output_____
###Markdown
------Finally, let us calculate the alignment grid. We will de-tokenize the sub-word token if it is a valid index in the vocabulary and use `''` as a placeholder for the `Transducer Blank` token.Note that each `timestep` here is (roughly) $timestep * total\_stride\_of\_model * preprocessor.window\_stride$ seconds timestamp. **Note**: You can modify the value of `config.model.loss.warprnnt_numba_kwargs.fastemit_lambda` prior to training and see an impact on final alignment latency!
###Code
# Compute the alignment grid
for ti in range(len(alignments)):
t_u = []
for uj in range(len(alignments[ti])):
token = alignments[ti][uj]
token = token.to('cpu').numpy().tolist()
decoded_token = model.decoding.decode_ids_to_tokens([token])[0] if token != model.decoding.blank_id else '' # token at index len(vocab) == RNNT blank token
t_u.append(decoded_token)
print(f"Tokens at timestep {ti} = {t_u}")
###Output
_____no_output_____
###Markdown
Automatic Speech Recognition with Transducer ModelsThis notebook is a basic tutorial for creating a Transducer ASR model and then training it on a small dataset (AN4). It includes discussion relevant to reducing memory issues when training such models and demonstrates how to change the decoding strategy after training. Finally, it also provides a brief glimpse of extracting alignment information from a trained Transducer model.As we will see in this tutorial, apart from the differences in the config and the class used to instantiate the model, nearly all steps are precisely similar to any CTC-based model training. Many concepts such as data loader setup, optimization setup, pre-trained checkpoint weight loading will be nearly identical between CTC and Transducer models.In essence, NeMo makes it seamless to take a config for a CTC ASR model, add in a few components related to Transducers (often without any modifications) and use a different class to instantiate a Transducer model!--------**Note**: It is assumed that the previous tutorial - "Intro-to-Transducers" has been reviewed, and there is some familiarity with the config components of transducer models. Preparing the datasetIn this tutorial, we will be utilizing the `AN4`dataset - also known as the Alphanumeric dataset, which was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly. Let's first download the preparation script from NeMo's scripts directory -
###Code
import os
if not os.path.exists("scripts/"):
os.makedirs("scripts")
if not os.path.exists("scripts/process_an4_data.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_an4_data.py
###Output
_____no_output_____
###Markdown
------Download and prepare the two subsets of `AN4`
###Code
import wget
import tarfile
import subprocess
import glob
data_dir = "datasets"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'https://dldata-public.s3.us-east-2.amazonaws.com/an4_sphere.tar.gz' # for the original source, please visit http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
# --- Building Manifest Files --- #
import json
import librosa
# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
with open(transcripts_path, 'r') as fin:
with open(manifest_path, 'w') as fout:
for line in fin:
# Lines look like this:
# <s> transcript </s> (fileID)
transcript = line[: line.find('(')-1].lower()
transcript = transcript.replace('<s>', '').replace('</s>', '')
transcript = transcript.strip()
file_id = line[line.find('(')+1 : -2] # e.g. "cen4-fash-b"
audio_path = os.path.join(
data_dir, wav_path,
file_id[file_id.find('-')+1 : file_id.rfind('-')],
file_id + '.wav')
duration = librosa.core.get_duration(filename=audio_path)
# Write the metadata to the manifest
metadata = {
"audio_filepath": audio_path,
"duration": duration,
"text": transcript
}
json.dump(metadata, fout)
fout.write('\n')
# Building Manifests
print("******")
train_transcripts = os.path.join(data_dir, 'an4/etc/an4_train.transcription')
train_manifest = os.path.join(data_dir, 'an4/train_manifest.json')
if not os.path.isfile(train_manifest):
build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
print("Training manifest created.")
test_transcripts = os.path.join(data_dir, 'an4/etc/an4_test.transcription')
test_manifest = os.path.join(data_dir, 'an4/test_manifest.json')
if not os.path.isfile(test_manifest):
build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
print("Test manifest created.")
print("***Done***")
# Manifest filepaths
TRAIN_MANIFEST = train_manifest
TEST_MANIFEST = test_manifest
###Output
_____no_output_____
###Markdown
Preparing the tokenizerNow that we have a dataset ready, we need to decide whether to use a character-based model or a sub-word-based model. For completeness' sake, we will use a tokenizer based model so that we can leverage a modern encoder architecture like ContextNet or Conformer-T.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
-----Since the dataset is tiny, we can use a small SentencePiece based tokenizer. We always delete the tokenizer directory so any changes to the manifest files are always replicated in the tokenizer.
###Code
VOCAB_SIZE = 32 # can be any value above 29
TOKENIZER_TYPE = "spe" # can be wpe or spe
SPE_TYPE = "unigram" # can be bpe or unigram
# ------------------------------------------------------------------- #
!rm -r tokenizers/
if not os.path.exists("tokenizers"):
os.makedirs("tokenizers")
!python scripts/process_asr_text_tokenizer.py \
--manifest=$TRAIN_MANIFEST \
--data_root="tokenizers" \
--tokenizer=$TOKENIZER_TYPE \
--spe_type=$SPE_TYPE \
--no_lower_case \
--log \
--vocab_size=$VOCAB_SIZE
# Tokenizer path
if TOKENIZER_TYPE == 'spe':
TOKENIZER = os.path.join("tokenizers", f"tokenizer_spe_{SPE_TYPE}_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "bpe"
else:
TOKENIZER = os.path.join("tokenizers", f"tokenizer_wpe_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "wpe"
###Output
_____no_output_____
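###Markdown
-----(Optional) If you are curious how the freshly trained tokenizer segments a transcript, you can inspect it directly. The commented snippet below is only a hypothetical illustration; it assumes the script above wrote a SentencePiece model file to `<TOKENIZER>/tokenizer.model`.
###Code
# (Hypothetical illustration) Peek at how the trained tokenizer splits a sample transcript.
# import sentencepiece as spm
# sp = spm.SentencePieceProcessor(model_file=os.path.join(TOKENIZER, "tokenizer.model"))
# print(sp.encode("NINETY ONE FIFTY", out_type=str))
###Output
_____no_output_____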
###Markdown
Preparing a Transducer ModelNow that we have the dataset and tokenizer prepared, let us begin by setting up the config of the Transducer model! In this tutorial, we will build a slightly modified ContextNet architecture (which is obtained from the paper [ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context](https://arxiv.org/abs/2005.03191)).We can note that many of the steps here are identical to the setup of a CTC model! Prepare the configFor a dataset such as AN4, we do not need such a deep model. In fact, the depth of this model will cause much slower convergence on a small dataset, which would require far too long to train on Colab.In order to speed up training for this demo, we will take only the first five blocks of ContextNet, and discard the rest - and we can do this directly from the config.**Note**: On any realistic dataset (say Librispeech) this step would hurt the model's accuracy significantly. It is being done only to reduce the time spent waiting for training to finish on Colab.
###Code
from omegaconf import OmegaConf, open_dict
config = OmegaConf.load("../../examples/asr/conf/contextnet_rnnt/contextnet_rnnt.yaml")
###Output
_____no_output_____
###Markdown
-----Here, we will slice off the first five blocks from the Jasper block (used to build ContextNet). Setting the config with this subset will create a stride 2x model with just five blocks.We will also explicitly state that the last block dimension must be obtained from `model.model_defaults.enc_hidden` inside the config.
###Code
config.model.encoder.jasper = config.model.encoder.jasper[:5]
config.model.encoder.jasper[-1].filters = '${model.model_defaults.enc_hidden}'
###Output
_____no_output_____
###Markdown
-------Next, set up the data loaders of the config for the ContextNet model.
###Code
# print out the train and validation configs to know what needs to be changed
print(OmegaConf.to_yaml(config.model.train_ds))
###Output
_____no_output_____
###Markdown
-------We can note that the config here is nearly identical to the CTC ASR model configs! So let us take the same steps here to update the configs.
###Code
config.model.train_ds.manifest_filepath = TRAIN_MANIFEST
config.model.validation_ds.manifest_filepath = TEST_MANIFEST
config.model.test_ds.manifest_filepath = TEST_MANIFEST
###Output
_____no_output_____
###Markdown
------Next, we need to setup the tokenizer section of the config
###Code
print(OmegaConf.to_yaml(config.model.tokenizer))
config.model.tokenizer.dir = TOKENIZER
config.model.tokenizer.type = TOKENIZER_TYPE_CFG
###Output
_____no_output_____
###Markdown
------Now, we can update the optimization and augmentation for this dataset in order to converge to some reasonable score within a short training run.
###Code
print(OmegaConf.to_yaml(config.model.optim))
# Finally, let's remove logging of samples and the warmup since the dataset is small (similar to CTC models)
config.model.log_prediction = False
config.model.optim.sched.warmup_steps = None
###Output
_____no_output_____
###Markdown
------Next, we remove the spec augment that is provided by default for ContextNet. While additional augmentation would surely help training, it would require longer training to see significant benefits.
###Code
print(OmegaConf.to_yaml(config.model.spec_augment))
config.model.spec_augment.freq_masks = 0
config.model.spec_augment.time_masks = 0
###Output
_____no_output_____
###Markdown
------... We are now almost done! Most of the updates to a Transducer config are nearly the same as any CTC model. Fused Batch during training and evaluationWe discussed in the previous tutorial (Intro-to-Transducers) the significant memory cost of the Transducer Joint calculation during training. We also discussed that NeMo provides a simple yet effective method to nearly sidestep this limitation. We can now dive deeper into understanding what precisely NeMo's Transducer framework will do to alleviate this memory consumption issue.The following sub-cells are **voluntary** and valuable for understanding the cause, effect, and resolution of memory issues in Transducer models. The content can be skipped if one is familiar with the topic, and it is only required to use the `fused batch step`. Transducer Memory reduction with Fused Batch stepThe following few cells explain why memory is an issue when training Transducer models and how NeMo tackles the issue with its Fused Batch step.The material can be read for a thorough understanding, otherwise, it can be skipped. Diving deeper into the memory costs of Transducer Joint-------One of the significant limitations of Transducers is the exorbitant memory cost of computing the Joint module. The Joint module is comprised of two steps. 1) Projecting the Acoustic and Transcription feature dimensions to some standard hidden dimension (specified by `model.model_defaults.joint_hidden`)2) Projecting this intermediate hidden dimension to the final vocabulary space to obtain the transcription.Take the following example.**BS**=32 ; **T** (after **2x** stride) = 800, **U** (with character encoding) = 400-450 tokens, Vocabulary size **V** = 28 (26 alphabet chars, space and apostrophe). Let the hidden dimension of the Joint model be 640 (Most Google Transducer papers use hidden dimension of 640).$ Memory \, (Hidden, \, gb) = 32 \times 800 \times 450 \times 640 \times 4 = 29.49 $ gigabytes (4 bytes per float). $ Memory \, (Joint, \, gb) = 32 \times 800 \times 450 \times 28 \times 4 = 1.290 $ gigabytes (4 bytes per float)-----**NOTE**: This is just for the forward pass! We need to double this memory to store gradients! This much memory is also just for the Joint model **alone**. Far more memory is required for the Prediction model as well as the large Acoustic model itself and its gradients!Even with mixed precision, that's $\sim 30$ GB of GPU RAM for just 1 part of the network + its gradients.--------- Simple methods to reduce memory consumption------The easiest way to reduce memory consumption is to perform more downsampling in the acoustic model and use sub-word tokenization of the text to reduce the length of the target sequence.**BS**=32 ; **T** (after **8x** stride) = 200, **U** (with sub-word encoding) = 100-180 tokens, Vocabulary size **V** = 1024.$ Memory \, (Hidden, \, gb) = 32 \times 200 \times 150 \times 640 \times 4 = 2.45 $ gigabytes (4 bytes per float).$ Memory \, (Joint, \, gb) = 32 \times 200 \times 150 \times 1024 \times 4 = 3.93 $ gigabytes (4 bytes per float)-----Using Automatic Mixed Precision, we expend just around 6-7 GB of GPU RAM on the Joint + its gradient.The above memory cost is much more tractable - but we generally want larger and larger acoustic models. It is consistently the easiest way to improve transcription accuracy. So that means on a limited 32 GB GPU, we have to partition 7 GB just for the Joint and remaining memory allocated between Transcription + Acoustic Models. 
Fused Transcription-Joint-Loss-WER (also called Batch Splitting)----------The fundamental problem is that the joint tensor grows in size when `[T x U]` grows in size. This growth in memory cost is due to many reasons - either by model construction (downsampling) or the choice of dataset preprocessing (character tokenization vs. sub-word tokenization).Another dimension that NeMo can control is **batch**. Due to how we batch our samples, small and large samples all get clumped together into a single batch. So even though the individual samples are not all as long as the maximum length of T and U in that batch, when a batch of such samples is constructed, it will consume a significant amount of memory for the sake of compute efficiency.So as is always the case - **trade-off compute speed for memory savings**.------The fused operation goes as follows : 1) Forward the entire acoustic model in a single pass. (Use global batch size here for acoustic model - found in `model.*_ds.batch_size`)2) Split the Acoustic Model's logits by `fused_batch_size` and loop over these sub-batches.3) Construct a sub-batch of same `fused_batch_size` for the Prediction model. Now the target sequence length is $U_{sub-batch} < U$. 4) Feed this $U_{sub-batch}$ into the Joint model, along with a sub-batch from the Acoustic model (with $T_{sub-batch} < T$). Remember, we only have to slice off a part of the acoustic model here since we have the full batch of samples $(B, T, D)$ from the acoustic model.5) Performing steps (3) and (4) yields $T_{sub-batch}$ and $U_{sub-batch}$. Perform sub-batch joint step - costing an intermediate $(B, T_{sub-batch}, U_{sub-batch}, V)$ in memory.6) Compute loss on sub-batch and preserve in a list to be later concatenated. 7) Compute sub-batch metrics (such as Character / Word Error Rate) using the above Joint tensor and sub-batch of ground truth labels. Preserve the scores to be averaged across the entire batch later.8) Delete the sub-batch joint matrix $(B, T_{sub-batch}, U_{sub-batch}, V)$. Only gradients from .backward() are preserved now in the computation graph.9) Repeat steps (3) - (8) until all sub-batches are consumed.10) Cleanup step. Compute full batch WER and log. Concatenate loss list and pass to PTL to compute the equivalent of the original (full batch) Joint step. Delete ancillary objects necessary for sub-batching. Setting up Fused Batch step in a Transducer ConfigAfter all that discussion above, let us look at how to enable that entire pipeline in NeMo.As we can note below, it takes precisely two changes in the config to enable the fused batch step:
###Code
print(OmegaConf.to_yaml(config.model.joint))
# Two lines to enable the fused batch step
config.model.joint.fuse_loss_wer = True
config.model.joint.fused_batch_size = 16 # this can be any value (preferably less than model.*_ds.batch_size)
# We will also reduce the hidden dimension of the joint and the prediction networks to preserve some memory
config.model.model_defaults.pred_hidden = 64
config.model.model_defaults.joint_hidden = 64
###Output
_____no_output_____
###Markdown
--------Finally, since the dataset is tiny, we do not need an enormous model (the default is roughly 40 M parameters!).
###Code
# Use just 128 filters across the model to speed up training and reduce parameter count
config.model.model_defaults.filters = 128
###Output
_____no_output_____
###Markdown
Initialize a Transducer ASR ModelFinally, let us create a Transducer model, which is as easy as changing a line of import if you already have a script to create CTC models. We will use a small model since the dataset is just 5 hours of speech. ------Setup a Pytorch Lightning Trainer:
###Code
import torch
from pytorch_lightning import Trainer
if torch.cuda.is_available():
accelerator = 'gpu'
else:
accelerator = 'cpu'
EPOCHS = 50
# Initialize a Trainer for the Transducer model
trainer = Trainer(devices=1, accelerator=accelerator, max_epochs=EPOCHS,
enable_checkpointing=False, logger=False,
log_every_n_steps=5, check_val_every_n_epoch=10)
# Import the Transducer Model
import nemo.collections.asr as nemo_asr
# Build the model
model = nemo_asr.models.EncDecRNNTBPEModel(cfg=config.model, trainer=trainer)
model.summarize();
###Output
_____no_output_____
###Markdown
------We now have a Transducer model ready to be trained! (Optional) Partially loading pre-trained weights from another modelAn interesting point to note about Transducer models - the Acoustic model config (and therefore the Acoustic model itself) can be shared between CTC and Transducer models.This means that we can initialize the weights of a Transducer's Acoustic model with weights from a pre-trained CTC encoder model.------**Note**: This step is optional and not necessary at all to train a Transducer model. Below, we show the steps that we would take if we wanted to do this, however as the loaded model has different kernel sizes compared to the current model, the checkpoint cannot be loaded.
###Code
# Load a small CTC model
# ctc_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_256", map_location='cpu')
###Output
_____no_output_____
###Markdown
------Then load the state dict of the CTC model's encoder into the Transducer model's encoder.
###Code
# <<< NOTE: This is only for demonstration ! >>>
# The cell below will fail because the two models have incompatible kernel sizes in their Conv layers.
# <<< NOTE: Below cell is only shown to illustrate the method >>>
# model.encoder.load_state_dict(ctc_model.encoder.state_dict(), strict=True)
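# (Hypothetical sketch) A generic way to load only the weights whose shapes match, instead of
# failing outright on the mismatched Conv kernels:
# ctc_state = ctc_model.encoder.state_dict()
# own_state = model.encoder.state_dict()
# filtered = {k: v for k, v in ctc_state.items() if k in own_state and v.shape == own_state[k].shape}
# model.encoder.load_state_dict(filtered, strict=False)  # strict=False tolerates the skipped keys
# print(f"Loaded {len(filtered)} of {len(own_state)} encoder tensors")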
###Output
_____no_output_____
###Markdown
Training on AN4Now that the model is ready, we can finally train it!
###Code
# Prepare NeMo's Experiment manager to handle checkpoint saving and logging for us
from nemo.utils import exp_manager
# Environment variable generally used for multi-node multi-gpu training.
# In notebook environments, this flag is unnecessary and can cause logs of multiple training runs to overwrite each other.
os.environ.pop('NEMO_EXPM_VERSION', None)
exp_config = exp_manager.ExpManagerConfig(
exp_dir=f'experiments/',
name=f"Transducer-Model",
checkpoint_callback_params=exp_manager.CallbackParams(
monitor="val_wer",
mode="min",
always_save_nemo=True,
save_best_model=True,
),
)
exp_config = OmegaConf.structured(exp_config)
logdir = exp_manager.exp_manager(trainer, exp_config)
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir /content/experiments/Transducer-Model/
else:
print("To use TensorBoard, please use this notebook in a Google Colab environment.")
# Release resources prior to training
import gc
gc.collect()
if accelerator == 'gpu':
torch.cuda.empty_cache()
# Train the model
trainer.fit(model)
###Output
_____no_output_____
###Markdown
-------Let's check the final performance on the test set.
###Code
trainer.test(model)
###Output
_____no_output_____
###Markdown
------The model should obtain some score between 10-12% WER after 50 epochs of training. Quite a good score for just 50 epochs of training a tiny model! Note that these are greedy scores, yet they are pretty strong for such a short training run.We can further improve these scores by using the internal Prediction network to calculate beam scores. Changing the Decoding StrategyDuring training, for the sake of efficiency, we were using the `greedy_batch` decoding strategy. However, we might want to perform inference with another method - say, beam search.NeMo allows changing the decoding strategy easily after the model has been trained.
###Code
import copy
decoding_config = copy.deepcopy(config.model.decoding)
print(OmegaConf.to_yaml(decoding_config))
# Update the config for the decoding strategy
decoding_config.strategy = "alsd" # Options are `greedy`, `greedy_batch`, `beam`, `tsd` and `alsd`
decoding_config.beam.beam_size = 4 # Increase beam size for better scores, but it will take much longer for transcription !
# Finally update the model's decoding strategy !
model.change_decoding_strategy(decoding_config)
trainer.test(model)
###Output
_____no_output_____
###Markdown
------Here, we improved our scores significantly by using the `Alignment-Length Synchronous Decoding` beam search. Feel free to try the other algorithms and compare the speed-accuracy tradeoff! (Extra) Extracting Transducer Model Alignments Transducers are unique in the sense that for each timestep $t \le T$, they can emit multiple target tokens $u_t$. During training, this is represented as the $T \times U$ joint that maps to the vocabulary $V$. During inference, there is no need to compute the full joint $T \times U$. Instead, after the model predicts the `Transducer Blank` token at the current timestep $t$ while predicting the target token $u_t$, the model will move onto the next acoustic timestep $t + 1$. As such, we can obtain the diagonal alignment of the Transducer model per sample relatively simply.------**Note**: While alignments can be calculated for both greedy and beam search - it is non-trivial to incorporate this alignment information for beam decoding. Therefore NeMo only supports extracting alignments during greedy decoding. -----Restore model to greedy decoding for alignment calculation
###Code
decoding_config.strategy = "greedy_batch"
# Special flag which is generally disabled
# Instruct Greedy Decoders to preserve alignment information during autoregressive decoding
with open_dict(decoding_config):
decoding_config.preserve_alignments = True
decoding_config.fused_batch_size = -1 # temporarily stop fused batch during inference.
model.change_decoding_strategy(decoding_config)
###Output
_____no_output_____
###Markdown
-------Set up a test data loader that we will use to obtain the alignments for a single batch.
###Code
test_dl = model.test_dataloader()
test_dl = iter(test_dl)
batch = next(test_dl)
device = torch.device('cuda' if accelerator == 'gpu' else 'cpu')
def rnnt_alignments(model, batch):
model = model.to(device)
encoded, encoded_len = model.forward(
input_signal=batch[0].to(device), input_signal_length=batch[1].to(device)
)
current_hypotheses = model.decoding.rnnt_decoder_predictions_tensor(
encoded, encoded_len, return_hypotheses=True
)
del encoded, encoded_len
# current_hypotheses is a tuple of:
# 1) best hypothesis for each sample in the batch
# 2) sorted list of hypotheses (if using beam search); None otherwise
return current_hypotheses
# Get a batch of hypotheses, as well as a batch of all obtained hypotheses (if beam search is used)
hypotheses, all_hypotheses = rnnt_alignments(model, batch)
###Output
_____no_output_____
###Markdown
------Select a sample ID from within the batch to observe the alignment information contained in the Hypothesis.
###Code
# Select the sample ID from within the batch
SAMPLE_ID = 0
# Obtain the hypothesis for this sample, as well as some ground truth information about this sample
hypothesis = hypotheses[SAMPLE_ID]
original_sample_len = batch[1][SAMPLE_ID]
ground_truth = batch[2][SAMPLE_ID]
# The Hypothesis object contains a lot of useful information regarding the decoding step.
print(hypothesis)
###Output
_____no_output_____
###Markdown
-------Now, decode the hypothesis and compare it against the ground truth text. Note - this decoded hypothesis is at *sub-word* level for this model. Therefore sub-word tokens such as `_` may be seen here.
###Code
decoded_text = hypothesis.text
decoded_hypothesis = model.decoding.decode_ids_to_tokens(hypothesis.y_sequence.cpu().numpy().tolist())
decoded_ground_truth = model.decoding.tokenizer.ids_to_text(ground_truth.cpu().numpy().tolist())
print("Decoded ground truth :", decoded_ground_truth)
print("Decoded hypothesis :", decoded_text)
print("Decoded hyp tokens :", decoded_hypothesis)
###Output
_____no_output_____
###Markdown
---------Next we print out the 2-d alignment grid of the RNNT model:
###Code
alignments = hypothesis.alignments
# These two values should normally always match
print("Length of alignments (T): ", len(alignments))
print("Length of padded acoustic model after striding : ", int(hypothesis.length))
###Output
_____no_output_____
###Markdown
------Finally, let us calculate the alignment grid. We will de-tokenize the sub-word token if it is a valid index in the vocabulary and use `''` as a placeholder for the `Transducer Blank` token.Note that each `timestep` here is (roughly) $timestep * total\_stride\_of\_model * preprocessor.window\_stride$ seconds timestamp. **Note**: You can modify the value of `config.model.loss.warprnnt_numba_kwargs.fastemit_lambda` prior to training and see an impact on final alignment latency!
###Code
# Compute the alignment grid
for ti in range(len(alignments)):
t_u = []
for uj in range(len(alignments[ti])):
token = alignments[ti][uj]
token = token.to('cpu').numpy().tolist()
decoded_token = model.decoding.decode_ids_to_tokens([token])[0] if token != model.decoding.blank_id else '' # token at index len(vocab) == RNNT blank token
t_u.append(decoded_token)
print(f"Tokens at timestep {ti} = {t_u}")
###Output
_____no_output_____
###Markdown
Automatic Speech Recognition with Transducer ModelsThis notebook is a basic tutorial for creating a Transducer ASR model and then training it on a small dataset (AN4). It includes discussion relevant to reducing memory issues when training such models and demonstrates how to change the decoding strategy after training. Finally, it also provides a brief glimpse of extracting alignment information from a trained Transducer model.As we will see in this tutorial, apart from the differences in the config and the class used to instantiate the model, nearly all steps are precisely similar to any CTC-based model training. Many concepts such as data loader setup, optimization setup, pre-trained checkpoint weight loading will be nearly identical between CTC and Transducer models.In essence, NeMo makes it seamless to take a config for a CTC ASR model, add in a few components related to Transducers (often without any modifications) and use a different class to instantiate a Transducer model!--------**Note**: It is assumed that the previous tutorial - "Intro-to-Transducers" has been reviewed, and there is some familiarity with the config components of transducer models. Preparing the datasetIn this tutorial, we will be utilizing the `AN4`dataset - also known as the Alphanumeric dataset, which was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly. Let's first download the preparation script from NeMo's scripts directory -
###Code
import os
if not os.path.exists("scripts/"):
os.makedirs("scripts")
if not os.path.exists("scripts/process_an4_data.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_an4_data.py
###Output
_____no_output_____
###Markdown
------Download and prepare the two subsets of `AN4`
###Code
import wget
import tarfile
import subprocess
import glob
data_dir = "datasets"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'https://dldata-public.s3.us-east-2.amazonaws.com/an4_sphere.tar.gz' # for the original source, please visit http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
if os.path.exists(f"{data_dir}/an4"):
print("Preparing AN4 dataset ...")
an4_path = f"{data_dir}/"
!python scripts/process_an4_data.py \
--data_root=$an4_path
print("AN4 prepared !")
# Manifest filepaths
TRAIN_MANIFEST = os.path.join(data_dir, "an4", "train_manifest.json")
TEST_MANIFEST = os.path.join(data_dir, "an4", "test_manifest.json")
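# (Illustration) Each manifest line is a JSON record with "audio_filepath", "duration" and "text".
# Assuming the script above completed successfully, the first entry can be inspected like this:
# import json
# print(json.loads(open(TRAIN_MANIFEST).readline()))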
###Output
_____no_output_____
###Markdown
Preparing the tokenizerNow that we have a dataset ready, we need to decide whether to use a character-based model or a sub-word-based model. For completeness' sake, we will use a tokenizer based model so that we can leverage a modern encoder architecture like ContextNet or Conformer-T.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
-----Since the dataset is tiny, we can use a small SentencePiece based tokenizer. We always delete the tokenizer directory so any changes to the manifest files are always replicated in the tokenizer.
###Code
VOCAB_SIZE = 32 # can be any value above 29
TOKENIZER_TYPE = "spe" # can be wpe or spe
SPE_TYPE = "unigram" # can be bpe or unigram
# ------------------------------------------------------------------- #
!rm -r tokenizers/
if not os.path.exists("tokenizers"):
os.makedirs("tokenizers")
!python scripts/process_asr_text_tokenizer.py \
--manifest=$TRAIN_MANIFEST \
--data_root="tokenizers" \
--tokenizer=$TOKENIZER_TYPE \
--spe_type=$SPE_TYPE \
--no_lower_case \
--log \
--vocab_size=$VOCAB_SIZE
# Tokenizer path
if TOKENIZER_TYPE == 'spe':
TOKENIZER = os.path.join("tokenizers", f"tokenizer_spe_{SPE_TYPE}_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "bpe"
else:
TOKENIZER = os.path.join("tokenizers", f"tokenizer_wpe_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "wpe"
###Output
_____no_output_____
###Markdown
Preparing a Transducer ModelNow that we have the dataset and tokenizer prepared, let us begin by setting up the config of the Transducer model! In this tutorial, we will build a slightly modified ContextNet architecture (which is obtained from the paper [ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context](https://arxiv.org/abs/2005.03191)).We can note that many of the steps here are identical to the setup of a CTC model! Prepare the configFor a dataset such as AN4, we do not need such a deep model. In fact, the depth of this model will cause much slower convergence on a small dataset, which would require far too long to train on Colab.In order to speed up training for this demo, we will take only the first five blocks of ContextNet, and discard the rest - and we can do this directly from the config.**Note**: On any realistic dataset (say Librispeech) this step would hurt the model's accuracy significantly. It is being done only to reduce the time spent waiting for training to finish on Colab.
###Code
from omegaconf import OmegaConf, open_dict
config = OmegaConf.load("configs/contextnet_rnnt.yaml")
###Output
_____no_output_____
###Markdown
-----Here, we will slice off the first five blocks from the Jasper block (used to build ContextNet). Setting the config with this subset will create a stride 2x model with just five blocks.We will also explicitly state that the last block dimension must be obtained from `model.model_defaults.enc_hidden` inside the config.
###Code
config.model.encoder.jasper = config.model.encoder.jasper[:5]
config.model.encoder.jasper[-1].filters = '${model.model_defaults.enc_hidden}'
###Output
_____no_output_____
###Markdown
-------Next, set up the data loaders of the config for the ContextNet model.
###Code
# print out the train and validation configs to know what needs to be changed
print(OmegaConf.to_yaml(config.model.train_ds))
###Output
_____no_output_____
###Markdown
-------We can note that the config here is nearly identical to the CTC ASR model configs! So let us take the same steps here to update the configs.
###Code
config.model.train_ds.manifest_filepath = TRAIN_MANIFEST
config.model.validation_ds.manifest_filepath = TEST_MANIFEST
config.model.test_ds.manifest_filepath = TEST_MANIFEST
###Output
_____no_output_____
###Markdown
------Next, we need to setup the tokenizer section of the config
###Code
print(OmegaConf.to_yaml(config.model.tokenizer))
config.model.tokenizer.dir = TOKENIZER
config.model.tokenizer.type = TOKENIZER_TYPE_CFG
###Output
_____no_output_____
###Markdown
------Now, we can update the optimization and augmentation for this dataset in order to converge to some reasonable score within a short training run.
###Code
print(OmegaConf.to_yaml(config.model.optim))
# Finally, let's remove logging of samples and the warmup since the dataset is small (similar to CTC models)
config.model.log_prediction = False
config.model.optim.sched.warmup_steps = None
###Output
_____no_output_____
###Markdown
------Next, we remove the spec augment that is provided by default for ContextNet. While additional augmentation would surely help training, it would require longer training to see significant benefits.
###Code
print(OmegaConf.to_yaml(config.model.spec_augment))
config.model.spec_augment.freq_masks = 0
config.model.spec_augment.time_masks = 0
###Output
_____no_output_____
###Markdown
------... We are now almost done! Most of the updates to a Transducer config are nearly the same as any CTC model. Fused Batch during training and evaluationWe discussed in the previous tutorial (Intro-to-Transducers) the significant memory cost of the Transducer Joint calculation during training. We also discussed that NeMo provides a simple yet effective method to nearly sidestep this limitation. We can now dive deeper into understanding what precisely NeMo's Transducer framework will do to alleviate this memory consumption issue.The following sub-cells are **voluntary** and valuable for understanding the cause, effect, and resolution of memory issues in Transducer models. The content can be skipped if one is familiar with the topic, and it is only required to use the `fused batch step`. Transducer Memory reduction with Fused Batch stepThe following few cells explain why memory is an issue when training Transducer models and how NeMo tackles the issue with its Fused Batch step.The material can be read for a thorough understanding, otherwise, it can be skipped. Diving deeper into the memory costs of Transducer Joint-------One of the significant limitations of Transducers is the exorbitant memory cost of computing the Joint module. The Joint module is comprised of two steps. 1) Projecting the Acoustic and Transcription feature dimensions to some standard hidden dimension (specified by `model.model_defaults.joint_hidden`)2) Projecting this intermediate hidden dimension to the final vocabulary space to obtain the transcription.Take the following example.**BS**=32 ; **T** (after **2x** stride) = 800, **U** (with character encoding) = 400-450 tokens, Vocabulary size **V** = 28 (26 alphabet chars, space and apostrophe). Let the hidden dimension of the Joint model be 640 (Most Google Transducer papers use hidden dimension of 640).$ Memory \, (Hidden, \, gb) = 32 \times 800 \times 450 \times 640 \times 4 = 29.49 $ gigabytes (4 bytes per float). $ Memory \, (Joint, \, gb) = 32 \times 800 \times 450 \times 28 \times 4 = 1.290 $ gigabytes (4 bytes per float)-----**NOTE**: This is just for the forward pass! We need to double this memory to store gradients! This much memory is also just for the Joint model **alone**. Far more memory is required for the Prediction model as well as the large Acoustic model itself and its gradients!Even with mixed precision, that's $\sim 30$ GB of GPU RAM for just 1 part of the network + its gradients.--------- Simple methods to reduce memory consumption------The easiest way to reduce memory consumption is to perform more downsampling in the acoustic model and use sub-word tokenization of the text to reduce the length of the target sequence.**BS**=32 ; **T** (after **8x** stride) = 200, **U** (with sub-word encoding) = 100-180 tokens, Vocabulary size **V** = 1024.$ Memory \, (Hidden, \, gb) = 32 \times 200 \times 150 \times 640 \times 4 = 2.45 $ gigabytes (4 bytes per float).$ Memory \, (Joint, \, gb) = 32 \times 200 \times 150 \times 1024 \times 4 = 3.93 $ gigabytes (4 bytes per float)-----Using Automatic Mixed Precision, we expend just around 6-7 GB of GPU RAM on the Joint + its gradient.The above memory cost is much more tractable - but we generally want larger and larger acoustic models. It is consistently the easiest way to improve transcription accuracy. So that means on a limited 32 GB GPU, we have to partition 7 GB just for the Joint and remaining memory allocated between Transcription + Acoustic Models. 
Fused Transcription-Joint-Loss-WER (also called Batch Splitting)----------The fundamental problem is that the joint tensor grows in size when `[T x U]` grows in size. This growth in memory cost is due to many reasons - either by model construction (downsampling) or the choice of dataset preprocessing (character tokenization vs. sub-word tokenization).Another dimension that NeMo can control is **batch**. Due to how we batch our samples, small and large samples all get clumped together into a single batch. So even though the individual samples are not all as long as the maximum length of T and U in that batch, when a batch of such samples is constructed, it will consume a significant amount of memory for the sake of compute efficiency.So as is always the case - **trade-off compute speed for memory savings**.------The fused operation goes as follows : 1) Forward the entire acoustic model in a single pass. (Use global batch size here for acoustic model - found in `model.*_ds.batch_size`)2) Split the Acoustic Model's logits by `fused_batch_size` and loop over these sub-batches.3) Construct a sub-batch of same `fused_batch_size` for the Prediction model. Now the target sequence length is $U_{sub-batch} < U$. 4) Feed this $U_{sub-batch}$ into the Joint model, along with a sub-batch from the Acoustic model (with $T_{sub-batch} < T$). Remember, we only have to slice off a part of the acoustic model here since we have the full batch of samples $(B, T, D)$ from the acoustic model.5) Performing steps (3) and (4) yields $T_{sub-batch}$ and $U_{sub-batch}$. Perform sub-batch joint step - costing an intermediate $(B, T_{sub-batch}, U_{sub-batch}, V)$ in memory.6) Compute loss on sub-batch and preserve in a list to be later concatenated. 7) Compute sub-batch metrics (such as Character / Word Error Rate) using the above Joint tensor and sub-batch of ground truth labels. Preserve the scores to be averaged across the entire batch later.8) Delete the sub-batch joint matrix $(B, T_{sub-batch}, U_{sub-batch}, V)$. Only gradients from .backward() are preserved now in the computation graph.9) Repeat steps (3) - (8) until all sub-batches are consumed.10) Cleanup step. Compute full batch WER and log. Concatenate loss list and pass to PTL to compute the equivalent of the original (full batch) Joint step. Delete ancillary objects necessary for sub-batching. Setting up Fused Batch step in a Transducer ConfigAfter all that discussion above, let us look at how to enable that entire pipeline in NeMo.As we can note below, it takes precisely two changes in the config to enable the fused batch step:
###Code
print(OmegaConf.to_yaml(config.model.joint))
# Two lines to enable the fused batch step
config.model.joint.fuse_loss_wer = True
config.model.joint.fused_batch_size = 16 # this can be any value (preferably less than model.*_ds.batch_size)
# We will also reduce the hidden dimension of the joint and the prediction networks to preserve some memory
config.model.model_defaults.pred_hidden = 64
config.model.model_defaults.joint_hidden = 64
###Output
_____no_output_____
###Markdown
--------Finally, since the dataset is tiny, we do not need an enormous model (the default is roughly 40 M parameters!).
###Code
# Use just 128 filters across the model to speed up training and reduce parameter count
config.model.model_defaults.filters = 128
###Output
_____no_output_____
###Markdown
Initialize a Transducer ASR ModelFinally, let us create a Transducer model, which is as easy as changing a line of import if you already have a script to create CTC models. We will use a small model since the dataset is just 5 hours of speech. ------Setup a Pytorch Lightning Trainer:
###Code
import torch
from pytorch_lightning import Trainer
if torch.cuda.is_available():
accelerator = 'gpu'
else:
accelerator = 'cpu'
EPOCHS = 50
# Initialize a Trainer for the Transducer model
trainer = Trainer(devices=1, accelerator=accelerator, max_epochs=EPOCHS,
enable_checkpointing=False, logger=False,
log_every_n_steps=5, check_val_every_n_epoch=10)
# Import the Transducer Model
import nemo.collections.asr as nemo_asr
# Build the model
model = nemo_asr.models.EncDecRNNTBPEModel(cfg=config.model, trainer=trainer)
model.summarize();
###Output
_____no_output_____
###Markdown
------We now have a Transducer model ready to be trained! (Optional) Partially loading pre-trained weights from another modelAn interesting point to note about Transducer models - the Acoustic model config (and therefore the Acoustic model itself) can be shared between CTC and Transducer models.This means that we can initialize the weights of a Transducer's Acoustic model with weights from a pre-trained CTC encoder model.------**Note**: This step is optional and not necessary at all to train a Transducer model. Below, we show the steps that we would take if we wanted to do this, however as the loaded model has different kernel sizes compared to the current model, the checkpoint cannot be loaded.
###Code
# Load a small CTC model
# ctc_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_256", map_location='cpu')
###Output
_____no_output_____
###Markdown
------Then load the state dict of the CTC model's encoder into the Transducer model's encoder.
###Code
# <<< NOTE: This is only for demonstration ! >>>
# The cell below will fail because the two models have incompatible kernel sizes in their Conv layers.
# <<< NOTE: Below cell is only shown to illustrate the method >>>
# model.encoder.load_state_dict(ctc_model.encoder.state_dict(), strict=True)
###Output
_____no_output_____
###Markdown
Training on AN4Now that the model is ready, we can finally train it!
###Code
# Prepare NeMo's Experiment manager to handle checkpoint saving and logging for us
from nemo.utils import exp_manager
# Environment variable generally used for multi-node multi-gpu training.
# In notebook environments, this flag is unnecessary and can cause logs of multiple training runs to overwrite each other.
os.environ.pop('NEMO_EXPM_VERSION', None)
exp_config = exp_manager.ExpManagerConfig(
exp_dir=f'experiments/',
name=f"Transducer-Model",
checkpoint_callback_params=exp_manager.CallbackParams(
monitor="val_wer",
mode="min",
always_save_nemo=True,
save_best_model=True,
),
)
exp_config = OmegaConf.structured(exp_config)
logdir = exp_manager.exp_manager(trainer, exp_config)
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir /content/experiments/Transducer-Model/
else:
print("To use TensorBoard, please use this notebook in a Google Colab environment.")
# Release resources prior to training
import gc
gc.collect()
if accelerator == 'gpu':
torch.cuda.empty_cache()
# Train the model
trainer.fit(model)
###Output
_____no_output_____
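###Markdown
-------(Optional) Once training finishes, the model can be kept around for later use. Note that the experiment manager above already writes a `.nemo` file because `always_save_nemo=True` was set. A minimal sketch, assuming the standard NeMo save / restore helpers on the model class (the file name is an arbitrary example):
###Code
# Save the trained Transducer model as a .nemo archive, then restore it later
# model.save_to("transducer_an4.nemo")
# restored_model = nemo_asr.models.EncDecRNNTBPEModel.restore_from("transducer_an4.nemo")
###Output
_____no_output_____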
###Markdown
-------Let's check the final performance on the test set.
###Code
trainer.test(model)
###Output
_____no_output_____
###Markdown
------The model should obtain some score between 10-12% WER after 50 epochs of training. Quite a good score for just 50 epochs of training a tiny model! Note that these are greedy scores, yet they are pretty strong for such a short training run.We can further improve these scores by using the internal Prediction network to calculate beam scores. Changing the Decoding StrategyDuring training, for the sake of efficiency, we were using the `greedy_batch` decoding strategy. However, we might want to perform inference with another method - say, beam search.NeMo allows changing the decoding strategy easily after the model has been trained.
###Code
import copy
decoding_config = copy.deepcopy(config.model.decoding)
print(OmegaConf.to_yaml(decoding_config))
# Update the config for the decoding strategy
decoding_config.strategy = "alsd" # Options are `greedy`, `greedy_batch`, `beam`, `tsd` and `alsd`
decoding_config.beam.beam_size = 4 # Increase beam size for better scores, but it will take much longer for transcription !
# Finally update the model's decoding strategy !
model.change_decoding_strategy(decoding_config)
trainer.test(model)
###Output
_____no_output_____
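###Markdown
-------If you want to quantify the speed / accuracy trade-off more systematically, a simple sweep over strategies (a sketch that only reuses the calls shown above, left commented out because each evaluation takes a while) could look like this:
###Code
# Optional: evaluate several decoding strategies back to back
# for strategy in ["greedy", "greedy_batch", "alsd"]:
# decoding_config.strategy = strategy
# model.change_decoding_strategy(decoding_config)
# print(f"--- strategy = {strategy} ---")
# trainer.test(model)
###Output
_____no_output_____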
###Markdown
------Here, we improved our scores significantly by using the `Alignment-Length Synchronous Decoding` beam search. Feel free to try the other algorithms and compare the speed-accuracy tradeoff! (Extra) Extracting Transducer Model Alignments Transducers are unique in the sense that for each timestep $t \le T$, they can emit multiple target tokens $u_t$. During training, this is represented as the $T \times U$ joint that maps to the vocabulary $V$. During inference, there is no need to compute the full joint $T \times U$. Instead, after the model predicts the `Transducer Blank` token at the current timestep $t$ while predicting the target token $u_t$, the model will move onto the next acoustic timestep $t + 1$. As such, we can obtain the diagonal alignment of the Transducer model per sample relatively simply.------**Note**: While alignments can be calculated for both greedy and beam search - it is non-trivial to incorporate this alignment information for beam decoding. Therefore NeMo only supports extracting alignments during greedy decoding. -----Restore model to greedy decoding for alignment calculation
###Code
decoding_config.strategy = "greedy_batch"
# Special flag which is generally disabled
# Instruct Greedy Decoders to preserve alignment information during autoregressive decoding
with open_dict(decoding_config):
decoding_config.preserve_alignments = True
decoding_config.fused_batch_size = -1 # temporarily stop fused batch during inference.
model.change_decoding_strategy(decoding_config)
###Output
_____no_output_____
###Markdown
-------Set up a test data loader that we will use to obtain the alignments for a single batch.
###Code
test_dl = model.test_dataloader()
test_dl = iter(test_dl)
batch = next(test_dl)
device = torch.device('cuda' if accelerator == 'gpu' else 'cpu')
def rnnt_alignments(model, batch):
model = model.to(device)
encoded, encoded_len = model.forward(
input_signal=batch[0].to(device), input_signal_length=batch[1].to(device)
)
current_hypotheses = model.decoding.rnnt_decoder_predictions_tensor(
encoded, encoded_len, return_hypotheses=True
)
del encoded, encoded_len
# current hypothesis is a tuple of
# 1) best hypothesis
# 2) Sorted list of hypothesis (if using beam search); None otherwise
return current_hypotheses
# Get a batch of hypotheses, as well as a batch of all obtained hypotheses (if beam search is used)
hypotheses, all_hypotheses = rnnt_alignments(model, batch)
###Output
_____no_output_____
###Markdown
------Select a sample ID from within the batch to observe the alignment information contained in the Hypothesis.
###Code
# Select the sample ID from within the batch
SAMPLE_ID = 0
# Obtain the hypothesis for this sample, as well as some ground truth information about this sample
hypothesis = hypotheses[SAMPLE_ID]
original_sample_len = batch[1][SAMPLE_ID]
ground_truth = batch[2][SAMPLE_ID]
# The Hypothesis object contains a lot of useful information regarding the decoding step.
print(hypothesis)
###Output
_____no_output_____
###Markdown
-------Now, decode the hypothesis and compare it against the ground truth text. Note - this decoded hypothesis is at *sub-word* level for this model. Therefore sub-word tokens such as `_` may be seen here.
###Code
decoded_text = hypothesis.text
decoded_hypothesis = model.decoding.decode_ids_to_tokens(hypothesis.y_sequence.cpu().numpy().tolist())
decoded_ground_truth = model.decoding.tokenizer.ids_to_text(ground_truth.cpu().numpy().tolist())
print("Decoded ground truth :", decoded_ground_truth)
print("Decoded hypothesis :", decoded_text)
print("Decoded hyp tokens :", decoded_hypothesis)
###Output
_____no_output_____
###Markdown
---------Next we print out the 2-d alignment grid of the RNNT model:
###Code
alignments = hypothesis.alignments
# These two values should normally always match
print("Length of alignments (T): ", len(alignments))
print("Length of padded acoustic model after striding : ", int(hypothesis.length))
###Output
_____no_output_____
###Markdown
------Finally, let us calculate the alignment grid. We will de-tokenize the sub-word token if it is a valid index in the vocabulary and use `''` as a placeholder for the `Transducer Blank` token.Note that each `timestep` here is (roughly) $timestep * total\_stride\_of\_model * preprocessor.window\_stride$ seconds timestamp. **Note**: You can modify the value of `config.model.loss.warprnnt_numba_kwargs.fastemit_lambda` prior to training and see an impact on final alignment latency!
###Code
# Compute the alignment grid
for ti in range(len(alignments)):
t_u = []
for uj in range(len(alignments[ti])):
token = alignments[ti][uj]
token = token.to('cpu').numpy().tolist()
decoded_token = model.decoding.decode_ids_to_tokens([token])[0] if token != model.decoding.blank_id else '' # token at index len(vocab) == RNNT blank token
t_u.append(decoded_token)
print(f"Tokens at timestep {ti} = {t_u}")
###Output
_____no_output_____
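###Markdown
-------As a rough illustration of the timestamp formula above, the loop below converts a timestep index into an approximate time in seconds. Both constants are assumptions: the truncated ContextNet built earlier strides by 2x, and a typical 10 ms preprocessor window stride is assumed (check `config.model.preprocessor` for the actual value).
###Code
# Approximate wall-clock position of each alignment timestep (illustrative only)
total_stride_of_model = 2 # assumed: the 5-block ContextNet above downsamples by 2x
window_stride_sec = 0.01 # assumed: a typical 10 ms preprocessor window stride
for ti in range(len(alignments)):
print(f"timestep {ti:3d} ~ {ti * total_stride_of_model * window_stride_sec:.2f} s")
###Output
_____no_output_____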
###Markdown
Automatic Speech Recognition with Transducer ModelsThis notebook is a basic tutorial for creating a Transducer ASR model and then training it on a small dataset (AN4). It includes discussion relevant to reducing memory issues when training such models and demonstrates how to change the decoding strategy after training. Finally, it also provides a brief glimpse of extracting alignment information from a trained Transducer model.As we will see in this tutorial, apart from the differences in the config and the class used to instantiate the model, nearly all steps are precisely similar to any CTC-based model training. Many concepts such as data loader setup, optimization setup, pre-trained checkpoint weight loading will be nearly identical between CTC and Transducer models.In essence, NeMo makes it seamless to take a config for a CTC ASR model, add in a few components related to Transducers (often without any modifications) and use a different class to instantiate a Transducer model!--------**Note**: It is assumed that the previous tutorial - "Intro-to-Transducers" has been reviewed, and there is some familiarity with the config components of transducer models. Preparing the datasetIn this tutorial, we will be utilizing the `AN4`dataset - also known as the Alphanumeric dataset, which was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly. Let's first download the preparation script from NeMo's scripts directory -
###Code
import os
if not os.path.exists("scripts/"):
os.makedirs("scripts")
if not os.path.exists("scripts/process_an4_data.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_an4_data.py
###Output
_____no_output_____
###Markdown
------Download and prepare the two subsets of `AN4`
###Code
import wget
import tarfile
import subprocess
import glob
data_dir = "datasets"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
if os.path.exists(f"{data_dir}/an4"):
print("Preparing AN4 dataset ...")
an4_path = f"{data_dir}/"
!python scripts/process_an4_data.py \
--data_root=$an4_path
print("AN4 prepared !")
# Manifest filepaths
TRAIN_MANIFEST = os.path.join(data_dir, "an4", "train_manifest.json")
TEST_MANIFEST = os.path.join(data_dir, "an4", "test_manifest.json")
###Output
_____no_output_____
###Markdown
Preparing the tokenizerNow that we have a dataset ready, we need to decide whether to use a character-based model or a sub-word-based model. For completeness' sake, we will use a tokenizer based model so that we can leverage a modern encoder architecture like ContextNet or Conformer-T.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
-----Since the dataset is tiny, we can use a small SentencePiece based tokenizer. We always delete the tokenizer directory so any changes to the manifest files are always replicated in the tokenizer.
###Code
VOCAB_SIZE = 32 # can be any value above 29
TOKENIZER_TYPE = "spe" # can be wpe or spe
SPE_TYPE = "unigram" # can be bpe or unigram
# ------------------------------------------------------------------- #
!rm -r tokenizers/
if not os.path.exists("tokenizers"):
os.makedirs("tokenizers")
!python scripts/process_asr_text_tokenizer.py \
--manifest=$TRAIN_MANIFEST \
--data_root="tokenizers" \
--tokenizer=$TOKENIZER_TYPE \
--spe_type=$SPE_TYPE \
--no_lower_case \
--log \
--vocab_size=$VOCAB_SIZE
# Tokenizer path
if TOKENIZER_TYPE == 'spe':
TOKENIZER = os.path.join("tokenizers", f"tokenizer_spe_{SPE_TYPE}_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "bpe"
else:
TOKENIZER = os.path.join("tokenizers", f"tokenizer_wpe_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "wpe"
###Output
_____no_output_____
###Markdown
Preparing a Transducer ModelNow that we have the dataset and tokenizer prepared, let us begin by setting up the config of the Transducer model! In this tutorial, we will build a slightly modified ContextNet architecture (which is obtained from the paper [ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context](https://arxiv.org/abs/2005.03191)).We can note that many of the steps here are identical to the setup of a CTC model! Prepare the configFor a dataset such as AN4, we do not need such a deep model. In fact, the depth of this model will cause much slower convergence on a small dataset, which would require far too long to train on Colab.In order to speed up training for this demo, we will take only the first five blocks of ContextNet, and discard the rest - and we can do this directly from the config.**Note**: On any realistic dataset (say Librispeech) this step would hurt the model's accuracy significantly. It is being done only to reduce the time spent waiting for training to finish on Colab.
###Code
from omegaconf import OmegaConf, open_dict
config = OmegaConf.load("configs/contextnet_rnnt.yaml")
###Output
_____no_output_____
###Markdown
-----Here, we will slice off the first five blocks from the Jasper block (used to build ContextNet). Setting the config with this subset will create a stride 2x model with just five blocks.We will also explicitly state that the last block dimension must be obtained from `model.model_defaults.enc_hidden` inside the config.
###Code
config.model.encoder.jasper = config.model.encoder.jasper[:5]
config.model.encoder.jasper[-1].filters = '${model.model_defaults.enc_hidden}'
###Output
_____no_output_____
###Markdown
-------Next, set up the data loaders of the config for the ContextNet model.
###Code
# print out the train and validation configs to know what needs to be changed
print(OmegaConf.to_yaml(config.model.train_ds))
###Output
_____no_output_____
###Markdown
-------We can note that the config here is nearly identical to the CTC ASR model configs! So let us take the same steps here to update the configs.
###Code
config.model.train_ds.manifest_filepath = TRAIN_MANIFEST
config.model.validation_ds.manifest_filepath = TEST_MANIFEST
config.model.test_ds.manifest_filepath = TEST_MANIFEST
###Output
_____no_output_____
###Markdown
------Next, we need to setup the tokenizer section of the config
###Code
print(OmegaConf.to_yaml(config.model.tokenizer))
config.model.tokenizer.dir = TOKENIZER
config.model.tokenizer.type = TOKENIZER_TYPE_CFG
###Output
_____no_output_____
###Markdown
------Now, we can update the optimization and augmentation for this dataset in order to converge to some reasonable score within a short training run.
###Code
print(OmegaConf.to_yaml(config.model.optim))
# Finally, let's remove logging of samples and the warmup since the dataset is small (similar to CTC models)
config.model.log_prediction = False
config.model.optim.sched.warmup_steps = None
###Output
_____no_output_____
###Markdown
------Next, we remove the spec augment that is provided by default for ContextNet. While additional augmentation would surely help training, it would require longer training to see significant benefits.
###Code
print(OmegaConf.to_yaml(config.model.spec_augment))
config.model.spec_augment.freq_masks = 0
config.model.spec_augment.time_masks = 0
###Output
_____no_output_____
###Markdown
------... We are now almost done! Most of the updates to a Transducer config are nearly the same as any CTC model. Fused Batch during training and evaluationWe discussed in the previous tutorial (Intro-to-Transducers) the significant memory cost of the Transducer Joint calculation during training. We also discussed that NeMo provides a simple yet effective method to nearly sidestep this limitation. We can now dive deeper into understanding what precisely NeMo's Transducer framework will do to alleviate this memory consumption issue.The following sub-cells are **voluntary** and valuable for understanding the cause, effect, and resolution of memory issues in Transducer models. The content can be skipped if one is familiar with the topic, and it is only required to use the `fused batch step`. Transducer Memory reduction with Fused Batch stepThe following few cells explain why memory is an issue when training Transducer models and how NeMo tackles the issue with its Fused Batch step.The material can be read for a thorough understanding, otherwise, it can be skipped. Diving deeper into the memory costs of Transducer Joint-------One of the significant limitations of Transducers is the exorbitant memory cost of computing the Joint module. The Joint module is comprised of two steps. 1) Projecting the Acoustic and Transcription feature dimensions to some standard hidden dimension (specified by `model.model_defaults.joint_hidden`)2) Projecting this intermediate hidden dimension to the final vocabulary space to obtain the transcription.Take the following example.**BS**=32 ; **T** (after **2x** stride) = 800, **U** (with character encoding) = 400-450 tokens, Vocabulary size **V** = 28 (26 alphabet chars, space and apostrophe). Let the hidden dimension of the Joint model be 640 (Most Google Transducer papers use hidden dimension of 640).$ Memory \, (Hidden, \, gb) = 32 \times 800 \times 450 \times 640 \times 4 = 29.49 $ gigabytes (4 bytes per float). $ Memory \, (Joint, \, gb) = 32 \times 800 \times 450 \times 28 \times 4 = 1.290 $ gigabytes (4 bytes per float)-----**NOTE**: This is just for the forward pass! We need to double this memory to store gradients! This much memory is also just for the Joint model **alone**. Far more memory is required for the Prediction model as well as the large Acoustic model itself and its gradients!Even with mixed precision, that's $\sim 30$ GB of GPU RAM for just 1 part of the network + its gradients.--------- Simple methods to reduce memory consumption------The easiest way to reduce memory consumption is to perform more downsampling in the acoustic model and use sub-word tokenization of the text to reduce the length of the target sequence.**BS**=32 ; **T** (after **8x** stride) = 200, **U** (with sub-word encoding) = 100-180 tokens, Vocabulary size **V** = 1024.$ Memory \, (Hidden, \, gb) = 32 \times 200 \times 150 \times 640 \times 4 = 2.45 $ gigabytes (4 bytes per float).$ Memory \, (Joint, \, gb) = 32 \times 200 \times 150 \times 1024 \times 4 = 3.93 $ gigabytes (4 bytes per float)-----Using Automatic Mixed Precision, we expend just around 6-7 GB of GPU RAM on the Joint + its gradient.The above memory cost is much more tractable - but we generally want larger and larger acoustic models. It is consistently the easiest way to improve transcription accuracy. So that means on a limited 32 GB GPU, we have to partition 7 GB just for the Joint and remaining memory allocated between Transcription + Acoustic Models. 
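A quick sanity check of the arithmetic above (the sizes are the assumed example values, 4 bytes per float):
###Code
# Back-of-the-envelope sizes of the Joint intermediate tensors discussed above
def joint_memory_gb(batch, t, u, dim, bytes_per_float=4):
return batch * t * u * dim * bytes_per_float / 1e9
print("2x stride, char tokens, hidden 640 :", round(joint_memory_gb(32, 800, 450, 640), 2), "GB")
print("2x stride, char tokens, vocab 28 :", round(joint_memory_gb(32, 800, 450, 28), 2), "GB")
print("8x stride, sub-words, hidden 640 :", round(joint_memory_gb(32, 200, 150, 640), 2), "GB")
print("8x stride, sub-words, vocab 1024 :", round(joint_memory_gb(32, 200, 150, 1024), 2), "GB")
###Output
_____no_output_____
###Markdown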
Fused Transcription-Joint-Loss-WER (also called Batch Splitting)----------The fundamental problem is that the joint tensor grows in size when `[T x U]` grows in size. This growth in memory cost is due to many reasons - either by model construction (downsampling) or the choice of dataset preprocessing (character tokenization vs. sub-word tokenization).Another dimension that NeMo can control is **batch**. Due to how we batch our samples, small and large samples all get clumped together into a single batch. So even though the individual samples are not all as long as the maximum length of T and U in that batch, when a batch of such samples is constructed, it will consume a significant amount of memory for the sake of compute efficiency.So as is always the case - **trade-off compute speed for memory savings**.------The fused operation goes as follows : 1) Forward the entire acoustic model in a single pass. (Use global batch size here for acoustic model - found in `model.*_ds.batch_size`)2) Split the Acoustic Model's logits by `fused_batch_size` and loop over these sub-batches.3) Construct a sub-batch of same `fused_batch_size` for the Prediction model. Now the target sequence length is $U_{sub-batch} < U$. 4) Feed this $U_{sub-batch}$ into the Joint model, along with a sub-batch from the Acoustic model (with $T_{sub-batch} < T$). Remember, we only have to slice off a part of the acoustic model here since we have the full batch of samples $(B, T, D)$ from the acoustic model.5) Performing steps (3) and (4) yields $T_{sub-batch}$ and $U_{sub-batch}$. Perform sub-batch joint step - costing an intermediate $(B, T_{sub-batch}, U_{sub-batch}, V)$ in memory.6) Compute loss on sub-batch and preserve in a list to be later concatenated. 7) Compute sub-batch metrics (such as Character / Word Error Rate) using the above Joint tensor and sub-batch of ground truth labels. Preserve the scores to be averaged across the entire batch later.8) Delete the sub-batch joint matrix $(B, T_{sub-batch}, U_{sub-batch}, V)$. Only gradients from .backward() are preserved now in the computation graph.9) Repeat steps (3) - (8) until all sub-batches are consumed.10) Cleanup step. Compute full batch WER and log. Concatenate loss list and pass to PTL to compute the equivalent of the original (full batch) Joint step. Delete ancillary objects necessary for sub-batching. Setting up Fused Batch step in a Transducer ConfigAfter all that discussion above, let us look at how to enable that entire pipeline in NeMo.As we can note below, it takes precisely two changes in the config to enable the fused batch step:
###Code
print(OmegaConf.to_yaml(config.model.joint))
# Two lines to enable the fused batch step
config.model.joint.experimental_fuse_loss_wer = True
config.model.joint.fused_batch_size = 16 # this can be any value (preferably less than model.*_ds.batch_size)
# We will also reduce the hidden dimension of the joint and the prediction networks to preserve some memory
config.model.model_defaults.pred_hidden = 64
config.model.model_defaults.joint_hidden = 64
###Output
_____no_output_____
###Markdown
--------Finally, since the dataset is tiny, we do not need an enormous model (the default is roughly 40 M parameters!).
###Code
# Use just 128 filters across the model to speed up training and reduce parameter count
config.model.model_defaults.filters = 128
###Output
_____no_output_____
###Markdown
Initialize a Transducer ASR ModelFinally, let us create a Transducer model, which is as easy as changing a line of import if you already have a script to create CTC models. We will use a small model since the dataset is just 5 hours of speech. ------Setup a Pytorch Lightning Trainer:
###Code
import torch
from pytorch_lightning import Trainer
if torch.cuda.is_available():
gpus = 1
else:
gpus = 0
EPOCHS = 50
# Initialize a Trainer for the Transducer model
trainer = Trainer(gpus=gpus, max_epochs=EPOCHS,
checkpoint_callback=False, logger=False,
log_every_n_steps=5, check_val_every_n_epoch=10)
# Import the Transducer Model
import nemo.collections.asr as nemo_asr
# Build the model
model = nemo_asr.models.EncDecRNNTBPEModel(cfg=config.model, trainer=trainer)
model.summarize();
###Output
_____no_output_____
###Markdown
------We now have a Transducer model ready to be trained! (Optional) Partially loading pre-trained weights from another modelAn interesting point to note about Transducer models - the Acoustic model config (and therefore the Acoustic model itself) can be shared between CTC and Transducer models.This means that we can initialize the weights of a Transducer's Acoustic model with weights from a pre-trained CTC encoder model.------**Note**: This step is optional and not necessary at all to train a Transducer model. Below, we show the steps that we would take if we wanted to do this, however as the loaded model has different kernel sizes compared to the current model, the checkpoint cannot be loaded.
###Code
# Load a small CTC model
# ctc_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_256", map_location='cpu')
###Output
_____no_output_____
###Markdown
------Then load the state dict of the CTC model's encoder into the Transducer model's encoder.
###Code
# <<< NOTE: This is only for demonstration ! >>>
# Below cell will fail because the two models have incompatible kernel sizes in their Conv layers.
# <<< NOTE: Below cell is only shown to illustrate the method >>>
# model.encoder.load_state_dict(ctc_model.encoder.state_dict(), strict=True)
###Output
_____no_output_____
###Markdown
Training on AN4Now that the model is ready, we can finally train it!
###Code
# Prepare NeMo's Experiment manager to handle checkpoint saving and logging for us
from nemo.utils import exp_manager
# Environment variable generally used for multi-node multi-gpu training.
# In notebook environments, this flag is unnecessary and can cause logs of multiple training runs to overwrite each other.
os.environ.pop('NEMO_EXPM_VERSION', None)
exp_config = exp_manager.ExpManagerConfig(
exp_dir=f'experiments/',
name=f"Transducer-Model",
checkpoint_callback_params=exp_manager.CallbackParams(
monitor="val_wer",
mode="min",
always_save_nemo=True,
save_best_model=True,
),
)
exp_config = OmegaConf.structured(exp_config)
logdir = exp_manager.exp_manager(trainer, exp_config)
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir /content/experiments/Transducer-Model/
else:
print("To use TensorBoard, please use this notebook in a Google Colab environment.")
# Release resources prior to training
import gc
gc.collect()
if gpus > 0:
torch.cuda.empty_cache()
# Train the model
%%time
trainer.fit(model)
###Output
_____no_output_____
###Markdown
-------Let's check the final performance on the test set.
###Code
%%time
trainer.test(model)
###Output
_____no_output_____
###Markdown
------The model should obtain some score between 10-12% WER after 50 epochs of training. Quite a good score for just 50 epochs of training a tiny model! Note that these are greedy scores, yet they are pretty strong for such a short training run.We can further improve these scores by using the internal Prediction network to calculate beam scores. Changing the Decoding StrategyDuring training, for the sake of efficiency, we were using the `greedy_batch` decoding strategy. However, we might want to perform inference with another method - say, beam search.NeMo allows changing the decoding strategy easily after the model has been trained.
###Code
import copy
decoding_config = copy.deepcopy(config.model.decoding)
print(OmegaConf.to_yaml(decoding_config))
# Update the config for the decoding strategy
decoding_config.strategy = "alsd" # Options are `greedy`, `greedy_batch`, `beam`, `tsd` and `alsd`
decoding_config.beam.beam_size = 4 # Increase beam size for better scores, but it will take much longer for transcription !
# Finally update the model's decoding strategy !
model.change_decoding_strategy(decoding_config)
%%time
trainer.test(model)
###Output
_____no_output_____
###Markdown
------Here, we improved our scores significantly by using the `Alignment-Length Synchronous Decoding` beam search. Feel free to try the other algorithms and compare the speed-accuracy tradeoff! (Extra) Extracting Transducer Model Alignments Transducers are unique in the sense that for each timestep $t \le T$, they can emit multiple target tokens $u_t$. During training, this is represented as the $T \times U$ joint that maps to the vocabulary $V$. During inference, there is no need to compute the full joint $T \times U$. Instead, after the model predicts the `Transducer Blank` token at the current timestep $t$ while predicting the target token $u_t$, the model will move onto the next acoustic timestep $t + 1$. As such, we can obtain the diagonal alignment of the Transducer model per sample relatively simply.------**Note**: While alignments can be calculated for both greedy and beam search - it is non-trivial to incorporate this alignment information for beam decoding. Therefore NeMo only supports extracting alignments during greedy decoding. -----Restore model to greedy decoding for alignment calculation
###Code
decoding_config.strategy = "greedy_batch"
# Special flag which is generally disabled
# Instruct Greedy Decoders to preserve alignment information during autoregressive decoding
with open_dict(decoding_config):
decoding_config.preserve_alignments = True
model.change_decoding_strategy(decoding_config)
###Output
_____no_output_____
###Markdown
-------Set up a test data loader that we will use to obtain the alignments for a single batch.
###Code
test_dl = model.test_dataloader()
test_dl = iter(test_dl)
batch = next(test_dl)
device = torch.device('cuda' if gpus > 0 else 'cpu')
def rnnt_alignments(model, batch):
model = model.to(device)
encoded, encoded_len = model.forward(
input_signal=batch[0].to(device), input_signal_length=batch[1].to(device)
)
current_hypotheses = model.decoding.rnnt_decoder_predictions_tensor(
encoded, encoded_len, return_hypotheses=True
)
del encoded, encoded_len
# current hypothesis is a tuple of
# 1) best hypothesis
# 2) Sorted list of hypothesis (if using beam search); None otherwise
return current_hypotheses
# Get a batch of hypotheses, as well as a batch of all obtained hypotheses (if beam search is used)
hypotheses, all_hypotheses = rnnt_alignments(model, batch)
###Output
_____no_output_____
###Markdown
------Select a sample ID from within the batch to observe the alignment information contained in the Hypothesis.
###Code
# Select the sample ID from within the batch
SAMPLE_ID = 0
# Obtain the hypothesis for this sample, as well as some ground truth information about this sample
hypothesis = hypotheses[SAMPLE_ID]
original_sample_len = batch[1][SAMPLE_ID]
ground_truth = batch[2][SAMPLE_ID]
# The Hypothesis object contains a lot of useful information regarding the decoding step.
print(hypothesis)
###Output
_____no_output_____
###Markdown
-------Now, decode the hypothesis and compare it against the ground truth text. Note - this decoded hypothesis is at *sub-word* level for this model. Therefore sub-word tokens such as `_` may be seen here.
###Code
decoded_text = hypothesis.text
decoded_hypothesis = model.decoding.decode_ids_to_tokens(hypothesis.y_sequence.cpu().numpy().tolist())
decoded_ground_truth = model.decoding.tokenizer.ids_to_text(ground_truth.cpu().numpy().tolist())
print("Decoded ground truth :", decoded_ground_truth)
print("Decoded hypothesis :", decoded_text)
print("Decoded hyp tokens :", decoded_hypothesis)
###Output
_____no_output_____
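###Markdown
-------(Optional) We can also score this single sample directly. This is a hedged sketch - it assumes NeMo's `word_error_rate` helper is importable from the ASR metrics module, and is left commented out in case the import path differs in your NeMo version:
###Code
# from nemo.collections.asr.metrics.wer import word_error_rate
# sample_wer = word_error_rate(hypotheses=[decoded_text], references=[decoded_ground_truth])
# print(f"WER for sample {SAMPLE_ID}: {sample_wer:.2%}")
###Output
_____no_output_____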
###Markdown
---------Next we print out the 2-d alignment grid of the RNNT model:
###Code
alignments = hypothesis.alignments
# These two values should normally always match
print("Length of alignments (T): ", len(alignments))
print("Length of padded acoustic model after striding : ", int(hypothesis.length))
###Output
_____no_output_____
###Markdown
------Finally, let us calculate the alignment grid. We will de-tokenize the sub-word token if it is a valid index in the vocabulary and use `''` as a placeholder for the `Transducer Blank` token.Note that each `timestep` here is (roughly) $timestep * total\_stride\_of\_model * preprocessor.window\_stride$ seconds timestamp. **Note**: You can modify the value of `config.model.loss.warprnnt_numba_kwargs.fastemit_lambda` prior to training and see an impact on final alignment latency!
###Code
# Compute the alignment grid
for ti in range(len(alignments)):
t_u = []
for uj in range(len(alignments[ti])):
token = alignments[ti][uj]
token = token.to('cpu').numpy().tolist()
decoded_token = model.decoding.decode_ids_to_tokens([token])[0] if token != model.decoding.blank_id else '' # token at index len(vocab) == RNNT blank token
t_u.append(decoded_token)
print(f"Tokens at timestep {ti} = {t_u}")
###Output
_____no_output_____
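###Markdown
-------As mentioned in the note above, FastEmit regularization is controlled from the loss section of the config and must be set before the model is built. The value below is an assumed example, not a recommendation:
###Code
# Illustrative only - 0.001 is an assumed example value for FastEmit regularization
# config.model.loss.warprnnt_numba_kwargs.fastemit_lambda = 0.001
###Output
_____no_output_____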
###Markdown
Automatic Speech Recognition with Transducer ModelsThis notebook is a basic tutorial for creating a Transducer ASR model and then training it on a small dataset (AN4). It includes discussion relevant to reducing memory issues when training such models and demonstrates how to change the decoding strategy after training. Finally, it also provides a brief glimpse of extracting alignment information from a trained Transducer model.As we will see in this tutorial, apart from the differences in the config and the class used to instantiate the model, nearly all steps are precisely similar to any CTC-based model training. Many concepts such as data loader setup, optimization setup, pre-trained checkpoint weight loading will be nearly identical between CTC and Transducer models.In essence, NeMo makes it seamless to take a config for a CTC ASR model, add in a few components related to Transducers (often without any modifications) and use a different class to instantiate a Transducer model!--------**Note**: It is assumed that the previous tutorial - "Intro-to-Transducers" has been reviewed, and there is some familiarity with the config components of transducer models. Preparing the datasetIn this tutorial, we will be utilizing the `AN4`dataset - also known as the Alphanumeric dataset, which was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly. Let's first download the preparation script from NeMo's scripts directory -
###Code
import os
if not os.path.exists("scripts/"):
os.makedirs("scripts")
if not os.path.exists("scripts/process_an4_data.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_an4_data.py
###Output
_____no_output_____
###Markdown
------Download and prepare the two subsets of `AN4`
###Code
import wget
import tarfile
import subprocess
import glob
data_dir = "datasets"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
if os.path.exists(f"{data_dir}/an4"):
print("Preparing AN4 dataset ...")
an4_path = f"{data_dir}/"
!python scripts/process_an4_data.py \
--data_root=$an4_path
print("AN4 prepared !")
# Manifest filepaths
TRAIN_MANIFEST = os.path.join(data_dir, "an4", "train_manifest.json")
TEST_MANIFEST = os.path.join(data_dir, "an4", "test_manifest.json")
###Output
_____no_output_____
###Markdown
Preparing the tokenizerNow that we have a dataset ready, we need to decide whether to use a character-based model or a sub-word-based model. For completeness' sake, we will use a tokenizer based model so that we can leverage a modern encoder architecture like ContextNet or Conformer-T.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
-----Since the dataset is tiny, we can use a small SentencePiece based tokenizer. We always delete the tokenizer directory so any changes to the manifest files are always replicated in the tokenizer.
###Code
VOCAB_SIZE = 32 # can be any value above 29
TOKENIZER_TYPE = "spe" # can be wpe or spe
SPE_TYPE = "unigram" # can be bpe or unigram
# ------------------------------------------------------------------- #
!rm -r tokenizers/
if not os.path.exists("tokenizers"):
os.makedirs("tokenizers")
!python scripts/process_asr_text_tokenizer.py \
--manifest=$TRAIN_MANIFEST \
--data_root="tokenizers" \
--tokenizer=$TOKENIZER_TYPE \
--spe_type=$SPE_TYPE \
--no_lower_case \
--log \
--vocab_size=$VOCAB_SIZE
# Tokenizer path
if TOKENIZER_TYPE == 'spe':
TOKENIZER = os.path.join("tokenizers", f"tokenizer_spe_{SPE_TYPE}_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "bpe"
else:
TOKENIZER = os.path.join("tokenizers", f"tokenizer_wpe_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "wpe"
###Output
_____no_output_____
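###Markdown
-----(Optional) A quick check that the tokenizer directory was created where we expect it before wiring it into the model config:
###Code
# Sanity check - the path is built exactly as above
print("Tokenizer directory :", TOKENIZER)
print("Exists :", os.path.exists(TOKENIZER))
###Output
_____no_output_____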
###Markdown
Preparing a Transducer ModelNow that we have the dataset and tokenizer prepared, let us begin by setting up the config of the Transducer model! In this tutorial, we will build a slightly modified ContextNet architecture (which is obtained from the paper [ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context](https://arxiv.org/abs/2005.03191)).We can note that many of the steps here are identical to the setup of a CTC model! Prepare the configFor a dataset such as AN4, we do not need such a deep model. In fact, the depth of this model will cause much slower convergence on a small dataset, which would require far too long to train on Colab.In order to speed up training for this demo, we will take only the first five blocks of ContextNet, and discard the rest - and we can do this directly from the config.**Note**: On any realistic dataset (say Librispeech) this step would hurt the model's accuracy significantly. It is being done only to reduce the time spent waiting for training to finish on Colab.
###Code
from omegaconf import OmegaConf, open_dict
config = OmegaConf.load("configs/contextnet_rnnt.yaml")
###Output
_____no_output_____
###Markdown
-----Here, we will slice off the first five blocks from the Jasper block (used to build ContextNet). Setting the config with this subset will create a stride 2x model with just five blocks.We will also explicitly state that the last block dimension must be obtained from `model.model_defaults.enc_hidden` inside the config.
###Code
config.model.encoder.jasper = config.model.encoder.jasper[:5]
config.model.encoder.jasper[-1].filters = '${model.model_defaults.enc_hidden}'
###Output
_____no_output_____
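###Markdown
-----(Optional) A quick check that the truncation worked as intended - we expect exactly five blocks, with the last block's filter count pointing at the interpolated `enc_hidden` value:
###Code
# Sanity-check the truncated encoder definition
print("Number of encoder blocks :", len(config.model.encoder.jasper))
print("Last block filters :", config.model.encoder.jasper[-1].filters)
###Output
_____no_output_____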
###Markdown
-------Next, set up the data loaders of the config for the ContextNet model.
###Code
# print out the train and validation configs to know what needs to be changed
print(OmegaConf.to_yaml(config.model.train_ds))
###Output
_____no_output_____
###Markdown
-------We can note that the config here is nearly identical to the CTC ASR model configs! So let us take the same steps here to update the configs.
###Code
config.model.train_ds.manifest_filepath = TRAIN_MANIFEST
config.model.validation_ds.manifest_filepath = TEST_MANIFEST
config.model.test_ds.manifest_filepath = TEST_MANIFEST
###Output
_____no_output_____
###Markdown
------Next, we need to setup the tokenizer section of the config
###Code
print(OmegaConf.to_yaml(config.model.tokenizer))
config.model.tokenizer.dir = TOKENIZER
config.model.tokenizer.type = TOKENIZER_TYPE_CFG
###Output
_____no_output_____
###Markdown
------Now, we can update the optimization and augmentation for this dataset in order to converge to some reasonable score within a short training run.
###Code
print(OmegaConf.to_yaml(config.model.optim))
# Finally, let's remove logging of samples and the warmup since the dataset is small (similar to CTC models)
config.model.log_prediction = False
config.model.optim.sched.warmup_steps = None
###Output
_____no_output_____
###Markdown
------Next, we remove the spec augment that is provided by default for ContextNet. While additional augmentation would surely help training, it would require longer training to see significant benefits.
###Code
print(OmegaConf.to_yaml(config.model.spec_augment))
config.model.spec_augment.freq_masks = 0
config.model.spec_augment.time_masks = 0
###Output
_____no_output_____
###Markdown
------... We are now almost done! Most of the updates to a Transducer config are nearly the same as any CTC model. Fused Batch during training and evaluationWe discussed in the previous tutorial (Intro-to-Transducers) the significant memory cost of the Transducer Joint calculation during training. We also discussed that NeMo provides a simple yet effective method to nearly sidestep this limitation. We can now dive deeper into understanding what precisely NeMo's Transducer framework will do to alleviate this memory consumption issue.The following sub-cells are **voluntary** and valuable for understanding the cause, effect, and resolution of memory issues in Transducer models. The content can be skipped if one is familiar with the topic, and it is only required to use the `fused batch step`. Transducer Memory reduction with Fused Batch stepThe following few cells explain why memory is an issue when training Transducer models and how NeMo tackles the issue with its Fused Batch step.The material can be read for a thorough understanding, otherwise, it can be skipped. Diving deeper into the memory costs of Transducer Joint-------One of the significant limitations of Transducers is the exorbitant memory cost of computing the Joint module. The Joint module is comprised of two steps. 1) Projecting the Acoustic and Transcription feature dimensions to some standard hidden dimension (specified by `model.model_defaults.joint_hidden`)2) Projecting this intermediate hidden dimension to the final vocabulary space to obtain the transcription.Take the following example.**BS**=32 ; **T** (after **2x** stride) = 800, **U** (with character encoding) = 400-450 tokens, Vocabulary size **V** = 28 (26 alphabet chars, space and apostrophe). Let the hidden dimension of the Joint model be 640 (Most Google Transducer papers use hidden dimension of 640).$ Memory \, (Hidden, \, gb) = 32 \times 800 \times 450 \times 640 \times 4 = 29.49 $ gigabytes (4 bytes per float). $ Memory \, (Joint, \, gb) = 32 \times 800 \times 450 \times 28 \times 4 = 1.290 $ gigabytes (4 bytes per float)-----**NOTE**: This is just for the forward pass! We need to double this memory to store gradients! This much memory is also just for the Joint model **alone**. Far more memory is required for the Prediction model as well as the large Acoustic model itself and its gradients!Even with mixed precision, that's $\sim 30$ GB of GPU RAM for just 1 part of the network + its gradients.--------- Simple methods to reduce memory consumption------The easiest way to reduce memory consumption is to perform more downsampling in the acoustic model and use sub-word tokenization of the text to reduce the length of the target sequence.**BS**=32 ; **T** (after **8x** stride) = 200, **U** (with sub-word encoding) = 100-180 tokens, Vocabulary size **V** = 1024.$ Memory \, (Hidden, \, gb) = 32 \times 200 \times 150 \times 640 \times 4 = 2.45 $ gigabytes (4 bytes per float).$ Memory \, (Joint, \, gb) = 32 \times 200 \times 150 \times 1024 \times 4 = 3.93 $ gigabytes (4 bytes per float)-----Using Automatic Mixed Precision, we expend just around 6-7 GB of GPU RAM on the Joint + its gradient.The above memory cost is much more tractable - but we generally want larger and larger acoustic models. It is consistently the easiest way to improve transcription accuracy. So that means on a limited 32 GB GPU, we have to partition 7 GB just for the Joint and remaining memory allocated between Transcription + Acoustic Models. 
Fused Transcription-Joint-Loss-WER (also called Batch Splitting)----------The fundamental problem is that the joint tensor grows in size when `[T x U]` grows in size. This growth in memory cost is due to many reasons - either by model construction (downsampling) or the choice of dataset preprocessing (character tokenization vs. sub-word tokenization).Another dimension that NeMo can control is **batch**. Due to how we batch our samples, small and large samples all get clumped together into a single batch. So even though the individual samples are not all as long as the maximum length of T and U in that batch, when a batch of such samples is constructed, it will consume a significant amount of memory for the sake of compute efficiency.So as is always the case - **trade-off compute speed for memory savings**.------The fused operation goes as follows : 1) Forward the entire acoustic model in a single pass. (Use global batch size here for acoustic model - found in `model.*_ds.batch_size`)2) Split the Acoustic Model's logits by `fused_batch_size` and loop over these sub-batches.3) Construct a sub-batch of same `fused_batch_size` for the Prediction model. Now the target sequence length is $U_{sub-batch} < U$. 4) Feed this $U_{sub-batch}$ into the Joint model, along with a sub-batch from the Acoustic model (with $T_{sub-batch} < T$). Remember, we only have to slice off a part of the acoustic model here since we have the full batch of samples $(B, T, D)$ from the acoustic model.5) Performing steps (3) and (4) yields $T_{sub-batch}$ and $U_{sub-batch}$. Perform sub-batch joint step - costing an intermediate $(B, T_{sub-batch}, U_{sub-batch}, V)$ in memory.6) Compute loss on sub-batch and preserve in a list to be later concatenated. 7) Compute sub-batch metrics (such as Character / Word Error Rate) using the above Joint tensor and sub-batch of ground truth labels. Preserve the scores to be averaged across the entire batch later.8) Delete the sub-batch joint matrix $(B, T_{sub-batch}, U_{sub-batch}, V)$. Only gradients from .backward() are preserved now in the computation graph.9) Repeat steps (3) - (8) until all sub-batches are consumed.10) Cleanup step. Compute full batch WER and log. Concatenate loss list and pass to PTL to compute the equivalent of the original (full batch) Joint step. Delete ancillary objects necessary for sub-batching. Setting up Fused Batch step in a Transducer ConfigAfter all that discussion above, let us look at how to enable that entire pipeline in NeMo.As we can note below, it takes precisely two changes in the config to enable the fused batch step:
###Code
print(OmegaConf.to_yaml(config.model.joint))
# Two lines to enable the fused batch step
config.model.joint.experimental_fuse_loss_wer = True
config.model.joint.fused_batch_size = 16 # this can be any value (preferably less than model.*_ds.batch_size)
# We will also reduce the hidden dimension of the joint and the prediction networks to preserve some memory
config.model.model_defaults.pred_hidden = 64
config.model.model_defaults.joint_hidden = 64
###Output
_____no_output_____
###Markdown
--------Finally, since the dataset is tiny, we do not need an enormous model (the default is roughly 40 M parameters!).
###Code
# Use just 128 filters across the model to speed up training and reduce parameter count
config.model.model_defaults.filters = 128
###Output
_____no_output_____
###Markdown
Initialize a Transducer ASR ModelFinally, let us create a Transducer model, which is as easy as changing a line of import if you already have a script to create CTC models. We will use a small model since the dataset is just 5 hours of speech. ------Setup a Pytorch Lightning Trainer:
###Code
import torch
from pytorch_lightning import Trainer
if torch.cuda.is_available():
gpus = 1
else:
gpus = 0
EPOCHS = 50
# Initialize a Trainer for the Transducer model
trainer = Trainer(gpus=gpus, max_epochs=EPOCHS,
checkpoint_callback=False, logger=False,
log_every_n_steps=5, check_val_every_n_epoch=10)
# Import the Transducer Model
import nemo.collections.asr as nemo_asr
# Build the model
model = nemo_asr.models.EncDecRNNTBPEModel(cfg=config.model, trainer=trainer)
model.summarize();
###Output
_____no_output_____
###Markdown
------We now have a Transducer model ready to be trained! (Optional) Partially loading pre-trained weights from another modelAn interesting point to note about Transducer models - the Acoustic model config (and therefore the Acoustic model itself) can be shared between CTC and Transducer models.This means that we can initialize the weights of a Transducer's Acoustic model with weights from a pre-trained CTC encoder model.------**Note**: This step is optional and not necessary at all to train a Transducer model. Below, we show the steps that we would take if we wanted to do this, however as the loaded model has different kernel sizes compared to the current model, the checkpoint cannot be loaded.
###Code
# Load a small CTC model
# ctc_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_256", map_location='cpu')
###Output
_____no_output_____
###Markdown
------Then load the state dict of the CTC model's encoder into the Transducer model's encoder.
###Code
# <<< NOTE: This is only for demonstration ! >>>
# Below cell will fail because the two models have incompatible kernel sizes in their Conv layers.
# <<< NOTE: Below cell is only shown to illustrate the method >>>
# model.encoder.load_state_dict(ctc_model.encoder.state_dict(), strict=True)
###Output
_____no_output_____
###Markdown
Training on AN4Now that the model is ready, we can finally train it!
###Code
# Prepare NeMo's Experiment manager to handle checkpoint saving and logging for us
from nemo.utils import exp_manager
# Environment variable generally used for multi-node multi-gpu training.
# In notebook environments, this flag is unnecessary and can cause logs of multiple training runs to overwrite each other.
os.environ.pop('NEMO_EXPM_VERSION', None)
exp_config = exp_manager.ExpManagerConfig(
exp_dir=f'experiments/',
name=f"Transducer-Model",
checkpoint_callback_params=exp_manager.CallbackParams(
monitor="val_wer",
mode="min",
always_save_nemo=True,
save_best_model=True,
),
)
exp_config = OmegaConf.structured(exp_config)
logdir = exp_manager.exp_manager(trainer, exp_config)
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir /content/experiments/Transducer-Model/
else:
print("To use TensorBoard, please use this notebook in a Google Colab environment.")
# Release resources prior to training
import gc
gc.collect()
if gpus > 0:
torch.cuda.empty_cache()
# Train the model
trainer.fit(model)
###Output
_____no_output_____
###Markdown
-------Let's check the final performance on the test set.
###Code
trainer.test(model)
###Output
_____no_output_____
###Markdown
------The model should obtain some score between 10-12% WER after 50 epochs of training. Quite a good score for just 50 epochs of training a tiny model! Note that these are greedy scores, yet they are pretty strong for such a short training run.We can further improve these scores by using the internal Prediction network to calculate beam scores. Changing the Decoding StrategyDuring training, for the sake of efficiency, we were using the `greedy_batch` decoding strategy. However, we might want to perform inference with another method - say, beam search.NeMo allows changing the decoding strategy easily after the model has been trained.
###Code
import copy
decoding_config = copy.deepcopy(config.model.decoding)
print(OmegaConf.to_yaml(decoding_config))
# Update the config for the decoding strategy
decoding_config.strategy = "alsd" # Options are `greedy`, `greedy_batch`, `beam`, `tsd` and `alsd`
decoding_config.beam.beam_size = 4 # Increase beam size for better scores, but it will take much longer for transcription !
# Finally update the model's decoding strategy !
model.change_decoding_strategy(decoding_config)
trainer.test(model)
###Output
_____no_output_____
###Markdown
------Here, we improved our scores significantly by using the `Alignment-Length Synchronous Decoding` beam search. Feel free to try the other algorithms and compare the speed-accuracy tradeoff! (Extra) Extracting Transducer Model Alignments Transducers are unique in the sense that for each timestep $t \le T$, they can emit multiple target tokens $u_t$. During training, this is represented as the $T \times U$ joint that maps to the vocabulary $V$. During inference, there is no need to compute the full joint $T \times U$. Instead, after the model predicts the `Transducer Blank` token at the current timestep $t$ while predicting the target token $u_t$, the model will move onto the next acoustic timestep $t + 1$. As such, we can obtain the diagonal alignment of the Transducer model per sample relatively simply.------**Note**: While alignments can be calculated for both greedy and beam search - it is non-trivial to incorporate this alignment information for beam decoding. Therefore NeMo only supports extracting alignments during greedy decoding. -----Restore model to greedy decoding for alignment calculation
###Code
decoding_config.strategy = "greedy_batch"
# Special flag which is generally disabled
# Instruct Greedy Decoders to preserve alignment information during autoregressive decoding
with open_dict(decoding_config):
decoding_config.preserve_alignments = True
model.change_decoding_strategy(decoding_config)
###Output
_____no_output_____
###Markdown
-------Set up a test data loader that we will use to obtain the alignments for a single batch.
###Code
test_dl = model.test_dataloader()
test_dl = iter(test_dl)
batch = next(test_dl)
device = torch.device('cuda' if gpus > 0 else 'cpu')
def rnnt_alignments(model, batch):
model = model.to(device)
encoded, encoded_len = model.forward(
input_signal=batch[0].to(device), input_signal_length=batch[1].to(device)
)
current_hypotheses = model.decoding.rnnt_decoder_predictions_tensor(
encoded, encoded_len, return_hypotheses=True
)
del encoded, encoded_len
# current hypothesis is a tuple of
# 1) best hypothesis
# 2) Sorted list of hypothesis (if using beam search); None otherwise
return current_hypotheses
# Get a batch of hypotheses, as well as a batch of all obtained hypotheses (if beam search is used)
hypotheses, all_hypotheses = rnnt_alignments(model, batch)
###Output
_____no_output_____
###Markdown
------Select a sample ID from within the batch to observe the alignment information contained in the Hypothesis.
###Code
# Select the sample ID from within the batch
SAMPLE_ID = 0
# Obtain the hypothesis for this sample, as well as some ground truth information about this sample
hypothesis = hypotheses[SAMPLE_ID]
original_sample_len = batch[1][SAMPLE_ID]
ground_truth = batch[2][SAMPLE_ID]
# The Hypothesis object contains a lot of useful information regarding the decoding step.
print(hypothesis)
###Output
_____no_output_____
###Markdown
-------Now, decode the hypothesis and compare it against the ground truth text. Note - this decoded hypothesis is at *sub-word* level for this model. Therefore sub-word tokens such as `_` may be seen here.
###Code
decoded_text = hypothesis.text
decoded_hypothesis = model.decoding.decode_ids_to_tokens(hypothesis.y_sequence.cpu().numpy().tolist())
decoded_ground_truth = model.decoding.tokenizer.ids_to_text(ground_truth.cpu().numpy().tolist())
print("Decoded ground truth :", decoded_ground_truth)
print("Decoded hypothesis :", decoded_text)
print("Decoded hyp tokens :", decoded_hypothesis)
###Output
_____no_output_____
###Markdown
---------Next we print out the 2-d alignment grid of the RNNT model:
###Code
alignments = hypothesis.alignments
# These two values should normally always match
print("Length of alignments (T): ", len(alignments))
print("Length of padded acoustic model after striding : ", int(hypothesis.length))
###Output
_____no_output_____
###Markdown
------Finally, let us calculate the alignment grid. We will de-tokenize the sub-word token if it is a valid index in the vocabulary and use `''` as a placeholder for the `Transducer Blank` token.Note that each `timestep` here is (roughly) $timestep * total\_stride\_of\_model * preprocessor.window\_stride$ seconds timestamp. **Note**: You can modify the value of `config.model.loss.warprnnt_numba_kwargs.fastemit_lambda` prior to training and see an impact on final alignment latency!
###Code
# Compute the alignment grid
for ti in range(len(alignments)):
t_u = []
for uj in range(len(alignments[ti])):
token = alignments[ti][uj]
token = token.to('cpu').numpy().tolist()
decoded_token = model.decoding.decode_ids_to_tokens([token])[0] if token != model.decoding.blank_id else '' # token at index len(vocab) == RNNT blank token
t_u.append(decoded_token)
print(f"Tokens at timestep {ti} = {t_u}")
###Output
_____no_output_____
###Markdown
Automatic Speech Recognition with Transducer ModelsThis notebook is a basic tutorial for creating a Transducer ASR model and then training it on a small dataset (AN4). It includes discussion relevant to reducing memory issues when training such models and demonstrates how to change the decoding strategy after training. Finally, it also provides a brief glimpse of extracting alignment information from a trained Transducer model.As we will see in this tutorial, apart from the differences in the config and the class used to instantiate the model, nearly all steps are precisely similar to any CTC-based model training. Many concepts such as data loader setup, optimization setup, pre-trained checkpoint weight loading will be nearly identical between CTC and Transducer models.In essence, NeMo makes it seamless to take a config for a CTC ASR model, add in a few components related to Transducers (often without any modifications) and use a different class to instantiate a Transducer model!--------**Note**: It is assumed that the previous tutorial - "Intro-to-Transducers" has been reviewed, and there is some familiarity with the config components of transducer models. Preparing the datasetIn this tutorial, we will be utilizing the `AN4`dataset - also known as the Alphanumeric dataset, which was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly. Let's first download the preparation script from NeMo's scripts directory -
###Code
import os
if not os.path.exists("scripts/"):
os.makedirs("scripts")
if not os.path.exists("scripts/process_an4_data.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_an4_data.py
###Output
_____no_output_____
###Markdown
------Download and prepare the two subsets of `AN4`
###Code
import wget
import tarfile
import subprocess
import glob
data_dir = "datasets"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'https://dldata-public.s3.us-east-2.amazonaws.com/an4_sphere.tar.gz' # for the original source, please visit http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
# --- Building Manifest Files --- #
import json
import librosa
# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
with open(transcripts_path, 'r') as fin:
with open(manifest_path, 'w') as fout:
for line in fin:
# Lines look like this:
# <s> transcript </s> (fileID)
transcript = line[: line.find('(')-1].lower()
transcript = transcript.replace('<s>', '').replace('</s>', '')
transcript = transcript.strip()
file_id = line[line.find('(')+1 : -2] # e.g. "cen4-fash-b"
audio_path = os.path.join(
data_dir, wav_path,
file_id[file_id.find('-')+1 : file_id.rfind('-')],
file_id + '.wav')
duration = librosa.core.get_duration(filename=audio_path)
# Write the metadata to the manifest
metadata = {
"audio_filepath": audio_path,
"duration": duration,
"text": transcript
}
json.dump(metadata, fout)
fout.write('\n')
# Building Manifests
print("******")
train_transcripts = os.path.join(data_dir, 'an4/etc/an4_train.transcription')
train_manifest = os.path.join(data_dir, 'an4/train_manifest.json')
if not os.path.isfile(train_manifest):
build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
print("Training manifest created.")
test_transcripts = os.path.join(data_dir, 'an4/etc/an4_test.transcription')
test_manifest = os.path.join(data_dir, 'an4/test_manifest.json')
if not os.path.isfile(test_manifest):
build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
print("Test manifest created.")
print("***Done***")
# Manifest filepaths
TRAIN_MANIFEST = train_manifest
TEST_MANIFEST = test_manifest
###Output
_____no_output_____
###Markdown
Preparing the tokenizerNow that we have a dataset ready, we need to decide whether to use a character-based model or a sub-word-based model. For completeness' sake, we will use a tokenizer based model so that we can leverage a modern encoder architecture like ContextNet or Conformer-T.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
-----Since the dataset is tiny, we can use a small SentencePiece based tokenizer. We always delete the tokenizer directory so any changes to the manifest files are always replicated in the tokenizer.
###Code
VOCAB_SIZE = 32 # can be any value above 29
TOKENIZER_TYPE = "spe" # can be wpe or spe
SPE_TYPE = "unigram" # can be bpe or unigram
# ------------------------------------------------------------------- #
!rm -r tokenizers/
if not os.path.exists("tokenizers"):
os.makedirs("tokenizers")
!python scripts/process_asr_text_tokenizer.py \
--manifest=$TRAIN_MANIFEST \
--data_root="tokenizers" \
--tokenizer=$TOKENIZER_TYPE \
--spe_type=$SPE_TYPE \
--no_lower_case \
--log \
--vocab_size=$VOCAB_SIZE
# Tokenizer path
if TOKENIZER_TYPE == 'spe':
TOKENIZER = os.path.join("tokenizers", f"tokenizer_spe_{SPE_TYPE}_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "bpe"
else:
TOKENIZER = os.path.join("tokenizers", f"tokenizer_wpe_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "wpe"
###Output
_____no_output_____
###Markdown
Preparing a Transducer ModelNow that we have the dataset and tokenizer prepared, let us begin by setting up the config of the Transducer model! In this tutorial, we will build a slightly modified ContextNet architecture (which is obtained from the paper [ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context](https://arxiv.org/abs/2005.03191)).We can note that many of the steps here are identical to the setup of a CTC model! Prepare the configFor a dataset such as AN4, we do not need such a deep model. In fact, the depth of this model will cause much slower convergence on a small dataset, which would require far too long to train on Colab.In order to speed up training for this demo, we will take only the first five blocks of ContextNet, and discard the rest - and we can do this directly from the config.**Note**: On any realistic dataset (say Librispeech) this step would hurt the model's accuracy significantly. It is being done only to reduce the time spent waiting for training to finish on Colab.
###Code
from omegaconf import OmegaConf, open_dict
config = OmegaConf.load("../../examples/asr/conf/contextnet_rnnt/contextnet_rnnt.yaml")
###Output
_____no_output_____
###Markdown
-----Here, we will slice off the first five blocks from the Jasper block (used to build ContextNet). Setting the config with this subset will create a stride 2x model with just five blocks.We will also explicitly state that the last block dimension must be obtained from `model.model_defaults.enc_hidden` inside the config.
###Code
config.model.encoder.jasper = config.model.encoder.jasper[:5]
config.model.encoder.jasper[-1].filters = '${model.model_defaults.enc_hidden}'
###Output
_____no_output_____
###Markdown
-------Next, set up the data loaders of the config for the ContextNet model.
###Code
# print out the train and validation configs to know what needs to be changed
print(OmegaConf.to_yaml(config.model.train_ds))
###Output
_____no_output_____
###Markdown
-------We can note that the config here is nearly identical to the CTC ASR model configs! So let us take the same steps here to update the configs.
###Code
config.model.train_ds.manifest_filepath = TRAIN_MANIFEST
config.model.validation_ds.manifest_filepath = TEST_MANIFEST
config.model.test_ds.manifest_filepath = TEST_MANIFEST
###Output
_____no_output_____
###Markdown
------Next, we need to setup the tokenizer section of the config
###Code
print(OmegaConf.to_yaml(config.model.tokenizer))
config.model.tokenizer.dir = TOKENIZER
config.model.tokenizer.type = TOKENIZER_TYPE_CFG
###Output
_____no_output_____
###Markdown
------Now, we can update the optimization and augmentation for this dataset in order to converge to some reasonable score within a short training run.
###Code
print(OmegaConf.to_yaml(config.model.optim))
# Finally, let's remove logging of samples and the warmup since the dataset is small (similar to CTC models)
config.model.log_prediction = False
config.model.optim.sched.warmup_steps = None
###Output
_____no_output_____
###Markdown
------Next, we remove the spec augment that is provided by default for ContextNet. While additional augmentation would surely help training, it would require longer training to see significant benefits.
###Code
print(OmegaConf.to_yaml(config.model.spec_augment))
config.model.spec_augment.freq_masks = 0
config.model.spec_augment.time_masks = 0
###Output
_____no_output_____
###Markdown
------... We are now almost done! Most of the updates to a Transducer config are nearly the same as any CTC model. Fused Batch during training and evaluationWe discussed in the previous tutorial (Intro-to-Transducers) the significant memory cost of the Transducer Joint calculation during training. We also discussed that NeMo provides a simple yet effective method to nearly sidestep this limitation. We can now dive deeper into understanding what precisely NeMo's Transducer framework will do to alleviate this memory consumption issue.The following sub-cells are **voluntary** and valuable for understanding the cause, effect, and resolution of memory issues in Transducer models. The content can be skipped if one is familiar with the topic, and it is only required to use the `fused batch step`. Transducer Memory reduction with Fused Batch stepThe following few cells explain why memory is an issue when training Transducer models and how NeMo tackles the issue with its Fused Batch step.The material can be read for a thorough understanding, otherwise, it can be skipped. Diving deeper into the memory costs of Transducer Joint-------One of the significant limitations of Transducers is the exorbitant memory cost of computing the Joint module. The Joint module is comprised of two steps. 1) Projecting the Acoustic and Transcription feature dimensions to some standard hidden dimension (specified by `model.model_defaults.joint_hidden`)2) Projecting this intermediate hidden dimension to the final vocabulary space to obtain the transcription.Take the following example.**BS**=32 ; **T** (after **2x** stride) = 800, **U** (with character encoding) = 400-450 tokens, Vocabulary size **V** = 28 (26 alphabet chars, space and apostrophe). Let the hidden dimension of the Joint model be 640 (Most Google Transducer papers use hidden dimension of 640).$ Memory \, (Hidden, \, gb) = 32 \times 800 \times 450 \times 640 \times 4 = 29.49 $ gigabytes (4 bytes per float). $ Memory \, (Joint, \, gb) = 32 \times 800 \times 450 \times 28 \times 4 = 1.290 $ gigabytes (4 bytes per float)-----**NOTE**: This is just for the forward pass! We need to double this memory to store gradients! This much memory is also just for the Joint model **alone**. Far more memory is required for the Prediction model as well as the large Acoustic model itself and its gradients!Even with mixed precision, that's $\sim 30$ GB of GPU RAM for just 1 part of the network + its gradients.--------- Simple methods to reduce memory consumption------The easiest way to reduce memory consumption is to perform more downsampling in the acoustic model and use sub-word tokenization of the text to reduce the length of the target sequence.**BS**=32 ; **T** (after **8x** stride) = 200, **U** (with sub-word encoding) = 100-180 tokens, Vocabulary size **V** = 1024.$ Memory \, (Hidden, \, gb) = 32 \times 200 \times 150 \times 640 \times 4 = 2.45 $ gigabytes (4 bytes per float).$ Memory \, (Joint, \, gb) = 32 \times 200 \times 150 \times 1024 \times 4 = 3.93 $ gigabytes (4 bytes per float)-----Using Automatic Mixed Precision, we expend just around 6-7 GB of GPU RAM on the Joint + its gradient.The above memory cost is much more tractable - but we generally want larger and larger acoustic models. It is consistently the easiest way to improve transcription accuracy. So that means on a limited 32 GB GPU, we have to partition 7 GB just for the Joint and remaining memory allocated between Transcription + Acoustic Models. 
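To make the arithmetic above easy to reproduce, here is a small sketch that recomputes the joint-memory estimates quoted in this discussion. The batch size, T, U, hidden size, and vocabulary numbers are just the illustrative values from the text, not values read from the config.

```python
# Sketch: reproduce the rough Joint activation-memory estimates quoted above (decimal GB).
def activation_memory_gb(batch, t, u, dim, bytes_per_float=4):
    """Approximate size in GB of a (B, T, U, dim) float tensor."""
    return batch * t * u * dim * bytes_per_float / 1e9

# Character tokens, 2x stride: B=32, T=800, U=450, joint hidden=640, vocab=28
print(activation_memory_gb(32, 800, 450, 640))   # ~29.49 GB
print(activation_memory_gb(32, 800, 450, 28))    # ~1.29 GB

# Sub-word tokens, 8x stride: B=32, T=200, U=150, joint hidden=640, vocab=1024
print(activation_memory_gb(32, 200, 150, 640))   # ~2.46 GB
print(activation_memory_gb(32, 200, 150, 1024))  # ~3.93 GB
```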
Fused Transcription-Joint-Loss-WER (also called Batch Splitting)----------The fundamental problem is that the joint tensor grows in size when `[T x U]` grows in size. This growth in memory cost is due to many reasons - either by model construction (downsampling) or the choice of dataset preprocessing (character tokenization vs. sub-word tokenization).Another dimension that NeMo can control is **batch**. Due to how we batch our samples, small and large samples all get clumped together into a single batch. So even though the individual samples are not all as long as the maximum length of T and U in that batch, when a batch of such samples is constructed, it will consume a significant amount of memory for the sake of compute efficiency.So as is always the case - **trade-off compute speed for memory savings**.------The fused operation goes as follows : 1) Forward the entire acoustic model in a single pass. (Use global batch size here for acoustic model - found in `model.*_ds.batch_size`)2) Split the Acoustic Model's logits by `fused_batch_size` and loop over these sub-batches.3) Construct a sub-batch of same `fused_batch_size` for the Prediction model. Now the target sequence length is $U_{sub-batch} < U$. 4) Feed this $U_{sub-batch}$ into the Joint model, along with a sub-batch from the Acoustic model (with $T_{sub-batch} < T$). Remember, we only have to slice off a part of the acoustic model here since we have the full batch of samples $(B, T, D)$ from the acoustic model.5) Performing steps (3) and (4) yields $T_{sub-batch}$ and $U_{sub-batch}$. Perform sub-batch joint step - costing an intermediate $(B, T_{sub-batch}, U_{sub-batch}, V)$ in memory.6) Compute loss on sub-batch and preserve in a list to be later concatenated. 7) Compute sub-batch metrics (such as Character / Word Error Rate) using the above Joint tensor and sub-batch of ground truth labels. Preserve the scores to be averaged across the entire batch later.8) Delete the sub-batch joint matrix $(B, T_{sub-batch}, U_{sub-batch}, V)$. Only gradients from .backward() are preserved now in the computation graph.9) Repeat steps (3) - (8) until all sub-batches are consumed.10) Cleanup step. Compute full batch WER and log. Concatenate loss list and pass to PTL to compute the equivalent of the original (full batch) Joint step. Delete ancillary objects necessary for sub-batching. Setting up Fused Batch step in a Transducer ConfigAfter all that discussion above, let us look at how to enable that entire pipeline in NeMo.As we can note below, it takes precisely two changes in the config to enable the fused batch step:
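Before looking at the config flags, the following toy sketch illustrates the control flow of the fused batch step described above: one full acoustic pass, then a loop over sub-batches for the prediction/joint/loss computation. The modules and the "loss" here are stand-ins for illustration only and do not correspond to NeMo's internal API.

```python
# Toy, self-contained sketch of the fused batch idea (NOT NeMo's actual implementation).
import torch
import torch.nn as nn

B, T, U, D, V = 8, 50, 20, 16, 32        # toy sizes: batch, frames, target length, hidden, vocab
fused_batch_size = 2                      # sub-batch size used for the joint/loss step

acoustic = nn.Linear(D, D)                # stand-in for the acoustic encoder
prediction = nn.Embedding(V, D)           # stand-in for the prediction (decoder) network
joint = nn.Linear(D, V)                   # stand-in for the joint network

features = torch.randn(B, T, D)
targets = torch.randint(0, V, (B, U))

# 1) Forward the entire acoustic model in a single pass.
encoded = acoustic(features)                                            # (B, T, D)

losses = []
for start in range(0, B, fused_batch_size):
    end = start + fused_batch_size
    # 2-4) Slice a sub-batch of acoustic outputs; run the prediction network on its targets.
    enc_sub = encoded[start:end]                                         # (b, T, D)
    pred_sub = prediction(targets[start:end])                            # (b, U, D)
    # 5) Sub-batch joint: only one (b, T, U, V) tensor is alive at a time.
    joint_sub = joint(enc_sub.unsqueeze(2) + pred_sub.unsqueeze(1))      # (b, T, U, V)
    # 6-7) Toy loss/metric on the sub-batch (a real run would use the RNNT loss and WER here).
    losses.append(joint_sub.mean())
    # 8) Drop the sub-batch joint tensor before the next iteration.
    del joint_sub

# 10) Combine the sub-batch losses into the equivalent full-batch loss and backprop once.
loss = torch.stack(losses).mean()
loss.backward()
print(loss.item())
```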
###Code
print(OmegaConf.to_yaml(config.model.joint))
# Two lines to enable the fused batch step
config.model.joint.fuse_loss_wer = True
config.model.joint.fused_batch_size = 16 # this can be any value (preferably less than model.*_ds.batch_size)
# We will also reduce the hidden dimension of the joint and the prediction networks to preserve some memory
config.model.model_defaults.pred_hidden = 64
config.model.model_defaults.joint_hidden = 64
###Output
_____no_output_____
###Markdown
--------Finally, since the dataset is tiny, we do not need an enormous model (the default is roughly 40 M parameters!).
###Code
# Use just 128 filters across the model to speed up training and reduce parameter count
config.model.model_defaults.filters = 128
###Output
_____no_output_____
###Markdown
Initialize a Transducer ASR ModelFinally, let us create a Transducer model, which is as easy as changing a line of import if you already have a script to create CTC models. We will use a small model since the dataset is just 5 hours of speech. ------Setup a Pytorch Lightning Trainer:
###Code
import torch
from pytorch_lightning import Trainer
if torch.cuda.is_available():
accelerator = 'gpu'
else:
    accelerator = 'cpu'
EPOCHS = 50
# Initialize a Trainer for the Transducer model
trainer = Trainer(devices=1, accelerator=accelerator, max_epochs=EPOCHS,
enable_checkpointing=False, logger=False,
log_every_n_steps=5, check_val_every_n_epoch=10)
# Import the Transducer Model
import nemo.collections.asr as nemo_asr
# Build the model
model = nemo_asr.models.EncDecRNNTBPEModel(cfg=config.model, trainer=trainer)
model.summarize();
###Output
_____no_output_____
###Markdown
------We now have a Transducer model ready to be trained! (Optional) Partially loading pre-trained weights from another modelAn interesting point to note about Transducer models - the Acoustic model config (and therefore the Acoustic model itself) can be shared between CTC and Transducer models.This means that we can initialize the weights of a Transducer's Acoustic model with weights from a pre-trained CTC encoder model.------**Note**: This step is optional and not necessary at all to train a Transducer model. Below, we show the steps that we would take if we wanted to do this, however as the loaded model has different kernel sizes compared to the current model, the checkpoint cannot be loaded.
###Code
# Load a small CTC model
# ctc_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_256", map_location='cpu')
###Output
_____no_output_____
###Markdown
------Then load the state dict of the CTC model's encoder into the Transducer model's encoder.
###Code
# <<< NOTE: This is only for demonstration ! >>>
# Below cell will fail because the two models have incompatible kernel sizes in their Conv layers.
# <<< NOTE: Below cell is only shown to illustrate the method >>>
# model.encoder.load_state_dict(ctc_model.encoder.state_dict(), strict=True)
###Output
_____no_output_____
###Markdown
Training on AN4Now that the model is ready, we can finally train it!
###Code
# Prepare NeMo's Experiment manager to handle checkpoint saving and logging for us
from nemo.utils import exp_manager
# Environment variable generally used for multi-node multi-gpu training.
# In notebook environments, this flag is unnecessary and can cause logs of multiple training runs to overwrite each other.
os.environ.pop('NEMO_EXPM_VERSION', None)
exp_config = exp_manager.ExpManagerConfig(
exp_dir=f'experiments/',
name=f"Transducer-Model",
checkpoint_callback_params=exp_manager.CallbackParams(
monitor="val_wer",
mode="min",
always_save_nemo=True,
save_best_model=True,
),
)
exp_config = OmegaConf.structured(exp_config)
logdir = exp_manager.exp_manager(trainer, exp_config)
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir /content/experiments/Transducer-Model/
else:
print("To use TensorBoard, please use this notebook in a Google Colab environment.")
# Release resources prior to training
import gc
gc.collect()
if accelerator == 'gpu':
torch.cuda.empty_cache()
# Train the model
trainer.fit(model)
###Output
_____no_output_____
###Markdown
-------Let's check the final performance on the test set.
###Code
trainer.test(model)
###Output
_____no_output_____
###Markdown
------The model should obtain some score between 10-12% WER after 50 epochs of training. Quite a good score for just 50 epochs of training a tiny model! Note that these are greedy scores, yet they are pretty strong for such a short training run.We can further improve these scores by using the internal Prediction network to calculate beam scores. Changing the Decoding StrategyDuring training, for the sake of efficiency, we were using the `greedy_batch` decoding strategy. However, we might want to perform inference with another method - say, beam search.NeMo allows changing the decoding strategy easily after the model has been trained.
###Code
import copy
decoding_config = copy.deepcopy(config.model.decoding)
print(OmegaConf.to_yaml(decoding_config))
# Update the config for the decoding strategy
decoding_config.strategy = "alsd" # Options are `greedy`, `greedy_batch`, `beam`, `tsd` and `alsd`
decoding_config.beam.beam_size = 4 # Increase beam size for better scores, but it will take much longer for transcription !
# Finally update the model's decoding strategy !
model.change_decoding_strategy(decoding_config)
trainer.test(model)
###Output
_____no_output_____
###Markdown
------Here, we improved our scores significantly by using the `Alignment-Length Synchronous Decoding` beam search. Feel free to try the other algorithms and compare the speed-accuracy tradeoff! (Extra) Extracting Transducer Model Alignments Transducers are unique in the sense that for each timestep $t \le T$, they can emit multiple target tokens $u_t$. During training, this is represented as the $T \times U$ joint that maps to the vocabulary $V$. During inference, there is no need to compute the full joint $T \times U$. Instead, after the model predicts the `Transducer Blank` token at the current timestep $t$ while predicting the target token $u_t$, the model will move onto the next acoustic timestep $t + 1$. As such, we can obtain the diagonal alignment of the Transducer model per sample relatively simply.------**Note**: While alignments can be calculated for both greedy and beam search - it is non-trivial to incorporate this alignment information for beam decoding. Therefore NeMo only supports extracting alignments during greedy decoding. -----Restore model to greedy decoding for alignment calculation
###Code
decoding_config.strategy = "greedy_batch"
# Special flag which is generally disabled
# Instruct Greedy Decoders to preserve alignment information during autoregressive decoding
with open_dict(decoding_config):
decoding_config.preserve_alignments = True
decoding_config.fused_batch_size = -1 # temporarily stop fused batch during inference.
model.change_decoding_strategy(decoding_config)
###Output
_____no_output_____
###Markdown
-------Set up a test data loader that we will use to obtain the alignments for a single batch.
###Code
test_dl = model.test_dataloader()
test_dl = iter(test_dl)
batch = next(test_dl)
device = torch.device('cuda' if accelerator == 'gpu' else 'cpu')
def rnnt_alignments(model, batch):
model = model.to(device)
encoded, encoded_len = model.forward(
input_signal=batch[0].to(device), input_signal_length=batch[1].to(device)
)
current_hypotheses = model.decoding.rnnt_decoder_predictions_tensor(
encoded, encoded_len, return_hypotheses=True
)
del encoded, encoded_len
# current hypothesis is a tuple of
# 1) best hypothesis
# 2) Sorted list of hypothesis (if using beam search); None otherwise
return current_hypotheses
# Get a batch of hypotheses, as well as a batch of all obtained hypotheses (if beam search is used)
hypotheses, all_hypotheses = rnnt_alignments(model, batch)
###Output
_____no_output_____
###Markdown
------Select a sample ID from within the batch to observe the alignment information contained in the Hypothesis.
###Code
# Select the sample ID from within the batch
SAMPLE_ID = 0
# Obtain the hypothesis for this sample, as well as some ground truth information about this sample
hypothesis = hypotheses[SAMPLE_ID]
original_sample_len = batch[1][SAMPLE_ID]
ground_truth = batch[2][SAMPLE_ID]
# The Hypothesis object contains a lot of useful information regarding the decoding step.
print(hypothesis)
###Output
_____no_output_____
###Markdown
-------Now, decode the hypothesis and compare it against the ground truth text. Note - this decoded hypothesis is at *sub-word* level for this model. Therefore sub-word tokens such as `_` may be seen here.
###Code
decoded_text = hypothesis.text
decoded_hypothesis = model.decoding.decode_ids_to_tokens(hypothesis.y_sequence.cpu().numpy().tolist())
decoded_ground_truth = model.decoding.tokenizer.ids_to_text(ground_truth.cpu().numpy().tolist())
print("Decoded ground truth :", decoded_ground_truth)
print("Decoded hypothesis :", decoded_text)
print("Decoded hyp tokens :", decoded_hypothesis)
###Output
_____no_output_____
###Markdown
---------Next we print out the 2-d alignment grid of the RNNT model:
###Code
alignments = hypothesis.alignments
# These two values should normally always match
print("Length of alignments (T): ", len(alignments))
print("Length of padded acoustic model after striding : ", int(hypothesis.length))
###Output
_____no_output_____
###Markdown
------Finally, let us calculate the alignment grid. We will de-tokenize the sub-word token if it is a valid index in the vocabulary and use `''` as a placeholder for the `Transducer Blank` token. Note that each `timestep` here corresponds to roughly $timestep * total\_stride\_of\_model * preprocessor.window\_stride$ seconds. **Note**: You can modify the value of `config.model.loss.warprnnt_numba_kwargs.fastemit_lambda` prior to training and see an impact on final alignment latency!
###Code
# Compute the alignment grid
for ti in range(len(alignments)):
t_u = []
for uj in range(len(alignments[ti])):
token = alignments[ti][uj]
token = token.to('cpu').numpy().tolist()
decoded_token = model.decoding.decode_ids_to_tokens([token])[0] if token != model.decoding.blank_id else '' # token at index len(vocab) == RNNT blank token
t_u.append(decoded_token)
print(f"Tokens at timestep {ti} = {t_u}")
###Output
_____no_output_____
###Markdown
Automatic Speech Recognition with Transducer ModelsThis notebook is a basic tutorial for creating a Transducer ASR model and then training it on a small dataset (AN4). It includes discussion relevant to reducing memory issues when training such models and demonstrates how to change the decoding strategy after training. Finally, it also provides a brief glimpse of extracting alignment information from a trained Transducer model.As we will see in this tutorial, apart from the differences in the config and the class used to instantiate the model, nearly all steps are precisely similar to any CTC-based model training. Many concepts such as data loader setup, optimization setup, pre-trained checkpoint weight loading will be nearly identical between CTC and Transducer models.In essence, NeMo makes it seamless to take a config for a CTC ASR model, add in a few components related to Transducers (often without any modifications) and use a different class to instantiate a Transducer model!--------**Note**: It is assumed that the previous tutorial - "Intro-to-Transducers" has been reviewed, and there is some familiarity with the config components of transducer models. Preparing the datasetIn this tutorial, we will be utilizing the `AN4`dataset - also known as the Alphanumeric dataset, which was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly. Let's first download the preparation script from NeMo's scripts directory -
###Code
import os
if not os.path.exists("scripts/"):
os.makedirs("scripts")
if not os.path.exists("scripts/process_an4_data.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_an4_data.py
###Output
_____no_output_____
###Markdown
------Download and prepare the two subsets of `AN4`
###Code
import wget
import tarfile
import subprocess
import glob
data_dir = "datasets"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
if os.path.exists(f"{data_dir}/an4"):
print("Preparing AN4 dataset ...")
an4_path = f"{data_dir}/"
!python scripts/process_an4_data.py \
--data_root=$an4_path
print("AN4 prepared !")
# Manifest filepaths
TRAIN_MANIFEST = os.path.join(data_dir, "an4", "train_manifest.json")
TEST_MANIFEST = os.path.join(data_dir, "an4", "test_manifest.json")
###Output
_____no_output_____
###Markdown
Preparing the tokenizerNow that we have a dataset ready, we need to decide whether to use a character-based model or a sub-word-based model. For completeness' sake, we will use a tokenizer based model so that we can leverage a modern encoder architecture like ContextNet or Conformer-T.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
-----Since the dataset is tiny, we can use a small SentencePiece based tokenizer. We always delete the tokenizer directory so any changes to the manifest files are always replicated in the tokenizer.
###Code
VOCAB_SIZE = 32 # can be any value above 29
TOKENIZER_TYPE = "spe" # can be wpe or spe
SPE_TYPE = "unigram" # can be bpe or unigram
# ------------------------------------------------------------------- #
!rm -r tokenizers/
if not os.path.exists("tokenizers"):
os.makedirs("tokenizers")
!python scripts/process_asr_text_tokenizer.py \
--manifest=$TRAIN_MANIFEST \
--data_root="tokenizers" \
--tokenizer=$TOKENIZER_TYPE \
--spe_type=$SPE_TYPE \
--no_lower_case \
--log \
--vocab_size=$VOCAB_SIZE
# Tokenizer path
if TOKENIZER_TYPE == 'spe':
TOKENIZER = os.path.join("tokenizers", f"tokenizer_spe_{SPE_TYPE}_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "bpe"
else:
TOKENIZER = os.path.join("tokenizers", f"tokenizer_wpe_v{VOCAB_SIZE}")
TOKENIZER_TYPE_CFG = "wpe"
###Output
_____no_output_____
###Markdown
Preparing a Transducer ModelNow that we have the dataset and tokenizer prepared, let us begin by setting up the config of the Transducer model! In this tutorial, we will build a slightly modified ContextNet architecture (which is obtained from the paper [ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context](https://arxiv.org/abs/2005.03191)).We can note that many of the steps here are identical to the setup of a CTC model! Prepare the configFor a dataset such as AN4, we do not need such a deep model. In fact, the depth of this model will cause much slower convergence on a small dataset, which would require far too long to train on Colab.In order to speed up training for this demo, we will take only the first five blocks of ContextNet, and discard the rest - and we can do this directly from the config.**Note**: On any realistic dataset (say Librispeech) this step would hurt the model's accuracy significantly. It is being done only to reduce the time spent waiting for training to finish on Colab.
###Code
from omegaconf import OmegaConf, open_dict
config = OmegaConf.load("configs/contextnet_rnnt.yaml")
###Output
_____no_output_____
###Markdown
-----Here, we will slice off the first five blocks from the Jasper block (used to build ContextNet). Setting the config with this subset will create a stride 2x model with just five blocks.We will also explicitly state that the last block dimension must be obtained from `model.model_defaults.enc_hidden` inside the config.
###Code
config.model.encoder.jasper = config.model.encoder.jasper[:5]
config.model.encoder.jasper[-1].filters = '${model.model_defaults.enc_hidden}'
###Output
_____no_output_____
###Markdown
-------Next, set up the data loaders of the config for the ContextNet model.
###Code
# print out the train and validation configs to know what needs to be changed
print(OmegaConf.to_yaml(config.model.train_ds))
###Output
_____no_output_____
###Markdown
-------We can note that the config here is nearly identical to the CTC ASR model configs! So let us take the same steps here to update the configs.
###Code
config.model.train_ds.manifest_filepath = TRAIN_MANIFEST
config.model.validation_ds.manifest_filepath = TEST_MANIFEST
config.model.test_ds.manifest_filepath = TEST_MANIFEST
###Output
_____no_output_____
###Markdown
------Next, we need to setup the tokenizer section of the config
###Code
print(OmegaConf.to_yaml(config.model.tokenizer))
config.model.tokenizer.dir = TOKENIZER
config.model.tokenizer.type = TOKENIZER_TYPE_CFG
###Output
_____no_output_____
###Markdown
------Now, we can update the optimization and augmentation for this dataset in order to converge to some reasonable score within a short training run.
###Code
print(OmegaConf.to_yaml(config.model.optim))
# Finally, let's remove logging of samples and the warmup since the dataset is small (similar to CTC models)
config.model.log_prediction = False
config.model.optim.sched.warmup_steps = None
###Output
_____no_output_____
###Markdown
------Next, we remove the spec augment that is provided by default for ContextNet. While additional augmentation would surely help training, it would require longer training to see significant benefits.
###Code
print(OmegaConf.to_yaml(config.model.spec_augment))
config.model.spec_augment.freq_masks = 0
config.model.spec_augment.time_masks = 0
###Output
_____no_output_____
###Markdown
------... We are now almost done! Most of the updates to a Transducer config are nearly the same as any CTC model. Fused Batch during training and evaluationWe discussed in the previous tutorial (Intro-to-Transducers) the significant memory cost of the Transducer Joint calculation during training. We also discussed that NeMo provides a simple yet effective method to nearly sidestep this limitation. We can now dive deeper into understanding what precisely NeMo's Transducer framework will do to alleviate this memory consumption issue.The following sub-cells are **voluntary** and valuable for understanding the cause, effect, and resolution of memory issues in Transducer models. The content can be skipped if one is familiar with the topic, and it is only required to use the `fused batch step`. Transducer Memory reduction with Fused Batch stepThe following few cells explain why memory is an issue when training Transducer models and how NeMo tackles the issue with its Fused Batch step.The material can be read for a thorough understanding, otherwise, it can be skipped. Diving deeper into the memory costs of Transducer Joint-------One of the significant limitations of Transducers is the exorbitant memory cost of computing the Joint module. The Joint module is comprised of two steps. 1) Projecting the Acoustic and Transcription feature dimensions to some standard hidden dimension (specified by `model.model_defaults.joint_hidden`)2) Projecting this intermediate hidden dimension to the final vocabulary space to obtain the transcription.Take the following example.**BS**=32 ; **T** (after **2x** stride) = 800, **U** (with character encoding) = 400-450 tokens, Vocabulary size **V** = 28 (26 alphabet chars, space and apostrophe). Let the hidden dimension of the Joint model be 640 (Most Google Transducer papers use hidden dimension of 640).$ Memory \, (Hidden, \, gb) = 32 \times 800 \times 450 \times 640 \times 4 = 29.49 $ gigabytes (4 bytes per float). $ Memory \, (Joint, \, gb) = 32 \times 800 \times 450 \times 28 \times 4 = 1.290 $ gigabytes (4 bytes per float)-----**NOTE**: This is just for the forward pass! We need to double this memory to store gradients! This much memory is also just for the Joint model **alone**. Far more memory is required for the Prediction model as well as the large Acoustic model itself and its gradients!Even with mixed precision, that's $\sim 30$ GB of GPU RAM for just 1 part of the network + its gradients.--------- Simple methods to reduce memory consumption------The easiest way to reduce memory consumption is to perform more downsampling in the acoustic model and use sub-word tokenization of the text to reduce the length of the target sequence.**BS**=32 ; **T** (after **8x** stride) = 200, **U** (with sub-word encoding) = 100-180 tokens, Vocabulary size **V** = 1024.$ Memory \, (Hidden, \, gb) = 32 \times 200 \times 150 \times 640 \times 4 = 2.45 $ gigabytes (4 bytes per float).$ Memory \, (Joint, \, gb) = 32 \times 200 \times 150 \times 1024 \times 4 = 3.93 $ gigabytes (4 bytes per float)-----Using Automatic Mixed Precision, we expend just around 6-7 GB of GPU RAM on the Joint + its gradient.The above memory cost is much more tractable - but we generally want larger and larger acoustic models. It is consistently the easiest way to improve transcription accuracy. So that means on a limited 32 GB GPU, we have to partition 7 GB just for the Joint and remaining memory allocated between Transcription + Acoustic Models. 
Fused Transcription-Joint-Loss-WER (also called Batch Splitting)----------The fundamental problem is that the joint tensor grows in size when `[T x U]` grows in size. This growth in memory cost is due to many reasons - either by model construction (downsampling) or the choice of dataset preprocessing (character tokenization vs. sub-word tokenization).Another dimension that NeMo can control is **batch**. Due to how we batch our samples, small and large samples all get clumped together into a single batch. So even though the individual samples are not all as long as the maximum length of T and U in that batch, when a batch of such samples is constructed, it will consume a significant amount of memory for the sake of compute efficiency.So as is always the case - **trade-off compute speed for memory savings**.------The fused operation goes as follows : 1) Forward the entire acoustic model in a single pass. (Use global batch size here for acoustic model - found in `model.*_ds.batch_size`)2) Split the Acoustic Model's logits by `fused_batch_size` and loop over these sub-batches.3) Construct a sub-batch of same `fused_batch_size` for the Prediction model. Now the target sequence length is $U_{sub-batch} < U$. 4) Feed this $U_{sub-batch}$ into the Joint model, along with a sub-batch from the Acoustic model (with $T_{sub-batch} < T$). Remember, we only have to slice off a part of the acoustic model here since we have the full batch of samples $(B, T, D)$ from the acoustic model.5) Performing steps (3) and (4) yields $T_{sub-batch}$ and $U_{sub-batch}$. Perform sub-batch joint step - costing an intermediate $(B, T_{sub-batch}, U_{sub-batch}, V)$ in memory.6) Compute loss on sub-batch and preserve in a list to be later concatenated. 7) Compute sub-batch metrics (such as Character / Word Error Rate) using the above Joint tensor and sub-batch of ground truth labels. Preserve the scores to be averaged across the entire batch later.8) Delete the sub-batch joint matrix $(B, T_{sub-batch}, U_{sub-batch}, V)$. Only gradients from .backward() are preserved now in the computation graph.9) Repeat steps (3) - (8) until all sub-batches are consumed.10) Cleanup step. Compute full batch WER and log. Concatenate loss list and pass to PTL to compute the equivalent of the original (full batch) Joint step. Delete ancillary objects necessary for sub-batching. Setting up Fused Batch step in a Transducer ConfigAfter all that discussion above, let us look at how to enable that entire pipeline in NeMo.As we can note below, it takes precisely two changes in the config to enable the fused batch step:
###Code
print(OmegaConf.to_yaml(config.model.joint))
# Two lines to enable the fused batch step
config.model.joint.fuse_loss_wer = True
config.model.joint.fused_batch_size = 16 # this can be any value (preferably less than model.*_ds.batch_size)
# We will also reduce the hidden dimension of the joint and the prediction networks to preserve some memory
config.model.model_defaults.pred_hidden = 64
config.model.model_defaults.joint_hidden = 64
###Output
_____no_output_____
###Markdown
--------Finally, since the dataset is tiny, we do not need an enormous model (the default is roughly 40 M parameters!).
###Code
# Use just 128 filters across the model to speed up training and reduce parameter count
config.model.model_defaults.filters = 128
###Output
_____no_output_____
###Markdown
Initialize a Transducer ASR ModelFinally, let us create a Transducer model, which is as easy as changing a line of import if you already have a script to create CTC models. We will use a small model since the dataset is just 5 hours of speech. ------Setup a Pytorch Lightning Trainer:
###Code
import torch
from pytorch_lightning import Trainer
if torch.cuda.is_available():
gpus = 1
else:
gpus = 0
EPOCHS = 50
# Initialize a Trainer for the Transducer model
trainer = Trainer(gpus=gpus, max_epochs=EPOCHS,
checkpoint_callback=False, logger=False,
log_every_n_steps=5, check_val_every_n_epoch=10)
# Import the Transducer Model
import nemo.collections.asr as nemo_asr
# Build the model
model = nemo_asr.models.EncDecRNNTBPEModel(cfg=config.model, trainer=trainer)
model.summarize();
###Output
_____no_output_____
###Markdown
------We now have a Transducer model ready to be trained! (Optional) Partially loading pre-trained weights from another modelAn interesting point to note about Transducer models - the Acoustic model config (and therefore the Acoustic model itself) can be shared between CTC and Transducer models.This means that we can initialize the weights of a Transducer's Acoustic model with weights from a pre-trained CTC encoder model.------**Note**: This step is optional and not necessary at all to train a Transducer model. Below, we show the steps that we would take if we wanted to do this, however as the loaded model has different kernel sizes compared to the current model, the checkpoint cannot be loaded.
###Code
# Load a small CTC model
# ctc_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_256", map_location='cpu')
###Output
_____no_output_____
###Markdown
------Then load the state dict of the CTC model's encoder into the Transducer model's encoder.
###Code
# <<< NOTE: This is only for demonstration ! >>>
# Below cell will fail because the two models have incompatible kernel sizes in their Conv layers.
# <<< NOTE: Below cell is only shown to illustrate the method >>>
# model.encoder.load_state_dict(ctc_model.encoder.state_dict(), strict=True)
###Output
_____no_output_____
###Markdown
Training on AN4Now that the model is ready, we can finally train it!
###Code
# Prepare NeMo's Experiment manager to handle checkpoint saving and logging for us
from nemo.utils import exp_manager
# Environment variable generally used for multi-node multi-gpu training.
# In notebook environments, this flag is unnecessary and can cause logs of multiple training runs to overwrite each other.
os.environ.pop('NEMO_EXPM_VERSION', None)
exp_config = exp_manager.ExpManagerConfig(
exp_dir=f'experiments/',
name=f"Transducer-Model",
checkpoint_callback_params=exp_manager.CallbackParams(
monitor="val_wer",
mode="min",
always_save_nemo=True,
save_best_model=True,
),
)
exp_config = OmegaConf.structured(exp_config)
logdir = exp_manager.exp_manager(trainer, exp_config)
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir /content/experiments/Transducer-Model/
else:
print("To use TensorBoard, please use this notebook in a Google Colab environment.")
# Release resources prior to training
import gc
gc.collect()
if gpus > 0:
torch.cuda.empty_cache()
# Train the model
trainer.fit(model)
###Output
_____no_output_____
###Markdown
-------Let's check the final performance on the test set.
###Code
trainer.test(model)
###Output
_____no_output_____
###Markdown
------The model should obtain some score between 10-12% WER after 50 epochs of training. Quite a good score for just 50 epochs of training a tiny model! Note that these are greedy scores, yet they are pretty strong for such a short training run.We can further improve these scores by using the internal Prediction network to calculate beam scores. Changing the Decoding StrategyDuring training, for the sake of efficiency, we were using the `greedy_batch` decoding strategy. However, we might want to perform inference with another method - say, beam search.NeMo allows changing the decoding strategy easily after the model has been trained.
###Code
import copy
decoding_config = copy.deepcopy(config.model.decoding)
print(OmegaConf.to_yaml(decoding_config))
# Update the config for the decoding strategy
decoding_config.strategy = "alsd" # Options are `greedy`, `greedy_batch`, `beam`, `tsd` and `alsd`
decoding_config.beam.beam_size = 4 # Increase beam size for better scores, but it will take much longer for transcription !
# Finally update the model's decoding strategy !
model.change_decoding_strategy(decoding_config)
trainer.test(model)
###Output
_____no_output_____
###Markdown
------Here, we improved our scores significantly by using the `Alignment-Length Synchronous Decoding` beam search. Feel free to try the other algorithms and compare the speed-accuracy tradeoff! (Extra) Extracting Transducer Model Alignments Transducers are unique in the sense that for each timestep $t \le T$, they can emit multiple target tokens $u_t$. During training, this is represented as the $T \times U$ joint that maps to the vocabulary $V$. During inference, there is no need to compute the full joint $T \times U$. Instead, after the model predicts the `Transducer Blank` token at the current timestep $t$ while predicting the target token $u_t$, the model will move onto the next acoustic timestep $t + 1$. As such, we can obtain the diagonal alignment of the Transducer model per sample relatively simply.------**Note**: While alignments can be calculated for both greedy and beam search - it is non-trivial to incorporate this alignment information for beam decoding. Therefore NeMo only supports extracting alignments during greedy decoding. -----Restore model to greedy decoding for alignment calculation
###Code
decoding_config.strategy = "greedy_batch"
# Special flag which is generally disabled
# Instruct Greedy Decoders to preserve alignment information during autoregressive decoding
with open_dict(decoding_config):
decoding_config.preserve_alignments = True
model.change_decoding_strategy(decoding_config)
###Output
_____no_output_____
###Markdown
-------Set up a test data loader that we will use to obtain the alignments for a single batch.
###Code
test_dl = model.test_dataloader()
test_dl = iter(test_dl)
batch = next(test_dl)
device = torch.device('cuda' if gpus > 0 else 'cpu')
def rnnt_alignments(model, batch):
model = model.to(device)
encoded, encoded_len = model.forward(
input_signal=batch[0].to(device), input_signal_length=batch[1].to(device)
)
current_hypotheses = model.decoding.rnnt_decoder_predictions_tensor(
encoded, encoded_len, return_hypotheses=True
)
del encoded, encoded_len
# current hypothesis is a tuple of
# 1) best hypothesis
# 2) Sorted list of hypothesis (if using beam search); None otherwise
return current_hypotheses
# Get a batch of hypotheses, as well as a batch of all obtained hypotheses (if beam search is used)
hypotheses, all_hypotheses = rnnt_alignments(model, batch)
###Output
_____no_output_____
###Markdown
------Select a sample ID from within the batch to observe the alignment information contained in the Hypothesis.
###Code
# Select the sample ID from within the batch
SAMPLE_ID = 0
# Obtain the hypothesis for this sample, as well as some ground truth information about this sample
hypothesis = hypotheses[SAMPLE_ID]
original_sample_len = batch[1][SAMPLE_ID]
ground_truth = batch[2][SAMPLE_ID]
# The Hypothesis object contains a lot of useful information regarding the decoding step.
print(hypothesis)
###Output
_____no_output_____
###Markdown
-------Now, decode the hypothesis and compare it against the ground truth text. Note - this decoded hypothesis is at *sub-word* level for this model. Therefore sub-word tokens such as `_` may be seen here.
###Code
decoded_text = hypothesis.text
decoded_hypothesis = model.decoding.decode_ids_to_tokens(hypothesis.y_sequence.cpu().numpy().tolist())
decoded_ground_truth = model.decoding.tokenizer.ids_to_text(ground_truth.cpu().numpy().tolist())
print("Decoded ground truth :", decoded_ground_truth)
print("Decoded hypothesis :", decoded_text)
print("Decoded hyp tokens :", decoded_hypothesis)
###Output
_____no_output_____
###Markdown
---------Next we print out the 2-d alignment grid of the RNNT model:
###Code
alignments = hypothesis.alignments
# These two values should normally always match
print("Length of alignments (T): ", len(alignments))
print("Length of padded acoustic model after striding : ", int(hypothesis.length))
###Output
_____no_output_____
###Markdown
------Finally, let us calculate the alignment grid. We will de-tokenize the sub-word token if it is a valid index in the vocabulary and use `''` as a placeholder for the `Transducer Blank` token. Note that each `timestep` here corresponds to roughly $timestep * total\_stride\_of\_model * preprocessor.window\_stride$ seconds. **Note**: You can modify the value of `config.model.loss.warprnnt_numba_kwargs.fastemit_lambda` prior to training and see an impact on final alignment latency!
###Code
# Compute the alignment grid
for ti in range(len(alignments)):
t_u = []
for uj in range(len(alignments[ti])):
token = alignments[ti][uj]
token = token.to('cpu').numpy().tolist()
decoded_token = model.decoding.decode_ids_to_tokens([token])[0] if token != model.decoding.blank_id else '' # token at index len(vocab) == RNNT blank token
t_u.append(decoded_token)
print(f"Tokens at timestep {ti} = {t_u}")
###Output
_____no_output_____ |
jupyter_book/book_template/content/features/interactive_cells.ipynb | ###Markdown
Interactive code in your bookSometimes you'd rather let people interact with code *directly on the page*instead of sending them off to a Binder or a JupyterHub. There are currentlya few ways to make this happen in Jupyter Book (both of which are experimental).This page describes how to bring interactivity to your book. Both of thesetools use [**MyBinder**](https://mybinder.org) to provide a remote kernel. Making your page inputs interactive✨**experimental**✨If you'd like to provide interactivity for your content without making your readersleave the Jupyter Book site, you can use a project called [Thebelab](https://github.com/minrk/thebelab).This provides you a button that, when clicked, will convert each code cell intoan **interactive** cell that can be edited. It also adds a "run" button to each cell,and connects to a Binder kernel running in the cloud.As an alternative to pressing the Thebelab button at the top of the page, you can press the symbol in the top right corner of each code cell to start the interactive mode.To add a Thebelab button to your Jupyter Book pages, use the following configuration:```yamluse_thebelab_button : true If 'true', display a button to allow in-page running code cells with Thebelab```In addition, you can configure the Binder settings that are used to provide a kernel forThebelab to run the code. These use the same configuration fields as the BinderHub interactbuttons described above.For an example, click the **Thebelab** button above on this page, and run the code below.
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
x = np.arange(500)
y = np.random.randn(500)
fig, ax = plt.subplots()
ax.scatter(x, y, c=y, s=x)
###Output
_____no_output_____
###Markdown
Using interactive widgets on your page✨**experimental**✨[**nbinteract**](https://www.nbinteract.com) is a tool for displaying interactive widgets in yourstatic HTML page. It uses a Binder kernel to power the widgets, and displays output that yourreaders can interact with. For example, below we will show a simple matplotlib plot that can be madeinteractive with **ipywidgets**To add a **Show Widgets** button to your Jupyter Book pages, use the following configuration:```yamluse_show_widgets_button : true If 'true', display a button to show widgets backed by a Binder kernel```Then, tell Jupyter Book that you want a cell to display a widget by **adding a tag** to the cell'smetadata called `interactive`. When a reader clicks on the "show widgets" button, any cellswith this tag will be run on Binder, and have their output widgets displayed underneath the cell.Here's an example of cell metadata that would trigger this behavior:```json{ "tags": [ "interactive", ]}```You can configure the Binder settings that are used to provide a kernel to run the code.These use the same configuration fields as the BinderHub interact buttons described above.Clicking on "show widgets" should display a widget below. We've hidden the code cell thatgenerates the widget by default (though you can always show it by clicking the button tothe right!
###Code
from ipywidgets import interact, FloatSlider
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display, HTML
plt.ion()
x = np.arange(500)
y = np.random.randn(500)
def update_plot_size(s, cmap):
if cmap == "jet":
display(HTML("<h2 style='color: red; margin: 0px auto;'>Nope</h2>"))
return
fig, ax = plt.subplots()
ax.scatter(x, y, c=y, s=x*s, cmap=cmap)
interact(update_plot_size, s=FloatSlider(value=1, min=.1, max=2, step=.1), cmap=['viridis', 'magma', 'jet']);
###Output
_____no_output_____
###Markdown
Interactive code in your bookSometimes you'd rather let people interact with code *directly on the page*instead of sending them off to a Binder or a JupyterHub. There are currentlya few ways to make this happen in Jupyter Book (both of which are experimental).This page describes how to bring interactivity to your book. Both of thesetools use [**MyBinder**](https://mybinder.org) to provide a remote kernel. Making your page inputs interactive✨**experimental**✨If you'd like to provide interactivity for your content without making your readersleave the Jupyter Book site, you can use a project called [Thebelab](https://github.com/minrk/thebelab).This provides you a button that, when clicked, will convert each code cell intoan **interactive** cell that can be edited. It also adds a "run" button to each cell,and connects to a Binder kernel running in the cloud.As an alternative to pressing the Thebelab button at the top of the page, you can press the symbol in the top right corner of each code cell to start the interactive mode.To add a Thebelab button to your Jupyter Book pages, use the following configuration:```yamluse_thebelab_button : true If 'true', display a button to allow in-page running code cells with Thebelab```In addition, you can configure the Binder settings that are used to provide a kernel forThebelab to run the code. These use the same configuration fields as the BinderHub interactbuttons described above.For an example, click the **Thebelab** button above on this page, and run the code below.
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
x = np.arange(500)
y = np.random.randn(500)
fig, ax = plt.subplots()
ax.scatter(x, y, c=y, s=x)
###Output
_____no_output_____
###Markdown
Using interactive widgets on your page✨**experimental**✨[**nbinteract**](https://www.nbinteract.com) is a tool for displaying interactive widgets in yourstatic HTML page. It uses a Binder kernel to power the widgets, and displays output that yourreaders can interact with. For example, below we will show a simple matplotlib plot that can be madeinteractive with **ipywidgets**To add a **Show Widgets** button to your Jupyter Book pages, use the following configuration:```yamluse_show_widgets_button : true If 'true', display a button to show widgets backed by a Binder kernel```Then, tell Jupyter Book that you want a cell to display a widget by **adding a tag** to the cell'smetadata called `interactive`. When a reader clicks on the "show widgets" button, any cellswith this tag will be run on Binder, and have their output widgets displayed underneath the cell.Here's an example of cell metadata that would trigger this behavior:```json{ "tags": [ "interactive", ]}```You can configure the Binder settings that are used to provide a kernel to run the code.These use the same configuration fields as the BinderHub interact buttons described above.Clicking on "show widgets" should display a widget below. We've hidden the code cell thatgenerates the widget by default (though you can always show it by clicking the button tothe right!
###Code
from ipywidgets import interact, FloatSlider
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display, HTML
plt.ion()
x = np.arange(500)
y = np.random.randn(500)
def update_plot_size(s, cmap):
if cmap == "jet":
display(HTML("<h2 style='color: red; margin: 0px auto;'>Nope</h2>"))
return
fig, ax = plt.subplots()
ax.scatter(x, y, c=y, s=x*s, cmap=cmap)
interact(update_plot_size, s=FloatSlider(value=1, min=.1, max=2, step=.1), cmap=['viridis', 'magma', 'jet']);
###Output
_____no_output_____
###Markdown
Interactive code in your bookSometimes you'd rather let people interact with code *directly on the page*instead of sending them off to a Binder or a JupyterHub. There are currentlya few ways to make this happen in Jupyter Book (both of which are experimental).This page describes how to bring interactivity to your book. Both of thesetools use [**MyBinder**](https://mybinder.org) to provide a remote kernel. Making your page inputs interactive✨**experimental**✨If you'd like to provide interactivity for your content without making your readersleave the Jupyter Book site, you can use a project called [Thebelab](https://github.com/minrk/thebelab).This provides you a button that, when clicked, will convert each code cell intoan **interactive** cell that can be edited. It also adds a "run" button to each cell,and connects to a Binder kernel running in the cloud.As an alternative to pressing the Thebelab button at the top of the page, you can press the symbol in the top right corner of each code cell to start the interactive mode.To add a Thebelab button to your Jupyter Book pages, use the following configuration:```yamluse_thebelab_button : true If 'true', display a button to allow in-page running code cells with Thebelab```In addition, you can configure the Binder settings that are used to provide a kernel forThebelab to run the code. These use the same configuration fields as the BinderHub interactbuttons described above.For an example, click the **Thebelab** button above on this page, and run the code below.
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
x = np.arange(500)
y = np.random.randn(500)
fig, ax = plt.subplots()
ax.scatter(x, y, c=y, s=x)
###Output
_____no_output_____
###Markdown
Running cells in Thebelab when it is initializedSometimes you'd like to initialize the kernel that Thebelab uses by runningsome code ahead of time. This might be code that you then hide from the userin order to narrow the focus of what they interact with. This is possibleby using Jupyter Notebook tags.Adding the tag `thebelab-init` to any code cell will cause Thebelab torun this cell after it has received a kernel. Any subsequent Thebelab cellswill have access to the same environment (e.g. any module imports made in theinitialization cell).You can then pair this with something like `hide_input` in order to runinitialization code that your user doesn't immediately see. For example,below we'll initialize a variable in a hidden cell, and then tell anothercell to print the output of that variable.
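For instance, the hidden initialization cell below could carry cell metadata along these lines (mirroring the `interactive` example earlier; `thebelab-init` and `hide_input` are the tag names mentioned above):

```json
{
  "tags": [
    "thebelab-init",
    "hide_input"
  ]
}
```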
###Code
my_hidden_variable = 'wow, it worked!'
# The variable for this is defined in the cell above!
print(my_hidden_variable)
###Output
_____no_output_____
###Markdown
Using interactive widgets on your page✨**experimental**✨[**nbinteract**](https://www.nbinteract.com) is a tool for displaying interactive widgets in yourstatic HTML page. It uses a Binder kernel to power the widgets, and displays output that yourreaders can interact with. For example, below we will show a simple matplotlib plot that can be madeinteractive with **ipywidgets**To add a **Show Widgets** button to your Jupyter Book pages, use the following configuration:```yamluse_show_widgets_button : true If 'true', display a button to show widgets backed by a Binder kernel```Then, tell Jupyter Book that you want a cell to display a widget by **adding a tag** to the cell'smetadata called `interactive`. When a reader clicks on the "show widgets" button, any cellswith this tag will be run on Binder, and have their output widgets displayed underneath the cell.Here's an example of cell metadata that would trigger this behavior:```json{ "tags": [ "interactive", ]}```You can configure the Binder settings that are used to provide a kernel to run the code.These use the same configuration fields as the BinderHub interact buttons described above.Clicking on "show widgets" should display a widget below. We've hidden the code cell thatgenerates the widget by default (though you can always show it by clicking the button tothe right!
###Code
from ipywidgets import interact, FloatSlider
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display, HTML
plt.ion()
x = np.arange(500)
y = np.random.randn(500)
def update_plot_size(s, cmap):
if cmap == "jet":
display(HTML("<h2 style='color: red; margin: 0px auto;'>Nope</h2>"))
return
fig, ax = plt.subplots()
ax.scatter(x, y, c=y, s=x*s, cmap=cmap)
interact(update_plot_size, s=FloatSlider(value=1, min=.1, max=2, step=.1), cmap=['viridis', 'magma', 'jet']);
###Output
_____no_output_____ |
module4-logistic-regression/YuanjinRen_LS_DS_214_assignment.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 4*--- Logistic Regression Assignment 🌯You'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'?> We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions.- [ ] Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your model's validation accuracy. (Multiple times if you try multiple iterations.)- [ ] Get your model's test accuracy. (One time, at the end.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals- [ ] Add your own stretch goal(s) !- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Load data downloaded from https://srcole.github.io/100burritos/
import pandas as pd
df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
# Derive binary classification target:
# We define a 'Great' burrito as having an
# overall rating of 4 or higher, on a 5 point scale.
# Drop unrated burritos.
df = df.dropna(subset=['overall'])
df['Great'] = df['overall'] >= 4
# Clean/combine the Burrito categories
df['Burrito'] = df['Burrito'].str.lower()
california = df['Burrito'].str.contains('california')
asada = df['Burrito'].str.contains('asada')
surf = df['Burrito'].str.contains('surf')
carnitas = df['Burrito'].str.contains('carnitas')
df.loc[california, 'Burrito'] = 'California'
df.loc[asada, 'Burrito'] = 'Asada'
df.loc[surf, 'Burrito'] = 'Surf & Turf'
df.loc[carnitas, 'Burrito'] = 'Carnitas'
df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'
# Drop some high cardinality categoricals
df = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood'])
# Drop some columns to prevent "leakage"
df = df.drop(columns=['Rec', 'overall'])
df.shape
df.head(5)
#Do train/validate/test split.
#Train on reviews from 2016 & earlier.
#Validate on 2017
#Test on 2018 & later.
df['Date'] = pd.to_datetime(df['Date'], infer_datetime_format=True)
train = df[df['Date']<'2017-01-01']
val = df[df['Date'].dt.year == 2017]
test = df[df['Date'] >= '2018-01-01']  # use '>=' so reviews from 2018-01-01 onward ("2018 & later") are included
train.shape, val.shape, test.shape
#Begin with baselines for classification
target = 'Great'
y_train = train[target]
majority_class = y_train.mode()[0] # majority class in training set is 'Not Great'
majority_class
y_train_pred = [majority_class] * len(y_train)
from sklearn.metrics import accuracy_score, mean_absolute_error
accuracy_score(y_train, y_train_pred)
y_val = val[target]
y_val_pred = [majority_class] * len(y_val)
accuracy_score(y_val, y_val_pred)
y_test = test[target]
y_test_pred = [majority_class] * len(y_test)
accuracy_score(y_test, y_test_pred)
#Use scikit-learn for logistic regression
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
train.dtypes
features = ['Yelp', 'Tortilla','Fillings', 'Cost', 'Cheese','Lobster', 'Avocado']
# pick 7 features to do the following prediction
X_train = train[features]
X_val = val[features]
X_test = test[features]
X_train.shape, X_val.shape, X_test.shape
encoder = ce.one_hot.OneHotEncoder(use_cat_names=True)
X_train_enc = encoder.fit_transform(X_train)
X_val_enc = encoder.transform(X_val)
X_test_enc = encoder.transform(X_test)
X_train_enc.shape, X_val_enc.shape, X_test_enc.shape
X_train_enc.head()
imputer = SimpleImputer()
X_train_imp = imputer.fit_transform(X_train_enc)
X_val_imp = imputer.transform(X_val_enc)
X_test_imp = imputer.transform(X_test_enc)
X_train_imp.shape, X_val_imp.shape, X_test_imp.shape
scaler = StandardScaler()
X_train_sc = scaler.fit_transform(X_train_imp)
X_val_sc = scaler.transform(X_val_imp)
X_test_sc = scaler.transform(X_test_imp)
X_train_sc = pd.DataFrame(X_train_sc, columns=X_train_enc.columns)
X_val_sc = pd.DataFrame(X_val_sc,columns=X_val_enc.columns)
X_test_sc = pd.DataFrame(X_test_sc, columns= X_test_enc.columns)
X_train_sc.head()
model = LogisticRegression()
model.fit(X_train_sc, y_train)
y_mval_pred = model.predict(X_val_sc)
y_mtest_pred = model.predict(X_test_sc)
print(f'Validation accuracy: {accuracy_score(y_val,y_mval_pred)}') # validation set accuracy
print(f'Test accuracy: {accuracy_score(y_test, y_mtest_pred)}') # testing set accuracy
# plot coefficients
coefs = model.coef_[0]
coefs = pd.Series(coefs, X_train_sc.columns)
coefs
coefs.sort_values().plot.barh();
# show final test result
test
result = test[['Burrito','Date']].copy()
result
result['Great'] = y_mtest_pred
result
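# Stretch-goal sketch (assumption: reuses the feature list and transformer classes defined above):
# the manual encode -> impute -> scale -> fit steps can be chained into a single scikit-learn
# Pipeline so the exact same preprocessing is applied to train, validation and test data.
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
    ce.one_hot.OneHotEncoder(use_cat_names=True),  # one-hot encode the categorical features
    SimpleImputer(),                               # fill missing values with column means
    StandardScaler(),                              # standardize all features
    LogisticRegression()                           # final classifier
)
pipeline.fit(X_train, y_train)
print(f'Pipeline validation accuracy: {pipeline.score(X_val, y_val)}')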
###Output
_____no_output_____ |
examples/02_2_deeptabular_models.ipynb | ###Markdown
The `deeptabular` component In the previous notebook I described the linear model (`Wide`) and the standard text and image classification and regression models (`DeepText` and `DeepImage`) that can be used as the `wide`, `deeptext` and `deepimage` components respectively when building a `WideDeep` model. In this notebook I will describe the different models (or architectures) available in `pytorch-widedeep` that can be used as the `deeptabular` model. Note that the `deeptabular` model alone is what would normally be referred to as Deep Learning for tabular data. As I mentioned in previous notebooks, each component can be used independently. Therefore, if you wanted to use `deeptabular` alone it is perfectly possible. There are just a couple of simple requirements that will be covered in a later notebook. The models available in `pytorch-widedeep` as the `deeptabular` component are: 1. `TabMlp` 2. `TabResnet` 3. `Tabnet` 4. `TabTransformer` 5. `FT-Transformer` (which is a simple variation of the `TabTransformer`) 6. `SAINT`. Let's have a close look at the 6 of them. 1. `TabMlp` `TabMlp` is the simplest architecture and is very similar to the tabular model available in the fantastic fastai library. In fact, the implementation of the dense layers of the MLP is mostly identical to that in that library. The figure below illustrates the `TabMlp` architecture: The dashed-border boxes indicate that these components are optional. For example, we could use `TabMlp` without categorical components, or without continuous components, if we wanted.
###Code
import torch
from pytorch_widedeep.models import TabMlp
# ?TabMlp
###Output
_____no_output_____
###Markdown
Let's have a look at a model and one example
###Code
colnames = ["a", "b", "c", "d", "e"]
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
embed_input = [(u, i, j) for u, i, j in zip(colnames[:4], [4] * 4, [8] * 4)]
column_idx = {k: v for v, k in enumerate(colnames)}
tabmlp = TabMlp(
mlp_hidden_dims=[8, 4],
continuous_cols=["e"],
column_idx=column_idx,
embed_input=embed_input,
cont_norm_layer="batchnorm",
)
out = tabmlp(X_tab)
tabmlp
###Output
_____no_output_____
###Markdown
Note that the input dimension of the MLP is `33`, `32` from the embeddings and `1` for the continuous features. Before we move on, it is worth commenting on an aspect that applies to all models discussed here. The `TabPreprocessor` included in this package gives the user the possibility of standardising the input via `sklearn`'s `StandardScaler`. Alternatively, or in addition to it, it is possible to add a continuous normalization layer (`BatchNorm1d` or `LayerNorm`). To do so simply set the `cont_norm_layer` parameter as indicated in the example above. See also the docs. I will insist on this here and in the following sections: note that `TabMlp` (or any of the wide and deep components) does not build the final connection to the output neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s). For example:
###Code
from pytorch_widedeep.models import WideDeep
wd_model = WideDeep(deeptabular=tabmlp, pred_dim=1)
wd_model
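# A minimal sketch of a forward pass, assuming WideDeep's forward method accepts a dict
# keyed by component name (as used internally by the library's Trainer); in practice you
# would train wd_model with pytorch_widedeep's Trainer rather than calling it directly.
out = wd_model({"deeptabular": X_tab})
out.shape  # expected: torch.Size([5, 1]), i.e. one prediction per input row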
###Output
_____no_output_____
###Markdown
voila 2. `TabResnet` `TabResnet` is very similar to `TabMlp`, but the embeddings (or the concatenation of embeddings and continuous features) are passed through a series of Resnet blocks built with dense layers. This is probably the most flexible `deeptabular` component in terms of the many variants one can define via the parameters. Let's have a look at the architecture: The dashed-border boxes indicate that the component is optional and the dashed lines indicate the different paths or connections present depending on which components we decide to include. For example, we could choose to concatenate the continuous features, normalized or not via a `BatchNorm1d` layer, with the embeddings and pass the result of such a concatenation through the series of Resnet blocks. Alternatively, we might prefer to concatenate the continuous features with the results of passing the embeddings through the Resnet blocks. Another optional component is the MLP before the output neuron(s). If no MLP is present, the output from the Resnet blocks or the result of concatenating that output with the continuous features (normalised or not) will be connected directly to the output neuron(s). Each Resnet block comprises the following operations: For more details see [`pytorch_widedeep/models/tab_resnet.BasicBlock`](https://github.com/jrzaurin/pytorch-widedeep/blob/master/pytorch_widedeep/models/tab_resnet.py). Let's have a look at an example now:
###Code
from pytorch_widedeep.models import TabResnet
# ?TabResnet
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ["a", "b", "c", "d", "e"]
embed_input = [(u, i, j) for u, i, j in zip(colnames[:4], [4] * 4, [8] * 4)]
column_idx = {k: v for v, k in enumerate(colnames)}
tabresnet = TabResnet(
blocks_dims=[16, 16, 16],
column_idx=column_idx,
embed_input=embed_input,
continuous_cols=["e"],
cont_norm_layer="layernorm",
concat_cont_first=False,
mlp_hidden_dims=[16, 4],
mlp_dropout=0.5,
)
out = tabresnet(X_tab)
tabresnet
###Output
_____no_output_____
###Markdown
As we can see, first the embeddings are concatenated (resulting in a tensor of dim ($*$, 32)) and are projected (or resized, which happens in `lin1` and `bn1`) to the input dimension of the Resnet block (16). Then we have the two Resnet blocks defined by the sequence `[INP1 (16) -> OUT1 == INP2 (16) -> OUT2 (16)]`. Finally the output from the Resnet blocks is concatenated and passed to the MLP. As I mentioned earlier, note that `TabResnet` does not build the connection to the output neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s). 3. `Tabnet` Details on this architecture can be found in [TabNet: Attentive Interpretable Tabular Learning](https://arxiv.org/pdf/1908.07442.pdf). This is not a simple algorithm. Therefore, I strongly recommend reading the paper. In general terms, `Tabnet` takes the embeddings from the categorical columns and the continuous columns (standardised or not) that are then passed through a series of `Steps`. Each `Step` involves a so-called Attentive Transformer and a Feature Transformer, combined with masking and a `Relu` non-linearity. This is shown in the figure below, directly taken from the paper. The part of the diagram drawn as $[FC \rightarrow out]$ would be what in other figures I draw as $[MLP -> output \space neuron]$. Note that in the paper the authors use an encoder-decoder architecture and apply a routine that involves unsupervised pre-training plus supervised fine-tuning. However, the authors found that unsupervised pre-training is useful when the data size is very small and/or there is a large number of unlabeled observations. This result is consistent with those obtained by subsequent papers using the same approach. `pytorch-widedeep` was conceived as a library to use wide and deep models with tabular data, images and text for supervised learning (regression or classification). Therefore, I decided to implement only the encoder architecture of this model, and the transformer-based models. If you want more details on each component I recommend reading the paper and having a look at the implementation by the guys at [dreamquark-ai](https://github.com/dreamquark-ai/tabnet). In fact, and let me make this clear, **the Tabnet implementation in this package is mostly a copy and paste from the one in the dreamquark-ai library**. Simply, I have adapted it to work with wide and deep models and I have added a few extras, such as being able to add dropout in the GLU blocks or to not use Ghost batch normalization. Enough writing, let's have a look at the code
###Code
from pytorch_widedeep.models import TabNet
# ?TabNet
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ["a", "b", "c", "d", "e"]
embed_input = [(u, i, j) for u, i, j in zip(colnames[:4], [4] * 4, [8] * 4)]
column_idx = {k: v for v, k in enumerate(colnames)}
tabnet = TabNet(
column_idx=column_idx,
embed_input=embed_input,
continuous_cols=["e"],
cont_norm_layer="batchnorm",
ghost_bn=False,
)
out = tabnet(X_tab)
tabnet
###Output
_____no_output_____
###Markdown
4 The transformers family For a tour of all transformer-based models, please see the Transformer Family Notebook. All the content below is in that notebook. 4.1 `TabTransformer` Details on the `TabTransformer` can be found in [TabTransformer: Tabular Data Modeling Using Contextual Embeddings](https://arxiv.org/pdf/2012.06678.pdf). In general terms, the `TabTransformer` takes the embeddings from the categorical columns that are then passed through a Transformer encoder, concatenated with the normalised continuous features, and then passed through an MLP. Let's have a look: The dashed-border boxes indicate that the component is optional. In terms of the Transformer block, I am sure at this stage the reader has seen every possible diagram of The Transformer, its multihead attention etc, so I thought about drawing something that more closely resembles the actual execution/code for each block. Note that this implementation assumes that the so-called `inner-dim` (aka the projection dimension) is the same as the `dimension of the model` or, in this case, the embedding dimension. Relaxing this assumption is relatively easy and programmatically would involve including one more parameter in the `TabTransformer` class. For now, and consistent with other Transformer implementations, I will assume `inner-dim = dimension of the model`. Also, and again consistent with other implementations, I assume that the Keys, Queries and Values are of the same `dim`. A variant of the `TabTransformer` is the `FT-Transformer`, introduced in [Revisiting Deep Learning Models for Tabular Data](https://arxiv.org/pdf/2106.11959.pdf). The two main additions were continuous embeddings and Linear Attention. Continuous embeddings were already introduced in the `SAINT` paper: [SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training](https://arxiv.org/pdf/2106.01342.pdf). The architecture of the `FT-Transformer` is identical to that of the `TabTransformer` with the exception that the continuous cols are each passed through a 1-layer MLP, with or without activation function (referred to in the figure below as `Cont Embeddings`), before being concatenated with the categorical embeddings. There is a dedicated `FTTransformer` model in the library that one can check in the `Transformers Family` notebook. Nonetheless, using the `TabTransformer` with continuous embeddings is as easy as setting the param `embed_continuous` to `True`. In addition, I have also added the possibility of pooling all outputs from the transformer blocks using the `[CLS]` token. Otherwise all the outputs from the transformer blocks will be concatenated. Look at some of the other example notebooks for more details.
###Code
from pytorch_widedeep.models import TabTransformer
# ?TabTransformer
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ["a", "b", "c", "d", "e"]
embed_input = [(u, i) for u, i in zip(colnames[:4], [4] * 4)]
continuous_cols = ["e"]
column_idx = {k: v for v, k in enumerate(colnames)}
tab_transformer = TabTransformer(
column_idx=column_idx, embed_input=embed_input, continuous_cols=continuous_cols
)
out = tab_transformer(X_tab)
tab_transformer
tab_transformer = TabTransformer(
column_idx=column_idx,
embed_input=embed_input,
continuous_cols=continuous_cols,
embed_continuous=True,
embed_continuous_activation="relu",
)
out = tab_transformer(X_tab)
tab_transformer
###Output
_____no_output_____
###Markdown
Finally, and as I mentioned earlier, note that the `TabTransformer` class does not build the connection to the output neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s). 6. `SAINT` Details on `SAINT` (Self-Attention and Intersample Attention Transformer) can be found in [SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training](https://arxiv.org/pdf/2106.01342.pdf). The main contribution of the SAINT model is the addition of an intersample attention block. In case you wonder what this mysterious "inter-sample attention" is: simply put, it is the exact same mechanism as the well-known self-attention, but instead of features attending to each other, here it is observations/rows that attend to each other. If you want to understand in more detail what the advantages of using this mechanism are, I strongly encourage you to read the paper. Effectively, all that one needs to do is to reshape the input tensors of the transformer blocks and "off we go". `pytorch-widedeep`'s implementation is partially based on the [original code release](https://github.com/somepago/saint) (and the word "*partially*" is used deliberately here, in the sense that there are notable differences, but in essence it is the same implementation described in the paper). Let's have a look at some code
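Before that, to make the reshaping idea concrete, here is a tiny illustrative sketch (not the library's internal code; the shapes and names below are assumptions for a batch of already-embedded rows): ordinary self-attention runs across the feature/token dimension within each row, whereas intersample attention flattens each row into a single "token" so that attention runs across the rows of the batch.

```python
import torch

bsz, n_feats, embed_dim = 8, 5, 16
x = torch.randn(bsz, n_feats, embed_dim)          # (batch, features, embedding dim)

# self-attention view: the 5 feature embeddings of each row attend to each other
x_self = x                                        # shape (8, 5, 16)

# intersample view: flatten each row's features so that rows attend to each other
x_inter = x.reshape(1, bsz, n_feats * embed_dim)  # shape (1, 8, 80)
```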
###Code
from pytorch_widedeep.models import SAINT
# ?SAINT
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ["a", "b", "c", "d", "e"]
embed_input = [(u, i) for u, i in zip(colnames[:4], [4] * 4)]
continuous_cols = ["e"]
column_idx = {k: v for v, k in enumerate(colnames)}
saint = SAINT(
column_idx=column_idx,
embed_input=embed_input,
continuous_cols=continuous_cols,
embed_continuous_activation="leaky_relu",
)
out = saint(X_tab)
saint
###Output
_____no_output_____
###Markdown
The `deeptabular` componentIn the previous notebook I described the linear model (`Wide`) and the standard text classification and regression models (`DeepText` and `DeepImage`) that can be used as the `wide`, `deeptext` and `deepimage` components respectively when building a `WideDeep` model. In this notebook I will describe the 3 models (or architectures) available in `pytorch-widedeep` that can be used as the `deeptabular` model. Note that the `deeptabular` model alone is what normally would be referred as Deep Learning for tabular data. As I mentioned in previous notebooks, each component can be used independently. Therefore, if you wanted to use `deeptabular` alone it is perfectly possible. There are just a couple of simple requirement that will be covered in a later notebook.The 3 models available in `pytorch-widedeep` as the `deeptabular` are:1. `TabMlp`2. `TabResnet`3. `TabTransformer`Let's have a close look to the 3 of them 1. `TabMlp``TabMlp` is the simples architecture and is very similar to the tabular model available in the fantastic fastai library. In fact, the implementation of the dense layers of the MLP is mostly identical to that in that library.The figure below illustrate the `TabMlp` architecture:The dashed-border boxes indicate that these components are optional. For example, we could use `TabMlp` without categorical components, or without continuous components, if we wanted.
###Code
import torch
from pytorch_widedeep.models import TabMlp
?TabMlp
###Output
_____no_output_____
###Markdown
Let's have a look to a model and one example
###Code
colnames = ['a', 'b', 'c', 'd', 'e']
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
embed_input = [(u,i,j) for u,i,j in zip(colnames[:4], [4]*4, [8]*4)]
column_idx = {k:v for v,k in enumerate(colnames)}
tabmlp = TabMlp(mlp_hidden_dims=[8,4], continuous_cols=['e'], column_idx=column_idx,
embed_input=embed_input, batchnorm_cont=True)
out = tabmlp(X_tab)
tabmlp
###Output
_____no_output_____
###Markdown
Note that the input dimension of the MLP is `33`, `32` from the embeddings and `1` for the continuous features. Before we move on, is worth commenting an aspect that applies to the three models discussed here. The `TabPreprocessor` included in this package gives the user the possibility of standarising the input via `sklearn`'s `StandardScaler`. Alternatively, or in addition to it, it is possible to add a `BatchNorm1d` layer to normalise continuous columns within `TabMlp`. To do so simply set the `batchnorm_cont` parameter as `True` when defining the model, as indicated in the example above.I will insist on this in this and the following sections. Note that `TabMlp` (or any of the wide and deep components) does not build the final connection with the final neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s).For example:
###Code
from pytorch_widedeep.models import WideDeep
wd_model = WideDeep(deeptabular=tabmlp, pred_dim=1)
wd_model
###Output
_____no_output_____
###Markdown
voila 2. `TabResnet``TabResnet` is very similar to `TabMlp`, but the embeddings (or the concatenation of embeddings and continuous features) are passed through a series of Resnet blocks built with dense layers. This is probably the most flexible `deeptabular` component in terms of the many variants one can define via the parameters. Let's have a look to the architecture:The dashed-border boxes indicate the the component is optional and the dashed lines indicate the different paths or connections present depending on which components we decide to include. For example, we could chose to concatenate the continuous features, normalized or not via a `BatchNorm1d` layer, with the embeddings and pass the result of such a concatenation trough the series of Resnet blocks. Alternatively, we might prefer to concatenate the continuous features with the results of passing the embeddings through the Resnet blocks. Another optional component is the MLP before the output neuron(s). If not MLP is present, the output from the Resnet blocks or the results of concatenating that output with the continuous features (normalised or not) will be connected directly to the output neuron(s). Each Resnet block is comprised by the following operations:For more details see [`pytorch_widedeep/models/tab_resnet.BasicBlock`](https://github.com/jrzaurin/pytorch-widedeep/blob/master/pytorch_widedeep/models/tab_resnet.py). Let's have a look to an example now:
###Code
from pytorch_widedeep.models import TabResnet
?TabResnet
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i,j) for u,i,j in zip(colnames[:4], [4]*4, [8]*4)]
column_idx = {k:v for v,k in enumerate(colnames)}
tabresnet = TabResnet(blocks_dims=[16,16,16],
column_idx=column_idx,
embed_input=embed_input,
continuous_cols = ['e'],
batchnorm_cont = True,
concat_cont_first = False,
mlp_hidden_dims = [16, 4],
mlp_dropout = 0.5)
out = tabresnet(X_tab)
tabresnet
###Output
_____no_output_____
###Markdown
As we can see, first the embeddings are concatenated (resulting in a tensor of dim ($*$, 32) and are projected (or resized, which happens in `lin1` and `bn1`) to the input dimension of the Resnet block (16). The we have the two Resnet blocks defined by the sequence `[INP1 (16) -> OUT1 == INP2 (16) -> OUT2 (16)]`. Finally the output from the Resnet blocks is concatenated and passed to the MLP. As I mentioned earlier, note that `TabResnet` does not build the connection to the output neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s). 3. `TabTransformer`Details on this architecture can be found in [TabTransformer: Tabular Data ModelingUsing Contextual Embeddings](https://arxiv.org/pdf/2012.06678.pdf). Also, there are so many variants and details that I thought it deserves its own post. Therefore, if you want to dive properly into the use of the Transformer for tabular data I recommend to read the paper and the post (probably in that order). In general terms, `TabTransformer` takes the embeddings from the categorical columns that are then passed through a Tranformer encoder, concatenated with the normalised continuous features, and then passed through an MLP. Let's have a look:The dashed-border boxes indicate the the component is optional. In terms of the Transformer block, I am sure at this stage the reader has seen every possible diagram of The Transformer, its multihead attention etc, so I thought about drawing something that resembles more to the actual execution/code for each block. Note that this implementation assumes that the so called `inner-dim` (aka the projection dimension) is the same as the `dimension of the model` or, in this case, embedding dimension. Relaxing this assumption is relatively easy and programatically would involve including one parameter more in the `TabTransformer` class. For now, and consistent with other Transformer implementations, I will assume `inner-dim = dimension of the model`. Also, and again consistent other implementations, I assume that the Keys, Queries and Values are of the same `dim`. Enough writing, let's have a look to the code
###Code
from pytorch_widedeep.models import TabTransformer
?TabTransformer
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i) for u,i in zip(colnames[:4], [4]*4)]
continuous_cols = ['e']
column_idx = {k:v for v,k in enumerate(colnames)}
tab_transformer = TabTransformer(column_idx=column_idx, embed_input=embed_input, continuous_cols=continuous_cols)
out = tab_transformer(X_tab)
tab_transformer
###Output
_____no_output_____
###Markdown
The `deeptabular` componentIn the previous notebook I described the linear model (`Wide`) and the standard text and image classification and regression models (`DeepText` and `DeepImage`) that can be used as the `wide`, `deeptext` and `deepimage` components respectively when building a `WideDeep` model. In this notebook I will describe the different models (or architectures) available in `pytorch-widedeep` that can be used as the `deeptabular` model. Note that the `deeptabular` model alone is what normally would be referred as Deep Learning for tabular data. As I mentioned in previous notebooks, each component can be used independently. Therefore, if you wanted to use `deeptabular` alone it is perfectly possible. There are just a couple of simple requirement that will be covered in a later notebook.The models available in `pytorch-widedeep` as the `deeptabular` component are:1. `TabMlp`2. `TabResnet`3. `Tabnet`4. `TabTransformer`5. `FT-Tabransformer` (which is a simple variation of the `TabTransformer`)6. `SAINT`Let's have a close look to the 6 of them 1. `TabMlp``TabMlp` is the simples architecture and is very similar to the tabular model available in the fantastic fastai library. In fact, the implementation of the dense layers of the MLP is mostly identical to that in that library.The figure below illustrate the `TabMlp` architecture:The dashed-border boxes indicate that these components are optional. For example, we could use `TabMlp` without categorical components, or without continuous components, if we wanted.
###Code
import torch
from pytorch_widedeep.models import TabMlp
?TabMlp
###Output
_____no_output_____
###Markdown
Let's have a look to a model and one example
###Code
colnames = ['a', 'b', 'c', 'd', 'e']
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
embed_input = [(u,i,j) for u,i,j in zip(colnames[:4], [4]*4, [8]*4)]
column_idx = {k:v for v,k in enumerate(colnames)}
tabmlp = TabMlp(mlp_hidden_dims=[8,4], continuous_cols=['e'], column_idx=column_idx,
embed_input=embed_input, cont_norm_layer="batchnorm")
out = tabmlp(X_tab)
tabmlp
###Output
_____no_output_____
###Markdown
Note that the input dimension of the MLP is `33`, `32` from the embeddings and `1` for the continuous features. Before we move on, is worth commenting an aspect that applies to all models discussed here. The `TabPreprocessor` included in this package gives the user the possibility of standarising the input via `sklearn`'s `StandardScaler`. Alternatively, or in addition to it, it is possible to add a continuous normalization layer (`BatchNorm1d` or `LayerNorm`). To do so simply set the `cont_norm_layer` as indicated in the example above. See also the docs.I will insist on this in this in here and the following sections. Note that `TabMlp` (or any of the wide and deep components) does not build the final connection with the final neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s).For example:
###Code
from pytorch_widedeep.models import WideDeep
wd_model = WideDeep(deeptabular=tabmlp, pred_dim=1)
wd_model
###Output
_____no_output_____
###Markdown
voila 2. `TabResnet``TabResnet` is very similar to `TabMlp`, but the embeddings (or the concatenation of embeddings and continuous features) are passed through a series of Resnet blocks built with dense layers. This is probably the most flexible `deeptabular` component in terms of the many variants one can define via the parameters. Let's have a look to the architecture:The dashed-border boxes indicate the the component is optional and the dashed lines indicate the different paths or connections present depending on which components we decide to include. For example, we could chose to concatenate the continuous features, normalized or not via a `BatchNorm1d` layer, with the embeddings and pass the result of such a concatenation trough the series of Resnet blocks. Alternatively, we might prefer to concatenate the continuous features with the results of passing the embeddings through the Resnet blocks. Another optional component is the MLP before the output neuron(s). If not MLP is present, the output from the Resnet blocks or the results of concatenating that output with the continuous features (normalised or not) will be connected directly to the output neuron(s). Each Resnet block is comprised by the following operations:For more details see [`pytorch_widedeep/models/tab_resnet.BasicBlock`](https://github.com/jrzaurin/pytorch-widedeep/blob/master/pytorch_widedeep/models/tab_resnet.py). Let's have a look to an example now:
###Code
from pytorch_widedeep.models import TabResnet
?TabResnet
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i,j) for u,i,j in zip(colnames[:4], [4]*4, [8]*4)]
column_idx = {k:v for v,k in enumerate(colnames)}
tabresnet = TabResnet(blocks_dims=[16,16,16],
column_idx=column_idx,
embed_input=embed_input,
continuous_cols = ['e'],
cont_norm_layer = "layernorm",
concat_cont_first = False,
mlp_hidden_dims = [16, 4],
mlp_dropout = 0.5)
out = tabresnet(X_tab)
tabresnet
###Output
_____no_output_____
###Markdown
As we can see, first the embeddings are concatenated (resulting in a tensor of dim ($*$, 32) and are projected (or resized, which happens in `lin1` and `bn1`) to the input dimension of the Resnet block (16). The we have the two Resnet blocks defined by the sequence `[INP1 (16) -> OUT1 == INP2 (16) -> OUT2 (16)]`. Finally the output from the Resnet blocks is concatenated and passed to the MLP. As I mentioned earlier, note that `TabResnet` does not build the connection to the output neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s). 3. `Tabnet`Details on this architecture can be found in [TabNet: Attentive Interpretable Tabular Learning](https://arxiv.org/pdf/1908.07442.pdf). This is not a simple algorithm. Therefore, I strongly recommend reading the paper.In general terms, `Tabnet` takes the embeddings from the categorical columns and the continuous columns (standarised or not) that are then passed through a series of `Steps`. Each `Step` involves a so called Attentive Transformer and a Feature Transformer, combined with masking and a `Relu` non-linearity. This is shown in the figure below, directly taken from the paper. The part of the diagram drawn as $[FC \rightarrow out]$ would be what in other figures I draw as $[MLP -> output \space neuron]$.Note that in the paper the authors use an encoder-decoder architecture and apply a routine that involves unsupervised pre-training plus supervised fine-tunning. However the authors found that unsupervised pre-training is useful when the data size is very small and/or there is a large number of unlabeled observations. This result is consistent with those obtained by subsequent papers using the same approach. `pytorch-widedeep` was conceived as a library to use wide and deep models with tabular data, images and text for supervised learning (regression or classification). Therefore, I decided to implement only the encoder architecture of this model, and the transformer-based models. If you want more details on each component I recommend reading the paper and have a look to the implementation by the guys at [dreamquark-ai](https://github.com/dreamquark-ai/tabnet). In fact, and let me make this clear, **the Tabnet implementation in this package is mostly a copy and paste from that at the dreamquark-ai's library**. Simply, I have adapted it to work with wide and deep models and I have added a few extras, such as being able to add dropout in the GLU blocks or to not use Ghost batch normalization. Enough writing, let's have a look to the code
###Code
from pytorch_widedeep.models import TabNet
?TabNet
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i,j) for u,i,j in zip(colnames[:4], [4]*4, [8]*4)]
column_idx = {k:v for v,k in enumerate(colnames)}
tabnet = TabNet(
column_idx=column_idx,
embed_input=embed_input,
continuous_cols=['e'],
cont_norm_layer = "batchnorm",
ghost_bn = False,
)
out = tabnet(X_tab)
tabnet
###Output
_____no_output_____
###Markdown
4 The transformers familyFor a tour on all transformer-based models, please, see the Transformer Family Notebook. All the content below is in that notebook. 4.1 `TabTransformer` Details on the `TabTransformer` can be found in [TabTransformer: Tabular Data ModelingUsing Contextual Embeddings](https://arxiv.org/pdf/2012.06678.pdf).In general terms, the `TabTransformer` takes the embeddings from the categorical columns that are then passed through a Tranformer encoder, concatenated with the normalised continuous features, and then passed through an MLP. Let's have a look:The dashed-border boxes indicate the the component is optional. In terms of the Transformer block, I am sure at this stage the reader has seen every possible diagram of The Transformer, its multihead attention etc, so I thought about drawing something that resembles more to the actual execution/code for each block. Note that this implementation assumes that the so called `inner-dim` (aka the projection dimension) is the same as the `dimension of the model` or, in this case, embedding dimension. Relaxing this assumption is relatively easy and programatically would involve including one parameter more in the `TabTransformer` class. For now, and consistent with other Transformer implementations, I will assume `inner-dim = dimension of the model`. Also, and again consistent other implementations, I assume that the Keys, Queries and Values are of the same `dim`. The architecture of the `FT-Transformer` is identical to that of the `TabTransformer` with the exception that the continuous cols are each passed through a 1-layer MLP with or without activation (referred in the figure below as `Cont Embeddings`) function before being concatenated with the continuous cols. A variant of the `TabTransformer` is the `FT-Transformer`, which was introduced in is a variant introduced in [Revisiting Deep Learning Models for Tabular Data](https://arxiv.org/pdf/2106.11959.pdf). The two main additions were continuous embeddings and Linear Attention. Continuous embeddings were already introduce in the `SAINT` paper: [SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training](https://arxiv.org/pdf/2106.01342.pdf).There is a dedicated `FTTransformer` model in the library that one can check in the `Transformers Family` notebook. Nonetheless, using the `TabTransformer` with continuous embeddings is as easy as setting the param `embed_continuous` to `True`. In addition, I have also added the possibility of pooling all outputs from the transformer blocks using the `[CLS]` token. Otherwise all the outputs form the transformer blocks will be concatenated. Look at some of the other example notebooks for more details.
###Code
from pytorch_widedeep.models import TabTransformer
?TabTransformer
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i) for u,i in zip(colnames[:4], [4]*4)]
continuous_cols = ['e']
column_idx = {k:v for v,k in enumerate(colnames)}
tab_transformer = TabTransformer(column_idx=column_idx, embed_input=embed_input, continuous_cols=continuous_cols)
out = tab_transformer(X_tab)
tab_transformer
tab_transformer = TabTransformer(
column_idx=column_idx,
embed_input=embed_input,
continuous_cols=continuous_cols,
embed_continuous=True,
embed_continuous_activation="relu",
)
out = tab_transformer(X_tab)
tab_transformer
###Output
_____no_output_____
###Markdown
Finally, and as I mentioned earlier, note that `TabTransformer` class does not build the connection to the output neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s). 6. `SAINT`Details on `SAINT` (Self-Attention and Intersample Attention Transformer) can be found in [SAINT: Improved Neural Networks for Tabular Datavia Row Attention and Contrastive Pre-Training](https://arxiv.org/pdf/2106.01342.pdf). The main contribution of the saint model is the addition of an intersample attention block. In case you wonder what is this mysterious "inter-sample attention", simply, is the exact same mechanism as the well-known self-attention, but instead of features attending to each other here are observations/rows attending to each other. If you wanted to understand more details on what are the advantages of using this mechanism, I strongly encourage you to read the paper. Effectively, all that one needs to do is to reshape the input tensors of the transformer blocks and "off we go". `pytorch-widedeep`'s implementation is partially based in the [original code release](https://github.com/somepago/saint) (and the word "*partially*" is well used here in the sense that are notable differences, but in essence is the same implementation described in the paper).Let's have a look to some code
###Code
from pytorch_widedeep.models import SAINT
?SAINT
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i) for u,i in zip(colnames[:4], [4]*4)]
continuous_cols = ['e']
column_idx = {k:v for v,k in enumerate(colnames)}
saint = SAINT(
column_idx=column_idx,
embed_input=embed_input,
continuous_cols=continuous_cols,
embed_continuous_activation="leaky_relu",
)
out = saint(X_tab)
saint
###Output
_____no_output_____
###Markdown
The `deeptabular` componentIn the previous notebook I described the linear model (`Wide`) and the standard text and image classification and regression models (`DeepText` and `DeepImage`) that can be used as the `wide`, `deeptext` and `deepimage` components respectively when building a `WideDeep` model. In this notebook I will describe the different models (or architectures) available in `pytorch-widedeep` that can be used as the `deeptabular` model. Note that the `deeptabular` model alone is what normally would be referred as Deep Learning for tabular data. As I mentioned in previous notebooks, each component can be used independently. Therefore, if you wanted to use `deeptabular` alone it is perfectly possible. There are just a couple of simple requirement that will be covered in a later notebook.The models available in `pytorch-widedeep` as the `deeptabular` component are:1. `TabMlp`2. `TabResnet`3. `Tabnet`4. `TabTransformer`5. `FT-Tabransformer` (which is a simple variation of the `TabTransformer`)6. `SAINT`Let's have a close look to the 6 of them 1. `TabMlp``TabMlp` is the simples architecture and is very similar to the tabular model available in the fantastic fastai library. In fact, the implementation of the dense layers of the MLP is mostly identical to that in that library.The figure below illustrate the `TabMlp` architecture:The dashed-border boxes indicate that these components are optional. For example, we could use `TabMlp` without categorical components, or without continuous components, if we wanted.
###Code
import torch
from pytorch_widedeep.models import TabMlp
?TabMlp
###Output
_____no_output_____
###Markdown
Let's have a look to a model and one example
###Code
colnames = ['a', 'b', 'c', 'd', 'e']
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
embed_input = [(u,i,j) for u,i,j in zip(colnames[:4], [4]*4, [8]*4)]
column_idx = {k:v for v,k in enumerate(colnames)}
tabmlp = TabMlp(mlp_hidden_dims=[8,4], continuous_cols=['e'], column_idx=column_idx,
embed_input=embed_input, cont_norm_layer="batchnorm")
out = tabmlp(X_tab)
tabmlp
###Output
_____no_output_____
###Markdown
Note that the input dimension of the MLP is `33`, `32` from the embeddings and `1` for the continuous features. Before we move on, is worth commenting an aspect that applies to all models discussed here. The `TabPreprocessor` included in this package gives the user the possibility of standarising the input via `sklearn`'s `StandardScaler`. Alternatively, or in addition to it, it is possible to add a continuous normalization layer (`BatchNorm1d` or `LayerNorm`). To do so simply set the `cont_norm_layer` as indicated in the example above. See also the docs.I will insist on this in this and the following sections. Note that `TabMlp` (or any of the wide and deep components) does not build the final connection with the final neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s).For example:
###Code
from pytorch_widedeep.models import WideDeep
wd_model = WideDeep(deeptabular=tabmlp, pred_dim=1)
wd_model
###Output
_____no_output_____
###Markdown
voila 2. `TabResnet``TabResnet` is very similar to `TabMlp`, but the embeddings (or the concatenation of embeddings and continuous features) are passed through a series of Resnet blocks built with dense layers. This is probably the most flexible `deeptabular` component in terms of the many variants one can define via the parameters. Let's have a look to the architecture:The dashed-border boxes indicate the the component is optional and the dashed lines indicate the different paths or connections present depending on which components we decide to include. For example, we could chose to concatenate the continuous features, normalized or not via a `BatchNorm1d` layer, with the embeddings and pass the result of such a concatenation trough the series of Resnet blocks. Alternatively, we might prefer to concatenate the continuous features with the results of passing the embeddings through the Resnet blocks. Another optional component is the MLP before the output neuron(s). If not MLP is present, the output from the Resnet blocks or the results of concatenating that output with the continuous features (normalised or not) will be connected directly to the output neuron(s). Each Resnet block is comprised by the following operations:For more details see [`pytorch_widedeep/models/tab_resnet.BasicBlock`](https://github.com/jrzaurin/pytorch-widedeep/blob/master/pytorch_widedeep/models/tab_resnet.py). Let's have a look to an example now:
###Code
from pytorch_widedeep.models import TabResnet
?TabResnet
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i,j) for u,i,j in zip(colnames[:4], [4]*4, [8]*4)]
column_idx = {k:v for v,k in enumerate(colnames)}
tabresnet = TabResnet(blocks_dims=[16,16,16],
column_idx=column_idx,
embed_input=embed_input,
continuous_cols = ['e'],
cont_norm_layer = "layernorm",
concat_cont_first = False,
mlp_hidden_dims = [16, 4],
mlp_dropout = 0.5)
out = tabresnet(X_tab)
tabresnet
###Output
_____no_output_____
###Markdown
As we can see, first the embeddings are concatenated (resulting in a tensor of dim ($*$, 32) and are projected (or resized, which happens in `lin1` and `bn1`) to the input dimension of the Resnet block (16). The we have the two Resnet blocks defined by the sequence `[INP1 (16) -> OUT1 == INP2 (16) -> OUT2 (16)]`. Finally the output from the Resnet blocks is concatenated and passed to the MLP. As I mentioned earlier, note that `TabResnet` does not build the connection to the output neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s). 3. `Tabnet`Details on this architecture can be found in [TabNet: Attentive Interpretable Tabular Learning](https://arxiv.org/pdf/1908.07442.pdf). This is not a simple algorithm. Therefore, I strongly recommend reading the paper.In general terms, `Tabnet` takes the embeddings from the categorical columns and the continuous columns (standarised or not) that are then passed through a series of `Steps`. Each `Step` involves a so called Attentive Transformer and a Feature Transformer, combined with masking and a `Relu` non-linearity. This is shown in the figure below, directly taken from the paper. The part of the diagram drawn as $[FC \rightarrow out]$ would be what in other figures I draw as $[MLP -> output \space neuron]$.Note that in the paper the authors use an encoder-decoder architecture and apply a routine that involves unsupervised pre-training plus supervised fine-tunning. However the authors found that unsupervised pre-training is useful when the data size is very small and/or there is a large number of unlabeled observations. This result is consistent with those obtained by subsequent papers using the same approach. `pytorch-widedeep` was conceived as a library to use wide and deep models with tabular data, images and text for supervised learning (regression or classification). Therefore, I decided to implement only the encoder architecture of this model, and the transformer-based models. If you want more details on each component I recommend reading the paper and have a look to the implementation by the guys at [dreamquark-ai](https://github.com/dreamquark-ai/tabnet). In fact, and let me make this clear, **the Tabnet implementation in this package is mostly a copy and paste from that at the dreamquark-ai's library**. Simply, I have adapted it to work with wide and deep models and I have added a few extras, such as being able to add dropout in the GLU blocks or to not use Ghost batch normalization. Enough writing, let's have a look to the code
###Code
from pytorch_widedeep.models import TabNet
?TabNet
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i,j) for u,i,j in zip(colnames[:4], [4]*4, [8]*4)]
column_idx = {k:v for v,k in enumerate(colnames)}
tabnet = TabNet(
column_idx=column_idx,
embed_input=embed_input,
continuous_cols=['e'],
cont_norm_layer = "batchnorm",
ghost_bn = False,
)
out = tabnet(X_tab)
tabnet
###Output
_____no_output_____
###Markdown
4 and 5. `TabTransformer` and the `Feature-Tokenizer Transformer`Details on the `TabTransformer` can be found in [TabTransformer: Tabular Data ModelingUsing Contextual Embeddings](https://arxiv.org/pdf/2012.06678.pdf). The `FT-Transformer` is a variant introduced in the following two papers: [SAINT: Improved Neural Networks for Tabular Datavia Row Attention and Contrastive Pre-Training](https://arxiv.org/pdf/2106.01342.pdf) and [Revisiting Deep Learning Models for Tabular Data](https://arxiv.org/pdf/2106.11959.pdf). The name itself (`FT-Transformer`) was first used in the latter, but the variant (which I will explain in a second) was already introduced in the `SAINT` paper. In general terms, the `TabTransformer` takes the embeddings from the categorical columns that are then passed through a Tranformer encoder, concatenated with the normalised continuous features, and then passed through an MLP. Let's have a look:The dashed-border boxes indicate the the component is optional. In terms of the Transformer block, I am sure at this stage the reader has seen every possible diagram of The Transformer, its multihead attention etc, so I thought about drawing something that resembles more to the actual execution/code for each block. Note that this implementation assumes that the so called `inner-dim` (aka the projection dimension) is the same as the `dimension of the model` or, in this case, embedding dimension. Relaxing this assumption is relatively easy and programatically would involve including one parameter more in the `TabTransformer` class. For now, and consistent with other Transformer implementations, I will assume `inner-dim = dimension of the model`. Also, and again consistent other implementations, I assume that the Keys, Queries and Values are of the same `dim`. The architecture of the `FT-Transformer` is identical to that of the `TabTransformer` with the exception that the continuous cols are each passed through a 1-layer MLP with or without activation (referred in the figure below as `Cont Embeddings`) function before being concatenated with the continuous cols. Using the `FT-Transformer` with `pytorch-widedeep` is simply available by setting the param `embed_continuous` to `True`. In addition, I have also added the possibility of pooling all outputs from the transformer blocks using the `[CLS]` token. Otherwise all the outputs form the transformer blocks will be concatenated. Look at some of the other example notebooks for more details.
###Code
from pytorch_widedeep.models import TabTransformer
?TabTransformer
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i) for u,i in zip(colnames[:4], [4]*4)]
continuous_cols = ['e']
column_idx = {k:v for v,k in enumerate(colnames)}
tab_transformer = TabTransformer(column_idx=column_idx, embed_input=embed_input, continuous_cols=continuous_cols)
out = tab_transformer(X_tab)
tab_transformer
ft_transformer = TabTransformer(
column_idx=column_idx,
embed_input=embed_input,
continuous_cols=continuous_cols,
embed_continuous=True,
embed_continuous_activation="relu",
)
out = ft_transformer(X_tab)
ft_transformer
###Output
_____no_output_____
###Markdown
Finally, and as I mentioned earlier, note that the `TabTransformer` class does not build the connection to the output neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s). 6. `SAINT`Details on `SAINT` (Self-Attention and Intersample Attention Transformer) can be found in [SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training](https://arxiv.org/pdf/2106.01342.pdf). The main contribution of the SAINT model is the addition of an intersample attention block. In case you wonder what this mysterious "inter-sample attention" is: it is simply the exact same mechanism as the well-known self-attention, but instead of features attending to each other, here observations/rows attend to each other. If you want more details on the advantages of using this mechanism, I strongly encourage you to read the paper. Effectively, all that one needs to do is to reshape the input tensors of the transformer blocks and "off we go". `pytorch-widedeep`'s implementation is partially based on the [original code release](https://github.com/somepago/saint) (and the word "*partially*" is used deliberately here: there are notable differences, but in essence it is the same implementation described in the paper).Let's have a look at some code
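Before that, to make the "reshape and off we go" remark a bit more concrete, here is a minimal, purely conceptual sketch (my own illustration, not the library's actual implementation) of how intersample attention can be obtained simply by reshaping the input so that rows, rather than columns, play the role of the attention sequence:
```python
import torch

# x: column embeddings, shape (batch, n_features, embed_dim)
x = torch.rand(8, 5, 16)

# Self-attention attends over the n_features dimension of each row. For
# intersample attention, flatten each row into a single token and treat the
# batch itself as the sequence: (1, batch, n_features * embed_dim).
x_inter = x.reshape(1, x.shape[0], -1)

attn = torch.nn.MultiheadAttention(embed_dim=5 * 16, num_heads=8, batch_first=True)
out, _ = attn(x_inter, x_inter, x_inter)  # observations/rows attend to each other
out = out.reshape(x.shape)                # back to (batch, n_features, embed_dim)
```
(`batch_first=True` assumes a reasonably recent PyTorch version.)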
###Code
from pytorch_widedeep.models import SAINT
?SAINT
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i) for u,i in zip(colnames[:4], [4]*4)]
continuous_cols = ['e']
column_idx = {k:v for v,k in enumerate(colnames)}
saint = SAINT(
column_idx=column_idx,
embed_input=embed_input,
continuous_cols=continuous_cols,
embed_continuous=True,
embed_continuous_activation="leaky_relu",
)
out = saint(X_tab)
saint
###Output
_____no_output_____
###Markdown
The `deeptabular` componentIn the previous notebook I described the linear model (`Wide`) and the standard text and image models (`DeepText` and `DeepImage`) that can be used as the `wide`, `deeptext` and `deepimage` components respectively when building a `WideDeep` model. In this notebook I will describe the 3 models (or architectures) available in `pytorch-widedeep` that can be used as the `deeptabular` model. Note that the `deeptabular` model alone is what would normally be referred to as Deep Learning for tabular data. As I mentioned in previous notebooks, each component can be used independently. Therefore, if you wanted to use `deeptabular` alone it is perfectly possible. There are just a couple of simple requirements that will be covered in a later notebook.The 3 models available in `pytorch-widedeep` as the `deeptabular` component are:1. `TabMlp`2. `TabResnet`3. `TabTransformer`Let's have a close look at the 3 of them 1. `TabMlp``TabMlp` is the simplest architecture and is very similar to the tabular model available in the fantastic fastai library. In fact, the implementation of the dense layers of the MLP is mostly identical to the one in that library.The figure below illustrates the `TabMlp` architecture:The dashed-border boxes indicate that these components are optional. For example, we could use `TabMlp` without categorical components, or without continuous components, if we wanted.
###Code
import torch
from pytorch_widedeep.models import TabMlp
?TabMlp
###Output
_____no_output_____
###Markdown
Let's have a look at a model and one example
###Code
colnames = ['a', 'b', 'c', 'd', 'e']
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
embed_input = [(u,i,j) for u,i,j in zip(colnames[:4], [4]*4, [8]*4)]
column_idx = {k:v for v,k in enumerate(colnames)}
tabmlp = TabMlp(mlp_hidden_dims=[8,4], continuous_cols=['e'], column_idx=column_idx,
embed_input=embed_input, batchnorm_cont=True)
out = tabmlp(X_tab)
tabmlp
###Output
_____no_output_____
###Markdown
Note that the input dimension of the MLP is `33`: `32` from the embeddings and `1` for the continuous features. Before we move on, it is worth commenting on an aspect that applies to the three models discussed here. The `TabPreprocessor` included in this package gives the user the possibility of standardising the input via `sklearn`'s `StandardScaler`. Alternatively, or in addition to it, it is possible to add a `BatchNorm1d` layer to normalise continuous columns within `TabMlp`. To do so simply set the `batchnorm_cont` parameter to `True` when defining the model, as indicated in the example above.I will insist on this point in this and the following sections. Note that `TabMlp` (or any of the wide and deep components) does not build the final connection to the output neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s).For example:
###Code
from pytorch_widedeep.models import WideDeep
wd_model = WideDeep(deeptabular=tabmlp, pred_dim=1)
wd_model
###Output
_____no_output_____
###Markdown
voila 2. `TabResnet``TabResnet` is very similar to `TabMlp`, but the embeddings (or the concatenation of embeddings and continuous features) are passed through a series of Resnet blocks built with dense layers. This is probably the most flexible `deeptabular` component in terms of the many variants one can define via the parameters. Let's have a look at the architecture:The dashed-border boxes indicate that the component is optional and the dashed lines indicate the different paths or connections present depending on which components we decide to include. For example, we could choose to concatenate the continuous features, normalized or not via a `BatchNorm1d` layer, with the embeddings and pass the result of such a concatenation through the series of Resnet blocks. Alternatively, we might prefer to concatenate the continuous features with the result of passing the embeddings through the Resnet blocks. Another optional component is the MLP before the output neuron(s). If no MLP is present, the output from the Resnet blocks, or the result of concatenating that output with the continuous features (normalised or not), will be connected directly to the output neuron(s). Each Resnet block comprises the following operations:For more details see [`pytorch_widedeep/models/tab_resnet.BasicBlock`](https://github.com/jrzaurin/pytorch-widedeep/blob/master/pytorch_widedeep/models/tab_resnet.py). Let's have a look at an example now:
###Code
from pytorch_widedeep.models import TabResnet
?TabResnet
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i,j) for u,i,j in zip(colnames[:4], [4]*4, [8]*4)]
column_idx = {k:v for v,k in enumerate(colnames)}
tabresnet = TabResnet(blocks_dims=[16,16,16],
column_idx=column_idx,
embed_input=embed_input,
continuous_cols = ['e'],
batchnorm_cont = True,
concat_cont_first = False,
mlp_hidden_dims = [16, 4],
mlp_dropout = 0.5)
out = tabresnet(X_tab)
tabresnet
###Output
_____no_output_____
###Markdown
As we can see, first the embeddings are concatenated (resulting in a tensor of dim ($*$, 32)) and are projected (or resized, which happens in `lin1` and `bn1`) to the input dimension of the Resnet block (16). Then we have the two Resnet blocks defined by the sequence `[INP1 (16) -> OUT1 == INP2 (16) -> OUT2 (16)]`. Finally the output from the Resnet blocks is concatenated and passed to the MLP. As I mentioned earlier, note that `TabResnet` does not build the connection to the output neuron(s). This is done by the ``WideDeep`` class, which collects all wide and deep components and connects them to the output neuron(s). 3. `TabTransformer`Details on this architecture can be found in [TabTransformer: Tabular Data Modeling Using Contextual Embeddings](https://arxiv.org/pdf/2012.06678.pdf). Also, there are so many variants and details that I thought it deserves its own post. Therefore, if you want to dive properly into the use of the Transformer for tabular data I recommend reading the paper and the post (probably in that order). In general terms, `TabTransformer` takes the embeddings from the categorical columns, which are then passed through a Transformer encoder, concatenated with the normalised continuous features, and finally passed through an MLP. Let's have a look:The dashed-border boxes indicate that the component is optional. In terms of the Transformer block, I am sure at this stage the reader has seen every possible diagram of The Transformer, its multihead attention etc, so I thought about drawing something that resembles more closely the actual execution/code of each block. Note that this implementation assumes that the so-called `inner-dim` (aka the projection dimension) is the same as the `dimension of the model` or, in this case, the embedding dimension. Relaxing this assumption is relatively easy and programmatically would involve including one more parameter in the `TabTransformer` class. For now, and consistent with other Transformer implementations, I will assume `inner-dim = dimension of the model`. Also, and again consistent with other implementations, I assume that the Keys, Queries and Values are of the same `dim`. Enough writing, let's have a look at the code
###Code
from pytorch_widedeep.models import TabTransformer
?TabTransformer
X_tab = torch.cat((torch.empty(5, 4).random_(4), torch.rand(5, 1)), axis=1)
colnames = ['a', 'b', 'c', 'd', 'e']
embed_input = [(u,i) for u,i in zip(colnames[:4], [4]*4)]
continuous_cols = ['e']
column_idx = {k:v for v,k in enumerate(colnames)}
tab_transformer = TabTransformer(column_idx=column_idx, embed_input=embed_input, continuous_cols=continuous_cols)
out = tab_transformer(X_tab)
tab_transformer
###Output
_____no_output_____ |
Predicting Whether A Person Makes over 50K A Year.ipynb | ###Markdown
Predicting Whether A Person Makes over 50K A Year Author: Hexing Ren Click [here](http://www.hexingren.com/practical-data-science) to go back.
###Code
import pandas as pd
import numpy as np
import scipy
from scipy import stats
import math
###Output
_____no_output_____
###Markdown
Naive Bayes Classifier IntroductionNaive Bayes is a class of simple classifiers based on Bayes' Rule and strong (or naive) independence assumptions between features. In this problem, you will implement a Naive Bayes Classifier for the Census Income Data Set from the [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml/) (which is a good website to browse through for datasets). Dataset DescriptionThe dataset consists of 32561 instances, each representing an individual. The goal is to predict whether a person makes over 50K a year based on the values of 14 features. The features, extracted from the 1994 Census database, are a mix of continuous and discrete attributes. These are enumerated below: Continuous (real-valued) features- age- final_weight (computed from a number of attributes outside of this dataset; people with similar demographic attributes have similar values)- education_num- capital_gain- capital_loss- hours_per_week Categorical (discrete) features - work_class: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked- education: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool- marital_status: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse- occupation: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces- relationship: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.- race: White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black- sex: Female, Male- native_country: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands Q1. Input preparationFirst, we need to load in the above data, provided as a CSV file. As the data is from the UCI repository, it is already quite clean. However, some instances contain missing values (represented as ? in the CSV file) and these have to be discarded from the training set. Also, replace the `income` column with `label`, which is 1 if `income` is `>50K` and 0 otherwise.
###Code
def load_data(file_name):
""" loads and processes data in the manner specified above
Inputs:
file_name (str): path to csv file containing data
Outputs:
pd.DataFrame: processed dataframe
"""
df = pd.read_csv(file_name, na_values=['?'])
df.dropna(inplace=True)
df = df.reset_index(drop = True)
df['label'] = df['income'].map(lambda x: 1 if x=='>50K' else 0)
df.drop('income', axis=1, inplace=True)
return df
# AUTOLAB_IGNORE_START
df = load_data('census.csv')
# AUTOLAB_IGNORE_STOP
###Output
_____no_output_____
###Markdown
Our reference code yields the following output (pay attention to the index):```python>>> print df.dtypesage int64work_class objectfinal_weight int64education objecteducation_num int64marital_status objectoccupation objectrelationship objectrace objectsex objectcapital_gain int64capital_loss int64hours_per_week int64native_country objectlabel int64dtype: object >>> print df.tail() age work_class final_weight education education_num \30157 27 Private 257302 Assoc-acdm 12 30158 40 Private 154374 HS-grad 9 30159 58 Private 151910 HS-grad 9 30160 22 Private 201490 HS-grad 9 30161 52 Self-emp-inc 287927 HS-grad 9 marital_status occupation relationship race sex \30157 Married-civ-spouse Tech-support Wife White Female 30158 Married-civ-spouse Machine-op-inspct Husband White Male 30159 Widowed Adm-clerical Unmarried White Female 30160 Never-married Adm-clerical Own-child White Male 30161 Married-civ-spouse Exec-managerial Wife White Female capital_gain capital_loss hours_per_week native_country label 30157 0 0 38 United-States 0 30158 0 0 40 United-States 1 30159 0 0 40 United-States 0 30160 0 0 20 United-States 0 30161 15024 0 40 United-States 1 >>> print len(df)30162``` Overview of Naive Bayes classifierLet $X_1, X_2, \ldots, X_k$ be the $k$ features of a dataset, with class label given by the variable $y$. A probabilistic classifier assigns the most probable class to each instance $(x_1,\ldots,x_k)$, as expressed by$$ \hat{y} = \arg\max_y P(y\ |\ x_1,\ldots,x_k) $$Using Bayes' theorem, the above *posterior probability* can be rewritten as$$ P(y\ |\ x_1,\ldots,x_k) = \frac{P(y) P(x_1,\ldots,x_k\ |\ y)}{P(x_1,\ldots,x_k)} $$where- $P(y)$ is the prior probability of the class- $P(x_1,\ldots,x_k\ |\ y)$ is the likelihood of data under a class- $P(x_1,\ldots,x_k)$ is the evidence for dataNaive Bayes classifiers assume that the feature values are conditionally independent given the class label, that is,$ P(x_1,\ldots,x_k\ |\ y) = \prod_{i=1}^{k}P(x_i\ |\ y) $. This strong assumption helps simplify the expression for posterior probability to$$ P(y\ |\ x_1,\ldots,x_k) = \frac{P(y) \prod_{i=1}^{k}P(x_i\ |\ y)}{P(x_1,\ldots,x_k)} $$For a given input $(x_1,\ldots,x_k)$, $P(x_1,\ldots,x_k)$ is constant. Hence, we can simply omit the denominator and replace the equality sign with proportionality as follows:$$ P(y\ |\ x_1,\ldots,x_k) \propto P(y) \prod_{i=1}^{k}P(x_i\ |\ y) $$Thus, the class of a new instance can be predicted as $\hat{y} = \arg\max_y P(y) \prod_{i=1}^{k}P(x_i\ |\ y)$. Here, $P(y)$ is commonly known as the **class prior** and $P(x_i\ |\ y)$ is termed the **feature predictor**. The rest of the assignment deals with how each of these $k+1$ probability distributions -- $P(y), P(x_1\ |\ y), \ldots, P(x_k\ |\ y)$ -- are estimated from data.**Note**: Observe that the computation of the final expression above involves multiplication of $k+1$ probability values (which can be really low). This can lead to an underflow of numerical precision. So, it is good practice to use a log transform of the probabilities to avoid this underflow.**TL;DR** Your final takeaway from this cell is the following expression:$$\hat{y} = \arg\max_y \underbrace{\log P(y)}_{log-prior} + \underbrace{\sum_{i=1}^{k} \log P(x_i\ |\ y)}_{log-likelihood}$$Each term in the sum for log-likelihood can be regarded as a partial log-likelihood based on a particular feature alone. 
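As a toy numerical illustration of this decision rule (the numbers below are made up for the example and are not from the census data), the predicted class is simply the argmax of the prior-plus-likelihood log scores:
```python
import numpy as np

# hypothetical log-priors for k=2 classes and per-feature partial log-likelihoods
log_prior = np.array([-0.3, -1.4])   # log P(y)
log_lik_f1 = np.array([-2.0, -1.0])  # log P(x_1 | y)
log_lik_f2 = np.array([-0.5, -3.0])  # log P(x_2 | y)

scores = log_prior + log_lik_f1 + log_lik_f2
print(scores)             # [-2.8 -5.4]
print(np.argmax(scores))  # 0 -> predict class 0
```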
Feature PredictorThe beauty of a Naive Bayes classifier lies in the fact that we can mix-and-match different likelihood models for each feature predictor according to the prior knowledge we have about it, and these models can be varied independently of each other. For example, we might know that $P(X_i|y)$ for some continuous feature $X_i$ is normally distributed or that $P(X_i|y)$ for some categorical feature follows a multinomial distribution. In such cases, we can directly plug in the pdf/pmf of these distributions in place of $P(x_i\ |\ y)$.In this project, we will be using two classes of likelihood models:- Gaussian model, for continuous real-valued features (parameterized by mean $\mu$ and standard deviation $\sigma$)- Categorical model, for discrete features (parameterized by $\mathbf{p} = (p_0,\ldots,p_{l-1})$, where $l$ is the number of values taken by this categorical feature)We need to implement a predictor class for each likelihood model. Each predictor should implement two functionalities:- **Parameter estimation `init()`**: Learn parameters of the likelihood model using MLE (Maximum Likelihood Estimator). We need to keep track of $k$ sets of parameters, one for each class, *in the increasing order of class id, i.e., mu[i] indicates the mean of class $i$ in the Gaussian Predictor*.- **Partial Log-Likelihood computation for *this* feature `partial_log_likelihood()`**: Use the learnt parameters to compute the probability (density/mass for continuous/categorical features) of a given feature value. Report np.log() of this value.The parameter estimation is for the conditional distributions $P(X|Y)$. Thus, while estimating parameters for a specific class (say class 0), we will use only those data points in the training set (or rows in the input data frame) which have class label 0. Q2. Gaussian Feature PredictorThe Gaussian distribution is characterized by two parameters - mean $\mu$ and standard deviation $\sigma$:$$ f_Z(z) = \frac{1}{\sqrt{2\pi}\sigma} \exp{(-\frac{(z-\mu)^2}{2\sigma^2})} $$Given $n$ samples $z_1, \ldots, z_n$ from the above distribution, the MLE for mean and standard deviation are:$$ \hat{\mu} = \frac{1}{n} \sum_{j=1}^{n} z_j $$$$ \hat{\sigma} = \sqrt{\frac{1}{n} \sum_{j=1}^{n} (z_j-\hat{\mu})^2} $$`scipy.stats.norm` would be helpful.
###Code
class GaussianPredictor:
""" Feature predictor for a normally distributed real-valued, continuous feature.
Attributes:
mu (array_like) : vector containing per class mean of the feature
sigma (array_like): vector containing per class std. deviation of the feature
"""
# feel free to define and use any more attributes, e.g., number of classes, etc
def __init__(self, x, y) :
""" initializes the predictor statistics (mu, sigma) for Gaussian distribution
Inputs:
x (array_like): feature values (continuous)
y (array_like): class labels (0,...,k-1)
"""
lab_dic = {}
self.k = len(y.unique())
self.mu = np.zeros(self.k)
self.sigma = np.zeros(self.k)
for i in range(len(y)):
if (y[i] not in lab_dic):
lab_dic[y[i]] = []
lab_dic[y[i]].append(x[i])
for j in range(self.k):
l = lab_dic[j]
self.mu[j] = float(sum(l)) / float(len(l))
self.sigma[j] = math.sqrt(float(sum([pow(n - float(sum(l)) / float(len(l)), 2) for n in l])) / float(len(l)))
def partial_log_likelihood(self, x):
""" log likelihood of feature values x according to each class
Inputs:
x (array_like): vector of feature values
Outputs:
(array_like): matrix of log likelihood for this feature alone
"""
log_lists = list()
for m in range(len(x)):
log_list = list()
for n in range(self.k):
log_list.append(np.log(stats.norm.pdf(x[m], self.mu[n], self.sigma[n])))
log_lists.append(log_list)
return np.array(log_lists)
# AUTOLAB_IGNORE_START
f = GaussianPredictor(df['age'], df['label'])
print(f.mu)
print(f.sigma)
f.partial_log_likelihood([43,40,100,10])
# AUTOLAB_IGNORE_STOP
###Output
[ 36.60806039 43.95911028]
[ 13.46433407 10.2689489 ]
###Markdown
Our reference code gives the following output:```python>>> f.muarray([ 36.60806039 43.95911028])>>> f.sigmaarray([ 13.46433407 10.2689489 ])>>> f.partial_log_likelihood([43,40,100,10])array([[ -3.63166766, -3.2524249 ], [ -3.55071473, -3.32238449], [-14.60226337, -18.13920716], [ -5.47164304, -8.71608989]])``` Q3. Categorical Feature PredictorThe categorical distribution with $l$ categories $\{0,\ldots,l-1\}$ is characterized by parameters $\mathbf{p} = (p_0,\dots,p_{l-1})$:$$ P(z; \mathbf{p}) = p_0^{[z=0]}p_1^{[z=1]}\ldots p_{l-1}^{[z=l-1]} $$where $[z=t]$ is 1 if $z$ is $t$ and 0 otherwise.Given $n$ samples $z_1, \ldots, z_n$ from the above distribution, the smoothed-MLE for each $p_t$ is:$$ \hat{p_t} = \frac{n_t + \alpha}{n + l\alpha} $$where $n_t = \sum_{j=1}^{n} [z_j=t]$, i.e., the number of times the label $t$ occurred in the sample. The smoothing is done to avoid the zero-count problem (similar in spirit to the $n$-gram model in NLP).**Note:** You have to learn the number of classes and the number and value of labels from the data. We might be testing your code on a different categorical feature.
###Code
class CategoricalPredictor:
""" Feature predictor for a categorical feature.
Attributes:
p (dict) : dictionary of vector containing per class probability of a feature value;
the keys of dictionary should exactly match the values taken by this feature
"""
# feel free to define and use any more attributes, e.g., number of classes, etc
def __init__(self, x, y, alpha=1) :
""" initializes the predictor statistics (p) for Categorical distribution
Inputs:
x (array_like): feature values (categorical)
y (array_like): class labels (0,...,k-1)
"""
self.k = len(y.unique())
self.l = len(x.unique())
self.p = {}
for i in range(len(x)):
if (x[i] not in self.p):
self.p[x[i]] = np.zeros(self.k)
self.p[x[i]][y[i]] += 1
lab_cnt = np.zeros(self.k)
for m in range(self.k):
for feature in self.p:
lab_cnt[m] = lab_cnt[m] + self.p[feature][m]
for n in range(self.k):
for feature in self.p:
self.p[feature][n] = float(alpha + self.p[feature][n]) / float(self.l * alpha + lab_cnt[n])
def partial_log_likelihood(self, x):
""" log likelihood of feature values x according to each class
Inputs:
x (array_like): vector of feature values
Outputs:
(array_like): matrix of log likelihood for this feature
"""
fmatrix = np.zeros(shape=(len(x), self.k))
for m in range(len(x)):
for n in range(self.k):
fmatrix[m][n] = np.log(self.p[x[m]][n])
return fmatrix
# AUTOLAB_IGNORE_START
f = CategoricalPredictor(df['sex'], df['label'])
print(f.p)
f.partial_log_likelihood(['Male','Female','Male'])
# AUTOLAB_IGNORE_STOP
###Output
{'Male': array([ 0.61727578, 0.8517976 ]), 'Female': array([ 0.38272422, 0.1482024 ])}
###Markdown
Our reference code gives the following output:```python>>> f.p{'Female': array([ 0.38272422, 0.1482024 ]), 'Male': array([ 0.61727578, 0.8517976 ])}>>> f.partial_log_likelihood(['Male','Female','Male'])array([[-0.48243939 -0.16040634] [-0.96044059 -1.90917639] [-0.48243939 -0.16040634]])``` Q4. Putting things togetherIt's time to put all the feature predictors together and do something useful! We will implement two functions in the following class.1. **__init__()**: Compute the log prior for each class and initialize the feature predictors (based on feature type). The smoothed prior for class $t$ is given by$$ prior(t) = \frac{n_t + \alpha}{n + k\alpha} $$where $n_t = \sum_{j=1}^{n} [y_j=t]$, i.e., the number of times the label $t$ occurred in the sample. 2. **predict()**: For each instance and for each class, compute the sum of log prior and partial log likelihoods for all features. Use it to predict the final class label. Break ties by predicting the class with the lower id.**Note:** Your implementation should not assume anything about the schema of the input data frame or the number of classes. The only guarantees you have are: (1) there will be a `label` column with values $0,\ldots,k-1$ for some $k$, and (2) the datatypes of the columns will be either `object` (string, categorical) or `int64` (integer).
###Code
class NaiveBayesClassifier:
def __init__(self, df, alpha=1):
"""initializes predictors for each feature and computes class prior
Inputs:
df (pd.DataFrame): processed dataframe, without any missing values.
"""
y = df['label']
k = len(y.unique())
self.predictor = {}
self.log_prior = np.zeros(k)
for lab in (y.unique()):
self.log_prior[lab] = np.log(float(len(df[df['label'] == lab]) + alpha) / float(len(y) + k * alpha))
for col in df:
if col != 'label':
if df[col].dtype != 'int64':
t = CategoricalPredictor(df[col], df['label'],alpha)
self.predictor[col] = t
else:
t = GaussianPredictor(df[col], df['label'])
self.predictor[col] = t
def predict(self, x):
prior_log = float(0)
for col in x:
if col != 'label':
prior_log += self.predictor[col].partial_log_likelihood(x[col])
pred_y = np.argmax(self.log_prior + prior_log, axis = 1)
return pred_y
# AUTOLAB_IGNORE_START
c = NaiveBayesClassifier(df, 0)
y_pred = c.predict(df)
print(c.log_prior)
print(y_pred.shape)
print(y_pred)
# AUTOLAB_IGNORE_STOP
###Output
[-0.28624642 -1.39061374]
(30162,)
[0 0 0 ..., 0 0 1]
###Markdown
Our reference code gives the following output:```python>>> c.log_priorarray([-0.28624642, -1.39061374])>>> c.predictor{'age': , 'capital_gain': , 'capital_loss': , 'education': , 'education_num': , 'final_weight': , 'hours_per_week': , 'marital_status': , 'native_country': , 'occupation': , 'race': , 'relationship': , 'sex': , 'work_class': }>>> c.predictor['hours_per_week'].muarray([ 39.34859186 45.70657965])>>> c.predictor['hours_per_week'].sigmaarray([ 11.95051037 10.73627157])>>> c.predictor['work_class'].p{'Federal-gov': array([ 0.02551426, 0.04861481]), 'Local-gov': array([ 0.0643595 , 0.08111348]), 'Private': array([ 0.7685177, 0.6494406]), 'Self-emp-inc': array([ 0.02092346, 0.07991476]), 'Self-emp-not-inc': array([ 0.07879403, 0.09509856]), 'State-gov': array([ 0.04127306, 0.04581779]), 'Without-pay': array([ 0.00061799, 0. ])}>>> y_pred.shape(30162,)>>> y_predarray([0, 0, 0, ..., 0, 0, 1])``` Q5. Evaluation - Error rateIf a classifier makes $n_e$ errors on a dataset of size $n$, its error rate is $n_e/n$. Fill in the following function to evaluate the classifier.
###Code
def evaluate(y_hat, y):
""" Evaluates classifier predictions
Inputs:
y_hat (array_like): output from classifier
y (array_like): true class label
Output:
(double): error rate as defined above
"""
cnt = 0
for i in range(len(y_hat)):
if y[i] != y_hat[i]:
cnt = cnt + 1
err = float(cnt) / float(len(y_hat))
return err
# AUTOLAB_IGNORE_START
evaluate(y_pred, df['label'])
# AUTOLAB_IGNORE_STOP
###Output
_____no_output_____ |
doc/source/tracking/pypistats/get_pypi_stats.ipynb | ###Markdown
icepyx PyPI StatisticsUse PyPIStats library to get data on PyPI downloads of icepyx (or any other package)See the [pypistats website](https://github.com/hugovk/pypistats) for potential calls, options, and formats (e.g. markdown, rst, html, json, numpy, pandas)**Note: currently this needs to be run manually (should be able to run all cells) and the changes committed.**
###Code
import os
import pypistats
import pandas as pd
# !pip install --upgrade "pypistats[pandas]" # may need this if pypistats wasn't installed with it
# Note: a numpy version is also available
cwd = os.getcwd()
trackpath= cwd + '/' # '/doc/source/tracking/pypistats/'
downloadfn = "downloads_data.csv"
sysdownloadfn = "sys_downloads_data.csv"
downloads = pypistats.overall("icepyx", total=True, format="pandas").drop(columns=['percent'])
downloads = downloads[downloads.category != "Total"]
# try:
exist_downloads = pd.read_csv(trackpath+downloadfn)#.drop(columns=['percent'])
# exist_downloads = exist_downloads[exist_downloads.category != "Total"]
dl_data = downloads.merge(exist_downloads, how='outer',
on=['category','date','downloads']).reindex()
# except:
# dl_data = downloads
dl_data.to_csv(trackpath+downloadfn, index=False)
sysdownloads = pypistats.system("icepyx", total=True, format="pandas").drop(columns=['percent'])
sysdownloads = sysdownloads[sysdownloads.category != "Total"]
# try:
exist_sysdownloads = pd.read_csv(trackpath+sysdownloadfn)#.drop(columns=['percent'])
# exist_sysdownloads = exist_sysdownloads[exist_sysdownloads.category != "Total"]
# exist_sysdownloads['category'] = exist_sysdownloads['category'].fillna("null")
sysdl_data = sysdownloads.merge(exist_sysdownloads, how='outer',
on=['category','date','downloads']).reindex()
# except:
# dl_data = sysdownloads
sysdl_data.to_csv(trackpath+sysdownloadfn, index=False)
dl_data = dl_data.groupby("category").get_group("without_mirrors").sort_values("date")
chart = dl_data.plot(x="date", y="downloads", figsize=(10, 2),
label="Number of PyPI Downloads")
chart.figure.show()
chart.figure.savefig(trackpath+"downloads.svg")
###Output
<ipython-input-5-9ae24e11e434>:5: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
chart.figure.show()
###Markdown
icepyx PyPI StatisticsUse PyPIStats library to get data on PyPI downloads of icepyx (or any other package)See the [pypistats website](https://github.com/hugovk/pypistats) for potential calls, options, and formats (e.g. markdown, rst, html, json, numpy, pandas)**Note: currently this needs to be run manually (should be able to run all cells) and the changes committed.**
###Code
import pypistats
import pandas as pd
# !pip install --upgrade "pypistats[pandas]" # may need this if pypistats wasn't installed with it
# Note: a numpy version is also available
trackpath='doc/source/tracking/pypistats/'
downloadfn = "downloads_data.csv"
sysdownloadfn = "sys_downloads_data.csv"
downloads = pypistats.overall("icepyx", total=True, format="pandas")
try:
exist_downloads = pd.read_csv(trackpath+downloadfn)
    # note: DataFrame.merge() has no ignore_index argument, so it was removed here
    dl_data = downloads.merge(exist_downloads, how='outer',
                              on=['category','date','downloads'])
except:
dl_data = downloads
dl_data.to_csv(trackpath+downloadfn)
sysdownloads = pypistats.system("icepyx", total=True, format="pandas")
try:
    # read the per-system downloads file (not the overall downloads file)
    exist_sysdownloads = pd.read_csv(trackpath+sysdownloadfn)
    dl_data = sysdownloads.merge(exist_sysdownloads, how='outer',
                                 on=['category','date','downloads'])
except:
dl_data = sysdownloads
dl_data.to_csv(trackpath+sysdownloadfn)
downloads = downloads.groupby("category").get_group("without_mirrors").sort_values("date")
chart = downloads.plot(x="date", y="downloads", figsize=(10, 2),
label="Number of PyPI Downloads")
chart.figure.show()
chart.figure.savefig(trackpath+"downloads.png")
###Output
_____no_output_____
###Markdown
icepyx PyPI StatisticsUse PyPIStats library to get data on PyPI downloads of icepyx (or any other package)See the [pypistats website](https://github.com/hugovk/pypistats) for potential calls, options, and formats (e.g. markdown, rst, html, json, numpy, pandas)**Note: currently this needs to be run manually (should be able to run all cells) and the changes committed.**
###Code
import os
import pypistats
import pandas as pd
# !pip install --upgrade "pypistats[pandas]" # may need this if pypistats wasn't installed with it
# Note: a numpy version is also available
cwd = os.getcwd()
trackpath= cwd + '/' # '/doc/source/tracking/pypistats/'
downloadfn = "downloads_data.csv"
sysdownloadfn = "sys_downloads_data.csv"
downloads = pypistats.overall("icepyx", total=True, format="pandas").drop(columns=['percent'])
downloads = downloads[downloads.category != "Total"]
# try:
exist_downloads = pd.read_csv(trackpath+downloadfn)#.drop(columns=['percent'])
# exist_downloads = exist_downloads[exist_downloads.category != "Total"]
dl_data = downloads.merge(exist_downloads, how='outer',
on=['category','date','downloads']).reindex()
# except:
# dl_data = downloads
dl_data.to_csv(trackpath+downloadfn, index=False)
sysdownloads = pypistats.system("icepyx", total=True, format="pandas").drop(columns=['percent'])
sysdownloads = sysdownloads[sysdownloads.category != "Total"]
# try:
exist_sysdownloads = pd.read_csv(trackpath+sysdownloadfn)#.drop(columns=['percent'])
# exist_sysdownloads = exist_sysdownloads[exist_sysdownloads.category != "Total"]
exist_sysdownloads['category'] = exist_sysdownloads['category'].fillna("null")
sysdl_data = sysdownloads.merge(exist_sysdownloads, how='outer',
on=['category','date','downloads']).reindex()
# except:
# dl_data = sysdownloads
sysdl_data.to_csv(trackpath+sysdownloadfn, index=False)
dl_data = dl_data.groupby("category").get_group("without_mirrors").sort_values("date")
chart = dl_data.plot(x="date", y="downloads", figsize=(10, 2),
label="Number of PyPI Downloads")
chart.figure.show()
chart.figure.savefig(trackpath+"downloads.svg")
###Output
_____no_output_____ |
week12_DL/day1/theory/activation_functions.ipynb | ###Markdown
ReLu
###Code
def ReLU(x):
return (abs(x) + x) / 2
ReLU(32626)
###Output
_____no_output_____
###Markdown
Sigmoid
###Code
import math
def sigmoid(x):
return 1 / (1 + math.exp(-x))
sigmoid(-590)
###Output
_____no_output_____
###Markdown
Hyperbolic tangent
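For symmetry with the hand-written `ReLU` and `sigmoid` above, a plain-`math` version could look like the sketch below (an illustrative addition; the cell that follows simply points to TensorFlow's built-in implementation):
```python
import math

def tanh_manual(x):
    # (e^x - e^-x) / (e^x + e^-x); numerically naive for large |x|
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

tanh_manual(-1.)  # ~ -0.7616, matching tf.keras.activations.tanh(-1.)
```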
###Code
import tensorflow as tf
# tf.keras.activations.tanh()
###Output
_____no_output_____
###Markdown
TESTING
###Code
import numpy as np
import matplotlib.pylab as plt
_ = np.arange(10)
x_ = np.random.random(10) -5
x_
plt.plot(_, x_)
plt.plot(_, ReLU(x_))
y_sigmoid_ = [sigmoid(x) for x in x_]
print(y_sigmoid_)
plt.scatter(x_, y_sigmoid_)
tf.keras.activations.tanh(-1.)
###Output
_____no_output_____ |
analysis_on_papers/analyze_N_mean0_38.ipynb | ###Markdown
Full length AIB9 trajectory at 500K (4M) without discretization
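As a reminder (not part of the original notebook text), the free-energy profiles computed in the cells below are obtained by Boltzmann inversion of the histogrammed probabilities, $$ F(\chi) = -k_B T \, \ln p(\chi), \qquad \beta = \frac{1}{k_B T}, $$ which is exactly what the `(-1/beta)*np.log(...)` lines implement.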
###Code
infile = '../../DATA/Train/AIB9/sum_phi_200ns.npy'
input_x = np.load(infile)
# data
T = 500 # unit: K
beta = 1000/(T*8.28) # kT=(8.28/1000)*T (kJ/mol/K)
hist_200ns = np.histogram(input_x, bins=32)
prob_200ns = hist_200ns[0].T/np.sum(hist_200ns[0].T)
freeE_200ns = (-1/beta)*np.log(prob_200ns-1e-11)
mids_200ns = 0.5*(hist_200ns[1][1:]+hist_200ns[1][:-1])
hist_100ns = np.histogram(input_x[:2000000], bins=32)
prob_100ns = hist_100ns[0].T/np.sum(hist_100ns[0].T)
freeE_100ns = (-1/beta)*np.log(prob_100ns-1e-11)
mids_100ns = 0.5*(hist_100ns[1][1:]+hist_100ns[1][:-1])
# plotting
fig, ax=plt.subplots(figsize=(6,4), nrows=1, ncols=1)
ax.plot(mids_100ns, freeE_100ns-np.min(freeE_100ns), linestyle='-.', color='green', label='MD 100$ns$')
ax.plot(mids_200ns, freeE_200ns-np.min(freeE_200ns), linestyle='--', color='red', label='MD 200$ns$')
ax.tick_params(axis='both', which='both', labelsize=20, direction='in')
ax.set_xlabel('$\chi$ (Radians)', size=20)
ax.set_ylabel('Free energy (kJ/mol)', size=20)
ax.set_xlim(-np.pi*5, np.pi*5)
ax.set_ylim(-1, 61)
ax.legend(loc='upper center', fontsize=20)
fig.tight_layout()
plt.savefig('input.pdf', format='pdf', dpi=300, pad_inches = 0.05)
plt.show()
###Output
_____no_output_____
###Markdown
Discretizing 4M trajectory
###Code
bins=np.arange(-15., 17, 1)
num_bins=len(bins)
idx_input_x=np.digitize(input_x, bins)
###Output
_____no_output_____
###Markdown
Longer prediction using second training
###Code
pred2=[]
for i in range(10):
pdfile = './N_mean0_38/Output-conc/{}/prediction.npy'.format(i)
prediction2 = np.load(pdfile)
pred2.append(prediction2)
###Output
_____no_output_____
###Markdown
Longer prediction using first training
###Code
pred1={}
for i in range(10):
pdfile = './Output-long/{}/prediction.npy'.format(i)
prediction1 = np.load(pdfile)
pred1[i]=prediction1
fig, ax = plt.subplots(figsize=(15,2*3), nrows=3, ncols=1)
ax[0].plot(idx_input_x[:100000], label='input')
ax[0].tick_params(axis='both', which='both', direction='in', labelsize=16)
ax[0].set_yticks(np.arange(0,32,8))
ax[0].set_xlabel('Steps', size=16)
ax[0].set_ylabel('State', size=16)
ax[0].legend(fontsize=16)
for i in range(2):
ax[i+1].plot(pred2[i][:100000], label='prediction {}'.format(i+1))
ax[i+1].tick_params(axis='both', which='both', direction='in', labelsize=16)
ax[i+1].set_yticks(np.arange(0,32,8))
ax[i+1].set_xlabel('Steps', size=16)
ax[i+1].set_ylabel('State', size=16)
ax[i+1].legend(loc='lower left', fontsize=16)
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Eigenspectrum
###Code
def eigendecompose(transmat):
eigenValues, eigenVectors = np.linalg.eig(transmat)
idx = eigenValues.real.argsort()[::-1] # Sorting by eigenvalues
eigenValues = eigenValues[idx] # Order eigenvalues
eigenVectors = eigenVectors[:,idx] # Order eigenvectors
return eigenValues.real, eigenVectors.real
# Input
hist_x = np.histogram2d(input_x[:-1], input_x[1:], bins=32) # (old, new)
trans_pi = hist_x[0]/hist_x[0].sum(axis=1, keepdims=True)
eval_pi, evec_pi = eigendecompose(trans_pi)
# After second training
eval_p2_arr=[]
for i in range(10):
hist_p2 = np.histogram2d(pred2[i][:-1], pred2[i][1:], bins=32) # (old, new)
trans_pp2 = hist_p2[0]/hist_p2[0].sum(axis=1, keepdims=True)
eval_pp2, evec_pp2 = eigendecompose(trans_pp2)
eval_p2_arr.append(eval_pp2)
mean_eval_p2, stdv_eval_p2 = np.mean(eval_p2_arr, axis=0), np.std(eval_p2_arr, axis=0)/np.sqrt(10)
# After first training
eval_p1_arr=[]
for i in range(10):
hist_p1 = np.histogram2d(pred1[i][:-1], pred1[i][1:], bins=32) # (old, new)
trans_pp1 = hist_p1[0]/hist_p1[0].sum(axis=1, keepdims=True)
eval_pp1, evec_pp1 = eigendecompose(trans_pp1)
eval_p1_arr.append(eval_pp1)
mean_eval_p1, stdv_eval_p1 = np.mean(eval_p1_arr, axis=0), np.std(eval_p1_arr, axis=0)/np.sqrt(10)
fig, ax = plt.subplots(figsize=(12,4))
ax.scatter(np.arange(len(eval_pi)), eval_pi, s=80, marker='s', facecolor='none', edgecolor='r', label='MD 200$ns$')
ax.errorbar(np.arange(32), mean_eval_p1, yerr=stdv_eval_p1, fmt='o', fillstyle='none', markersize=10, capsize=10, label='LSTM')
ax.errorbar(np.arange(32), mean_eval_p2, yerr=stdv_eval_p2, fmt='o', fillstyle='none', markersize=10, capsize=10, label='ps-LSTM')
ax.tick_params(axis='both', which='both', direction='in', labelsize=16)
ax.set_xlabel('Indices', size=16)
ax.set_ylabel('Eigenvalues', size=16)
ax.set_xticks(np.arange(32))
ax.set_xlim(-0.5,32)
ax.set_ylim(-0.2, 1.2)
ax.legend(loc='upper right', fontsize=16)
fig.tight_layout()
plt.savefig('eigenspectrum_NN0_38.pdf', format='pdf', dpi=300, pad_inches = 0.05)
plt.show()
###Output
_____no_output_____
###Markdown
Compare kappa
###Code
def compute_kappa_N_mean(pred):
kappa_arr = []
N_mean_arr = []
for i in range(10):
N0=len(np.where(pred[i]<=15)[0])
N1=len(np.where(pred[i]>=16)[0])
kappa_i = N0/N1
di=1
N_mean_i=np.sum(np.abs(pred[i][:-di]-pred[i][di:])==1)
N_mean_i/=len(pred[i])
kappa_arr.append(kappa_i)
N_mean_arr.append(N_mean_i)
return kappa_arr, N_mean_arr
# Second training
kappa2_arr, N_mean2_arr = compute_kappa_N_mean(pred2)
# First training
kappa1_arr, N_mean1_arr = compute_kappa_N_mean(pred1)
print("Second training:")
print("Mean kappa: ", np.mean(kappa2_arr), "; Stdv kappa: ", np.std(kappa2_arr)/np.sqrt(10))
print("Mean N_mean: ", np.mean(N_mean2_arr), "; Stdv N_mean: ", np.std(N_mean2_arr)/np.sqrt(10))
print("First training:")
print("Mean kappa: ", np.mean(kappa1_arr), "; Stdv kappa: ", np.std(kappa1_arr)/np.sqrt(10))
print("Mean N_mean: ", np.mean(N_mean1_arr), "; Stdv N_mean: ", np.std(N_mean1_arr)/np.sqrt(10))
T = 500 # unit: K
beta = 1000/(T*8.28) # kT=(8.28/1000)*T (kJ/mol/K)
bins=np.arange(-15., 17, 1)
num_bins=len(bins)
idx_input_x=np.digitize(input_x, bins)
hist0=np.histogram(idx_input_x, bins=32)
prob0=hist0[0].T/np.sum(hist0[0].T)
freeE0=(-1/beta)*np.log(prob0+1e-11)
mids0=0.5*(hist0[1][1:]+hist0[1][:-1])
freeE1={}
for i in range(10):
hist1=np.histogram(pred1[i], bins=32)
prob1=hist1[0].T/np.sum(hist1[0].T)
freeE1[i]=(-1/beta)*np.log(prob1+1e-11)
mids1=0.5*(hist1[1][1:]+hist1[1][:-1])
freeE2={}
for i in range(10):
hist2=np.histogram(pred2[i], bins=32)
prob2=hist2[0].T/np.sum(hist2[0].T)
freeE2[i]=(-1/beta)*np.log(prob2+1e-11)
mids2=0.5*(hist2[1][1:]+hist2[1][:-1])
###Output
_____no_output_____
###Markdown
compared with prediction by second training
###Code
freeE2_arr = np.array(list(freeE2.values()))
mean_freeE2=np.mean(freeE2_arr, axis=0)
stdv_freeE2=np.std(freeE2_arr, axis=0)/np.sqrt(len(freeE2_arr))
fig, ax=plt.subplots(figsize=(6,4), nrows=1, ncols=1)
ax.plot(mids_200ns, freeE_200ns-np.min(freeE_200ns), linestyle='--', color='red', label='MD 200$ns$')
ax.fill_between(mids2-15.5, mean_freeE2-np.min(mean_freeE2)-stdv_freeE2, mean_freeE2-np.min(mean_freeE2)+stdv_freeE2, label='ps-LSTM, $\langle N\\rangle=0.38$',
alpha=0.5, edgecolor='blue', facecolor='#069AF3')
ax.tick_params(axis='both', which='both', labelsize=20, direction='in')
ax.set_xlabel('$\chi$ (Radians)', size=20)
ax.set_ylabel('Free energy (kJ/mol)', size=20)
ax.set_xlim(-np.pi*5, np.pi*5)
ax.set_ylim(-1, 61)
ax.legend(loc='upper center', fontsize=20)
fig.tight_layout()
plt.savefig('training2_NN0_38.pdf', format='pdf', dpi=300, pad_inches = 0.05)
plt.show()
###Output
_____no_output_____
###Markdown
compared with prediction by first training
###Code
freeE1_arr = np.array(list(freeE1.values()))
mean_freeE1=np.mean(freeE1_arr, axis=0)
stdv_freeE1=np.std(freeE1_arr, axis=0)/np.sqrt(len(freeE1_arr))
fig, ax=plt.subplots(figsize=(6,4), nrows=1, ncols=1)
ax.plot(mids_200ns, freeE_200ns-np.min(freeE_200ns), linestyle='--', color='red', label='MD 200$ns$')
# ax.errorbar(mids1-16, mean_freeE1-np.min(mean_freeE1), yerr=stdv_freeE1, linestyle='--', color='coral', capsize=3, label='LSTM')
ax.fill_between(mids1-15.5, mean_freeE1-np.min(mean_freeE1)-stdv_freeE1, mean_freeE1-np.min(mean_freeE1)+stdv_freeE1, label='LSTM',
alpha=0.5, edgecolor='#CC4F1B', facecolor='coral')
ax.tick_params(axis='both', which='both', labelsize=20, direction='in')
ax.set_xlabel('$\chi$ (Radians)', size=20)
ax.set_ylabel('Free energy (kJ/mol)', size=20)
ax.set_xlim(-np.pi*5, np.pi*5)
ax.set_ylim(-1, 61)
ax.legend(loc='upper center', fontsize=20)
fig.tight_layout()
# plt.savefig('training1.pdf', format='pdf', dpi=300, pad_inches = 0.05)
plt.show()
###Output
_____no_output_____ |
automl-in-action/01_end_to_end_ml_pipeline.ipynb | ###Markdown
End-to-end ML pipeline Setup
###Code
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.linear_model import LinearRegression
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Assembling the dataset
###Code
# Load the California housing dataset
house_dataset = fetch_california_housing()
# Display the oringal data
house_dataset.keys()
# Extract features with their names into a dataframe
data = pd.DataFrame(house_dataset.data, columns=house_dataset.feature_names)
# Extract the target into a pd.Series object named MedPrice
target = pd.Series(house_dataset.target, name="MedPrice")
# Visualize the first 5 samples of the data
data.head(5)
###Output
_____no_output_____
###Markdown
Split the dataset into training and test set
###Code
# Split data into training and test dataset
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2, random_state=42)
# Check the shape of whole dataset and the splited training and test set
print("--Shape of the whole data--\n {}".format(data.shape))
print("\n--Shape of the target vector--\n {}".format(target.shape))
print("\n--Shape of the training data--\n {}".format(X_train.shape))
print("\n--Shape of the testing data--\n {}".format(X_test.shape))
(data.shape, target.shape), (X_train.shape, y_train.shape), (X_test.shape, y_test.shape)
###Output
_____no_output_____
###Markdown
Data preprocessing Q1: What are the data types of the values in each feature?
###Code
data.dtypes
# Check for feature value type
print("-- Feature type --\n{}".format(data.dtypes))
print("\n-- Target type --\n{}".format(target.dtypes))
###Output
-- Feature type --
MedInc float64
HouseAge float64
AveRooms float64
AveBedrms float64
Population float64
AveOccup float64
Latitude float64
Longitude float64
dtype: object
-- Target type --
float64
###Markdown
Q2: How many distinct values does each feature have in the dataset?
###Code
# Check for unique feature values
print("\n-- # of unique feature values --\n{}".format(data.nunique()))
###Output
-- # of unique feature values --
MedInc 12928
HouseAge 52
AveRooms 19392
AveBedrms 14233
Population 3888
AveOccup 18841
Latitude 862
Longitude 844
dtype: int64
###Markdown
Q3: What are the scale and basic statistics of each feature?
###Code
# Viewing the data statistics
pd.options.display.float_format = "{:,.2f}".format
data.describe()
###Output
_____no_output_____
###Markdown
Q4: Are there missing values contained in the data?
###Code
# Copy data to avoid inplace
train_data = X_train.copy()
# Add a column "MedPrice" for the target house price
train_data["MedPrice"] = y_train
# Check if there're missing values
print(
"\n-- check missing values in training data --\n{}".format(
train_data.isnull().any()
)
)
print("\n-- check missing values in test data --\n{}".format(X_test.isnull().any()))
###Output
-- check missing values in training data --
MedInc False
HouseAge False
AveRooms False
AveBedrms False
Population False
AveOccup False
Latitude False
Longitude False
MedPrice False
dtype: bool
-- check missing values in test data --
MedInc False
HouseAge False
AveRooms False
AveBedrms False
Population False
AveOccup False
Latitude False
Longitude False
dtype: bool
###Markdown
Feature engineering
###Code
# Plot the correlation across all the features and the target
plt.figure(figsize=(30, 10))
# Calculates the Pearson’s correlation coefficient matrix
correlation_matrix = train_data.corr().round(2)
sns.heatmap(data=correlation_matrix, square=True, annot=True, cmap="Blues") # fmt='.1f', annot_kws={'size':15},
# Select high correlation features & display the pairplot
selected_feature_set = ["MedInc", "AveRooms"] # 'PTRATIO', , 'Latitude', 'HouseAge'
sub_train_data = train_data[selected_feature_set + ["MedPrice"]]
# Extract the new training features
X_train = sub_train_data.drop(["MedPrice"], axis=1)
# Select same feature sets for test data
X_test = X_test[selected_feature_set]
sns.pairplot(sub_train_data, height=3.5, plot_kws={"alpha": 0.4})
plt.tight_layout()
###Output
_____no_output_____
###Markdown
ML algorithm selection Linear regression
###Code
# Training
# Create a Linear regressor
linear_regressor = LinearRegression()
# Train the model using the training sets
linear_regressor.fit(X_train, y_train)
# Display the learned parameters
# Convert the coefficient values to a dataframe
coeffcients = pd.DataFrame(
linear_regressor.coef_, X_train.columns, columns=["Coefficient"]
)
# Display the intercept value
print("Learned intercept: {:.2f}".format(linear_regressor.intercept_))
print("\n--The learned coefficient value learned by the linear regression model--")
print(coeffcients)
# Model prediction on training data
y_pred_train = linear_regressor.predict(X_train)
print("\n--Train MSE--\n{}".format(mean_squared_error(y_train, y_pred_train)))
# Testing
y_pred_test = linear_regressor.predict(X_test)
print("Test MSE: {:.2f}".format(mean_squared_error(y_test, y_pred_test)))
# Visualizing the differences between actual prices and predicted values
plt.scatter(y_test, y_pred_test)
plt.xlabel("MedPrice")
plt.ylabel("Predicted MedPrice")
plt.title("MedPrice vs Predicted MedPrice")
plt.show()
# Checking Normality of errors
sns.distplot(y_test - y_pred_test)
plt.title("Histogram of Residuals")
plt.xlabel("Residuals")
plt.ylabel("Frequency")
plt.show()
###Output
/usr/local/lib/python3.7/dist-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
Decision tree
###Code
# Import library for decision tree
from sklearn.tree import DecisionTreeRegressor
tree_regressor = DecisionTreeRegressor(max_depth=3, random_state=42)
tree_regressor.fit(X_train, y_train)
# Model prediction on training & test data
y_pred_train = tree_regressor.predict(X_train)
y_pred_test = tree_regressor.predict(X_test)
print("Train MSE: {:.2f}".format(mean_squared_error(y_train, y_pred_train)))
print("Test MSE: {:.2f}".format(mean_squared_error(y_test, y_pred_test)))
# Plot outputs
# Visualizing the differences between actual prices and predicted values
plt.scatter(y_test, y_pred_test)
plt.xlabel("MedPrice")
plt.ylabel("Predicted MedPrice")
plt.title("MedPrice vs Predicted MedPrice")
plt.show()
# Visualizing the decision tree
from six import StringIO
import sklearn.tree as tree
import pydotplus
from IPython.display import Image
dot_data = StringIO()
tree.export_graphviz(
tree_regressor,
out_file=dot_data,
class_names=["MedPrice"], # the target names.
feature_names=selected_feature_set, # the feature names.
filled=True, # Whether to fill in the boxes with colours.
rounded=True, # Whether to round the corners of the boxes.
special_characters=True,
)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
###Output
_____no_output_____
###Markdown
Fine-Tuning: tune the tree depth hyperparameter in the tree regressor
###Code
kf = KFold(n_splits=5) # sample indices of datasets for 5-fold cv
cv_sets = []
for train_index, test_index in kf.split(X_train):
cv_sets.append(
(
X_train.iloc[train_index],
y_train.iloc[train_index],
X_train.iloc[test_index],
y_train.iloc[test_index],
)
) # construct 5-fold cv datasets
max_depths = list(range(1, 11)) # candidate max_depth hyperparamters
for max_depth in max_depths:
cv_results = []
regressor = DecisionTreeRegressor(max_depth=max_depth, random_state=42)
# loop through all the cv sets and average the validation results
for (x_tr, y_tr, x_te, y_te,) in cv_sets:
regressor.fit(x_tr, y_tr)
cv_results.append(mean_squared_error(regressor.predict(x_te), y_te))
print("Tree depth: {}, Avg. MSE: {}".format(max_depth, np.mean(cv_results)))
# Build up the decision tree regressor
regressor = DecisionTreeRegressor(random_state=42)
# Create a dictionary for the hyperparameter 'max_depth' with a range from 1 to 10
hps = {"max_depth": list(range(1, 11))}
# Transform 'performance_metric' into a scoring function using 'make_scorer'.
# The default scorer function is the greater the better, here MSE is the lower the better,
# so we set ``greater_is_better'' to be False.
scoring_fnc = make_scorer(mean_squared_error, greater_is_better=False)
# Create the grid search cv object (5-fold cross-validation)
grid_search = GridSearchCV(estimator=regressor, param_grid=hps, scoring=scoring_fnc, cv=5)
# Fit the grid search object to the training data to search the optimal model
grid_search = grid_search.fit(X_train, y_train)
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(-mean_score, params)
plt.plot(hps["max_depth"], -cvres["mean_test_score"])
plt.title("5-fold CV MSE change with tree max depth")
plt.xlabel("max_depth")
plt.ylabel("MSE")
plt.show()
###Output
0.9167053334390705 {'max_depth': 1}
0.7383634845663015 {'max_depth': 2}
0.68854467373395 {'max_depth': 3}
0.6388802215441052 {'max_depth': 4}
0.6229559075742178 {'max_depth': 5}
0.6181574550660847 {'max_depth': 6}
0.6315191091236836 {'max_depth': 7}
0.6531981343523263 {'max_depth': 8}
0.6778198281721838 {'max_depth': 9}
0.7023437729999482 {'max_depth': 10}
###Markdown
Retrieve the best model
###Code
grid_search.best_params_
best_tree_regressor = grid_search.best_estimator_
# Produce the value for 'max_depth'
print("Best hyperparameter is {}.".format(grid_search.best_params_))
# Model prediction on training & test data
y_pred_train = best_tree_regressor.predict(X_train)
y_pred_test = best_tree_regressor.predict(X_test)
print("\n--Train MSE--\n{}".format(mean_squared_error(y_train, y_pred_train)))
print("\n--Test MSE--\n{}\n".format(mean_squared_error(y_test, y_pred_test)))
###Output
Best hyperparameter is {'max_depth': 6}.
--Train MSE--
0.5825729954046606
--Test MSE--
0.6422136569733781
###Markdown
Real test curve vs. cross-validation curve
###Code
test_results = []
for max_depth in hps["max_depth"]:
tmp_results = []
regressor = DecisionTreeRegressor(max_depth=max_depth, random_state=42)
regressor.fit(X_train, y_train)
test_results.append(mean_squared_error(regressor.predict(X_test), y_test))
print("Tree depth: {}, Test MSE: {}".format(max_depth, test_results[-1]))
plt.plot(hps["max_depth"], -cvres["mean_test_score"])
plt.plot(hps["max_depth"], test_results)
plt.title("Comparison of the changing curve of the CV results and real test results")
plt.legend(["CV", "Test"])
plt.xlabel("max_depth")
plt.ylabel("MSE")
plt.show()
###Output
Tree depth: 1, Test MSE: 0.9441349708215667
Tree depth: 2, Test MSE: 0.7542635096031615
Tree depth: 3, Test MSE: 0.7063353387614023
Tree depth: 4, Test MSE: 0.6624543803195595
Tree depth: 5, Test MSE: 0.6455716785858321
Tree depth: 6, Test MSE: 0.6422136569733781
Tree depth: 7, Test MSE: 0.6423777285754818
Tree depth: 8, Test MSE: 0.6528185531960586
Tree depth: 9, Test MSE: 0.6748067953031296
Tree depth: 10, Test MSE: 0.7125774158492032
|
Sprint challenge/sagemaker_notebook.ipynb | ###Markdown
Part 1. SageMaker and Dask
###Code
import dask.dataframe as dd
###Output
_____no_output_____
###Markdown
Create dataframe
###Code
# Read in all 5 CSV files at once
df = dd.read_csv('*.csv')
df.head()
# Shape gives us the columns from the schema, but is apparently not
# enough to get past the lazy evaluation and tell me the number of rows
df.shape
# There we go, 1956 rows.
len(df)
###Output
_____no_output_____
###Markdown
How many comments are spam?
###Code
# Group by spam or not spam
df.groupby('CLASS').count().compute()
# Create lowercase column
df['lowercase'] = df['CONTENT'].apply(str.lower)
# Create column that checks for the word 'check'
df['check'] = df['lowercase'].apply(lambda x: 'check' in x)
# Groupby spam status and the word 'check'
df.groupby(['check','CLASS']).count().compute()
###Output
_____no_output_____
###Markdown
And there we go. Among comments containing 'check' (check=True), 19 are ham (CLASS=0) and 461 are spam (CLASS=1). Part 1 bonus!
###Code
# Creating a distributed client
from dask.distributed import Client
client = Client()
client
###Output
_____no_output_____ |
code/Least_Squares.ipynb | ###Markdown
Least squares fit example Let's find the least squares fit for a toy dataset. First, create the data.
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(2017)
x = np.linspace(-10,10,100)
y = 1 / (1 + np.exp(-x))
x = x + 0.1 * np.random.randn(x.size)
y = y + 0.1 * np.random.randn(y.size)
plt.plot(x, y, 'ro')
plt.xlabel("x")
plt.ylabel("y")
plt.axis('tight')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Now, let's assume a linear model between x and y: $$y[n] = ax[n] + b + w[n],$$ and find the LS fit for $\boldsymbol{\theta} = [a,b]^T$. We need to represent the data in a matrix form: $$\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{w}.$$ Next, we will minimize the residual error $J(\boldsymbol{\theta}) = \mathbf{w}^T\mathbf{w} = (\mathbf{y} - \mathbf{X}\boldsymbol{\theta})^T(\mathbf{y} - \mathbf{X}\boldsymbol{\theta})$.
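Setting the gradient of $J(\boldsymbol{\theta})$ to zero gives the familiar normal-equation solution (added here as a brief reminder; it is what the next cell computes): $$ \nabla_{\boldsymbol{\theta}} J = -2\mathbf{X}^T(\mathbf{y} - \mathbf{X}\boldsymbol{\theta}) = 0 \quad \Rightarrow \quad \hat{\boldsymbol{\theta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}. $$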
###Code
X = np.column_stack([x, np.ones_like(x)])
theta = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)
###Output
_____no_output_____
###Markdown
Plot the result:
###Code
a, b = theta
plt.plot(x, y, 'ro')
plt.plot(x, a*x + b, 'b-')
plt.xlabel("x")
plt.ylabel("y")
plt.axis('tight')
plt.grid()
plt.title('Residual error: %.5f' % (np.sum((y - (a*x + b))**2)))
plt.show()
###Output
_____no_output_____
###Markdown
The nice property of LS is that we can throw in whatever columns we wish. See what happens with the second-order polynomial: $$y[n] = ax^2[n] + bx[n] + c + w[n].$$ Also: we use the ready-made function `np.linalg.lstsq`.
###Code
X = np.column_stack([x**2, x, np.ones_like(x)])
theta, residual, _, _ = np.linalg.lstsq(X, y, rcond=None)
a, b, c = theta
plt.plot(x, y, 'ro')
plt.plot(x, a*x**2 + b*x + c, 'b-')
plt.xlabel("x")
plt.ylabel("y")
plt.axis('tight')
plt.title('Residual error: %.5f' % (residual))
plt.grid()
plt.show()
###Output
_____no_output_____ |
docs/examples/ecg_delineate.ipynb | ###Markdown
Locate P, Q, S and T waves in ECG This example shows how to delineate the ECG peaks in Python using NeuroKit. This means detecting and locating all components of the QRS complex, including **P-peaks** and **T-peaks**, as well as their **onsets** and **offsets**, from an ECG signal. This example can be referenced by [citing the package](https://github.com/neuropsychology/NeuroKitcitation).
###Code
# Load NeuroKit and other useful packages
import neurokit2 as nk
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.rcParams['figure.figsize'] = [8, 5] # Bigger images
###Output
_____no_output_____
###Markdown
In this example, we will use a short segment of ECG signal with sampling rate of 3000 Hz. Find the R peaks
###Code
# Retrieve ECG data from data folder (sampling rate = 3000 Hz)
ecg_signal = nk.data(dataset="ecg_3000hz")['ECG']
# Extract R-peaks locations
_, rpeaks = nk.ecg_peaks(ecg_signal, sampling_rate=3000)
###Output
_____no_output_____
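###Markdown
As a quick optional sketch (not part of the original example): since `rpeaks['ECG_R_Peaks']` holds sample indices and the sampling rate is 3000 Hz, the R-R intervals and an approximate instantaneous heart rate can be derived directly with NumPy.
###Code
# R-R intervals in seconds and the corresponding heart rate in beats per minute
rr_intervals = np.diff(rpeaks['ECG_R_Peaks']) / 3000
heart_rate = 60 / rr_intervals
print(rr_intervals[:5])
print(heart_rate[:5])
###Output
_____no_output_____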
###Markdown
The [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_peaks>) function will return a dictionary containing the samples at which R-peaks are located. Let's visualize the R-peak locations in the signal to make sure they were detected correctly.
###Code
# Visualize R-peaks in ECG signal
plot = nk.events_plot(rpeaks['ECG_R_Peaks'], ecg_signal)
# Zooming into the first 5 R-peaks
plot = nk.events_plot(rpeaks['ECG_R_Peaks'][:5], ecg_signal[:20000])
###Output
_____no_output_____
###Markdown
Visually, the R-peaks seem to have been correctly identified. You can also explore searching for R-peaks using different methods provided by Neurokit [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_peaks>). Locate other waves (P, Q, S, T) and their onset and offset In [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>), Neurokit implements different methods to segment the QRS complexes. There are the derivative method and the other methods that make use of Wavelet to delineate the complexes. Peak method First, let's take a look at the 'peak' method and its output.
###Code
# Delineate the ECG signal
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak")
# Visualize the T-peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'],
waves_peak['ECG_P_Peaks'],
waves_peak['ECG_Q_Peaks'],
waves_peak['ECG_S_Peaks']], ecg_signal)
# Zooming into the first 3 R-peaks, with focus on T_peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'][:3],
waves_peak['ECG_P_Peaks'][:3],
waves_peak['ECG_Q_Peaks'][:3],
waves_peak['ECG_S_Peaks'][:3]], ecg_signal[:12500])
###Output
_____no_output_____
###Markdown
Visually, the 'peak' method seems to have correctly identified the P-peaks, Q-peaks, S-peaks and T-peaks for this signal, at least for the first few complexes. Well done, *peak*! However, it can be quite tiring to zoom in on each complex and inspect them one by one. To have a better overview of all complexes at once, you can make use of the `show` argument in [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>) as below.
###Code
# Delineate the ECG signal and visualizing all peaks of ECG complexes
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak", show=True, show_type='peaks')
###Output
_____no_output_____
###Markdown
The 'peak' method is doing a glamorous job of identifying all the ECG peaks for this piece of ECG signal. On top of the above peaks, the peak method also identifies the wave boundaries, namely the onsets of P-peaks and the offsets of T-peaks. You can vary the argument `show_type` to specify the information you would like to plot. Let's visualize them below:
###Code
# Delineate the ECG signal and visualizing all P-peaks boundaries
signal_peak, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak", show=True, show_type='bounds_P')
# Delineate the ECG signal and visualizing all T-peaks boundaries
signal_peaj, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak", show=True, show_type='bounds_T')
###Output
_____no_output_____
###Markdown
Both the onsets of P-peaks and the offsets of T-peaks appear to have been correctly identified here. This information will be used to delineate cardiac phases in [ecg_phase()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_phase>). Let's next take a look at the continuous wavelet method. Continuous Wavelet Method (CWT)
###Code
# Delineate the ECG signal
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='all')
###Output
_____no_output_____
###Markdown
By specifying *'all'* in the `show_type` argument, you can plot all delineated information output by the CWT method. However, it could be hard to evaluate the accuracy of the delineated information with everything plotted together. Let's tease them apart!
###Code
# Visualize P-peaks and T-peaks
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='peaks')
# Visualize T-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_T')
# Visualize P-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_P')
# Visualize R-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_R')
###Output
_____no_output_____
###Markdown
*Unlike the peak method, the continuous wavelet method does not identify the Q-peaks and S-peaks. However, it provides more information regarding the boundaries of the waves.* Visually, apart from a few exceptions, the CWT method is doing a great job. However, the P-wave boundaries are not very clearly identified here. Last but not least, we will look at the third method in the Neurokit [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>) function: the discrete wavelet method. Discrete Wavelet Method (DWT) - default method
###Code
# Delineate the ECG signal
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='all')
# Visualize P-peaks and T-peaks
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='peaks')
# visualize T-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_T')
# Visualize P-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_P')
# Visualize R-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_R')
###Output
_____no_output_____
###Markdown
Locate P, Q, S and T waves in ECG This example shows how to delineate the ECG peaks in Python using NeuroKit. This means detecting and locating all components of the QRS complex, including **P-peaks** and **T-peaks**, as well as their **onsets** and **offsets**, from an ECG signal. This example can be referenced by [citing the package](https://github.com/neuropsychology/NeuroKitcitation).
###Code
# Load NeuroKit and other useful packages
import neurokit2 as nk
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.rcParams['figure.figsize'] = [8, 5] # Bigger images
###Output
_____no_output_____
###Markdown
In this example, we will use a short segment of ECG signal with sampling rate of 3000 Hz. Find the R peaks
###Code
# Retrieve ECG data from data folder (sampling rate = 3000 Hz)
ecg_signal = nk.data(dataset="ecg_3000hz")['ECG']
# Extract R-peaks locations
_, rpeaks = nk.ecg_peaks(ecg_signal, sampling_rate=3000)
###Output
_____no_output_____
###Markdown
The [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_peaks>) function will return a dictionary containing the samples at which R-peaks are located. Let's visualize the R-peak locations in the signal to make sure they were detected correctly.
###Code
# Visualize R-peaks in ECG signal
plot = nk.events_plot(rpeaks['ECG_R_Peaks'], ecg_signal)
# Zooming into the first 5 R-peaks
plot = nk.events_plot(rpeaks['ECG_R_Peaks'][:5], ecg_signal[:20000])
###Output
_____no_output_____
###Markdown
Visually, the R-peaks seem to have been correctly identified. You can also explore searching for R-peaks using different methods provided by Neurokit [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_peaks>). Locate other waves (P, Q, S, T) and their onset and offset In [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>), Neurokit implements different methods to segment the QRS complexes. There are the derivative method and the other methods that make use of Wavelet to delineate the complexes. Peak method First, let's take a look at the 'peak' method and its output.
###Code
# Delineate the ECG signal
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000)
# Visualize the T-peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'],
waves_peak['ECG_P_Peaks'],
waves_peak['ECG_Q_Peaks'],
waves_peak['ECG_S_Peaks']], ecg_signal)
# Zooming into the first 3 R-peaks, with focus on T_peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'][:3],
waves_peak['ECG_P_Peaks'][:3],
waves_peak['ECG_Q_Peaks'][:3],
waves_peak['ECG_S_Peaks'][:3]], ecg_signal[:12500])
###Output
_____no_output_____
###Markdown
Visually, the 'peak' method seems to have correctly identified the P-peaks, Q-peaks, S-peaks and T-peaks for this signal, at least for the first few complexes. Well done, *peak*! However, it can be quite tiring to zoom in on each complex and inspect them one by one. To have a better overview of all complexes at once, you can make use of the `show` argument in [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>) as below.
###Code
# Delineate the ECG signal and visualizing all peaks of ECG complexes
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='peaks')
###Output
_____no_output_____
###Markdown
The 'peak' method is doing a glamorous job of identifying all the ECG peaks for this piece of ECG signal. On top of the above peaks, the peak method also identifies the wave boundaries, namely the onsets of P-peaks and the offsets of T-peaks. You can vary the argument `show_type` to specify the information you would like to plot. Let's visualize them below:
###Code
# Delineate the ECG signal and visualizing all P-peaks boundaries
signal_peak, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='bounds_P')
# Delineate the ECG signal and visualizing all T-peaks boundaries
signal_peaj, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='bounds_T')
###Output
_____no_output_____
###Markdown
Both the onsets of P-peaks and the offsets of T-peaks appear to have been correctly identified here. This information will be used to delineate cardiac phases in [ecg_phase()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_phase>). Let's next take a look at the continuous wavelet method. Continuous Wavelet Method (CWT)
###Code
# Delineate the ECG signal
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='all')
###Output
_____no_output_____
###Markdown
By specifying *'all'* in the `show_type` argument, you can plot all delineated information output by the CWT method. However, it could be hard to evaluate the accuracy of the delineated information with everything plotted together. Let's tease them apart!
###Code
# Visualize P-peaks and T-peaks
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='peaks')
# Visualize T-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_T')
# Visualize P-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_P')
# Visualize R-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_R')
###Output
_____no_output_____
###Markdown
*Unlike the peak method, the continuous wavelet method does not identify the Q-peaks and S-peaks. However, it provides more information regarding the boundaries of the waves.* Visually, apart from a few exceptions, the CWT method is doing a great job. However, the P-wave boundaries are not very clearly identified here. Last but not least, we will look at the third method in the Neurokit [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>) function: the discrete wavelet method. Discrete Wavelet Method (DWT)
###Code
# Delineate the ECG signal
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='all')
# Visualize P-peaks and T-peaks
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='peaks')
# visualize T-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_T')
# Visualize P-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_P')
# Visualize R-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_R')
###Output
_____no_output_____
###Markdown
Locate P, Q, S and T waves in ECG This example shows how to delineate the ECG peaks in Python using NeuroKit. This means detecting and locating all components of the QRS complex, including **P-peaks** and **T-peaks**, as well as their **onsets** and **offsets**, from an ECG signal.
###Code
# Load NeuroKit and other useful packages
import neurokit2 as nk
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.figsize'] = [8, 5] # Bigger images
%matplotlib inline
###Output
_____no_output_____
###Markdown
In this example, we will use a short segment of ECG signal with sampling rate of 3000 Hz. Find the R peaks
###Code
# Retrieve ECG data from data folder (sampling rate = 3000 Hz)
ecg_signal = nk.data(dataset="ecg_3000hz")['ECG']
# Extract R-peaks locations
_, rpeaks = nk.ecg_peaks(ecg_signal, sampling_rate=3000)
###Output
_____no_output_____
###Markdown
The [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_peaks>) function will return a dictionary containing the samples at which R-peaks are located. Let's visualize the R-peak locations in the signal to make sure they were detected correctly.
###Code
# Visualize R-peaks in ECG signal
plot = nk.events_plot(rpeaks['ECG_R_Peaks'], ecg_signal)
# Zooming into the first 5 R-peaks
plot = nk.events_plot(rpeaks['ECG_R_Peaks'][:5], ecg_signal[:20000])
###Output
_____no_output_____
###Markdown
Visually, the R-peaks seem to have been correctly identified. You can also explore searching for R-peaks using different methods provided by Neurokit [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_peaks>). Locate other waves (P, Q, S, T) and their onset and offset In [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>), Neurokit implements different methods to segment the QRS complexes. There are the derivative method and the other methods that make use of Wavelet to delineate the complexes. Peak method First, let's take a look at the 'peak' method and its output.
###Code
# Delineate the ECG signal
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000)
# Visualize the T-peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'],
waves_peak['ECG_P_Peaks'],
waves_peak['ECG_Q_Peaks'],
waves_peak['ECG_S_Peaks']], ecg_signal)
# Zooming into the first 3 R-peaks, with focus on T_peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'][:3],
waves_peak['ECG_P_Peaks'][:3],
waves_peak['ECG_Q_Peaks'][:3],
waves_peak['ECG_S_Peaks'][:3]], ecg_signal[:12500])
###Output
_____no_output_____
###Markdown
Visually, the 'peak' method seems to have correctly identified the P-peaks, Q-peaks, S-peaks and T-peaks for this signal, at least for the first few complexes. Well done, *peak*! However, it can be quite tiring to zoom in on each complex and inspect them one by one. To have a better overview of all complexes at once, you can make use of the `show` argument in [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>) as below.
###Code
# Delineate the ECG signal and visualizing all peaks of ECG complexes
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='peaks')
###Output
_____no_output_____
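###Markdown
A small optional sketch: `waves_peak` maps each wave label to one sample index per heartbeat, so it can be arranged into a tidy table for inspection (entries are assumed to be NaN where a wave could not be located).
###Code
# One row per detected beat, one column per delineated wave
waves_df = pd.DataFrame({key: pd.Series(values) for key, values in waves_peak.items()})
waves_df.head()
###Output
_____no_output_____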
###Markdown
The 'peak' method is doing a glamorous job of identifying all the ECG peaks for this piece of ECG signal. On top of the above peaks, the peak method also identifies the wave boundaries, namely the onsets of P-peaks and the offsets of T-peaks. You can vary the argument `show_type` to specify the information you would like to plot. Let's visualize them below:
###Code
# Delineate the ECG signal and visualizing all P-peaks boundaries
signal_peak, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='bounds_P')
# Delineate the ECG signal and visualizing all T-peaks boundaries
signal_peaj, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='bounds_T')
###Output
_____no_output_____
###Markdown
Both the onsets of P-peaks and the offsets of T-peaks appear to have been correctly identified here. This information will be used to delineate cardiac phases in [ecg_phase()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_phase>). Let's next take a look at the continuous wavelet method. Continuous Wavelet Method (CWT)
###Code
# Delineate the ECG signal
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='all')
###Output
_____no_output_____
###Markdown
By specifying *'all'* in the `show_type` argument, you can plot all delineated information output by the CWT method. However, it could be hard to evaluate the accuracy of the delineated information with everything plotted together. Let's tease them apart!
###Code
# Visualize P-peaks and T-peaks
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='peaks')
# Visualize T-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_T')
# Visualize P-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_P')
# Visualize R-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_R')
###Output
_____no_output_____
###Markdown
*Unlike the peak method, the continuous wavelet method does not identify the Q-peaks and S-peaks. However, it provides more information regarding the boundaries of the waves.* Visually, apart from a few exceptions, the CWT method is doing a great job. However, the P-wave boundaries are not very clearly identified here. Last but not least, we will look at the third method in the Neurokit [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>) function: the discrete wavelet method. Discrete Wavelet Method (DWT)
###Code
# Delineate the ECG signal
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='all')
# Visualize P-peaks and T-peaks
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='peaks')
# visualize T-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_T')
# Visualize P-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_P')
# Visualize R-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_R')
###Output
_____no_output_____
###Markdown
Locate P, Q, S and T waves in ECG This example shows how to delineate the ECG peaks in Python using NeuroKit. This means detecting and locating all components of the QRS complex, including **P-peaks** and **T-peaks**, as well as their **onsets** and **offsets**, from an ECG signal.
###Code
# Load NeuroKit and other useful packages
import neurokit2 as nk
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.figsize'] = [8, 5] # Bigger images
###Output
_____no_output_____
###Markdown
In this example, we will use a short segment of ECG signal with sampling rate of 3000 Hz. Find the R peaks
###Code
# Retrieve ECG data from data folder (sampling rate = 3000 Hz)
ecg_signal = nk.data(dataset="ecg_3000hz")['ECG']
# Extract R-peaks locations
_, rpeaks = nk.ecg_peaks(ecg_signal, sampling_rate=3000)
###Output
_____no_output_____
###Markdown
The [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_peaks>) function will return a dictionary containing the samples at which R-peaks are located. Let's visualize the R-peak locations in the signal to make sure they were detected correctly.
###Code
# Visualize R-peaks in ECG signal
plot = nk.events_plot(rpeaks['ECG_R_Peaks'], ecg_signal)
# Zooming into the first 5 R-peaks
plot = nk.events_plot(rpeaks['ECG_R_Peaks'][:5], ecg_signal[:20000])
###Output
_____no_output_____
###Markdown
Visually, the R-peaks seem to have been correctly identified. You can also explore searching for R-peaks using different methods provided by Neurokit [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_peaks>). Locate other waves (P, Q, S, T) and their onset and offset In [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>), Neurokit implements different methods to segment the QRS complexes. There are the derivative method and the other methods that make use of Wavelet to delineate the complexes. Peak method First, let's take a look at the 'peak' method and its output.
###Code
# Delineate the ECG signal
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000)
# Visualize the T-peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'],
waves_peak['ECG_P_Peaks'],
waves_peak['ECG_Q_Peaks'],
waves_peak['ECG_S_Peaks']], ecg_signal)
# Zooming into the first 3 R-peaks, with focus on T_peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'][:3],
waves_peak['ECG_P_Peaks'][:3],
waves_peak['ECG_Q_Peaks'][:3],
waves_peak['ECG_S_Peaks'][:3]], ecg_signal[:12500])
###Output
_____no_output_____
###Markdown
Visually, the 'peak' method seems to have correctly identified the P-peaks, Q-peaks, S-peaks and T-peaks for this signal, at least for the first few complexes. Well done, *peak*! However, it can be quite tiring to zoom in on each complex and inspect them one by one. To have a better overview of all complexes at once, you can make use of the `show` argument in [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>) as below.
###Code
# Delineate the ECG signal and visualizing all peaks of ECG complexes
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='peaks')
###Output
_____no_output_____
###Markdown
The 'peak' method is doing a glamorous job of identifying all the ECG peaks for this piece of ECG signal. On top of the above peaks, the peak method also identifies the wave boundaries, namely the onsets of P-peaks and the offsets of T-peaks. You can vary the argument `show_type` to specify the information you would like to plot. Let's visualize them below:
###Code
# Delineate the ECG signal and visualizing all P-peaks boundaries
signal_peak, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='bounds_P')
# Delineate the ECG signal and visualizing all T-peaks boundaries
signal_peaj, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='bounds_T')
###Output
_____no_output_____
###Markdown
Both the onsets of P-peaks and the offsets of T-peaks appear to have been correctly identified here. This information will be used to delineate cardiac phases in [ecg_phase()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_phase>). Let's next take a look at the continuous wavelet method. Continuous Wavelet Method (CWT)
###Code
# Delineate the ECG signal
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='all')
###Output
_____no_output_____
###Markdown
By specifying *'all'* in the `show_type` argument, you can plot all delineated information output by the CWT method. However, it could be hard to evaluate the accuracy of the delineated information with everything plotted together. Let's tease them apart!
###Code
# Visualize P-peaks and T-peaks
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='peaks')
# Visualize T-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_T')
# Visualize P-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_P')
# Visualize R-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_R')
###Output
_____no_output_____
###Markdown
*Unlike the peak method, the continuous wavelet method does not identify the Q-peaks and S-peaks. However, it provides more information regarding the boundaries of the waves.* Visually, apart from a few exceptions, the CWT method is doing a great job. However, the P-wave boundaries are not very clearly identified here. Last but not least, we will look at the third method in the Neurokit [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>) function: the discrete wavelet method. Discrete Wavelet Method (DWT)
###Code
# Delineate the ECG signal
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='all')
# Visualize P-peaks and T-peaks
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='peaks')
# visualize T-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_T')
# Visualize P-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_P')
# Visualize R-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_R')
###Output
_____no_output_____
###Markdown
Locate P, Q, S and T waves in ECG This example shows how to delineate the ECG peaks in Python using NeuroKit. This means detecting and locating all components of the QRS complex, including **P-peaks** and **T-peaks**, as well as their **onsets** and **offsets**, from an ECG signal. This example can be referenced by [citing the package](https://github.com/neuropsychology/NeuroKitcitation).
###Code
# Load NeuroKit and other useful packages
import neurokit2 as nk
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.rcParams['figure.figsize'] = [8, 5] # Bigger images
###Output
_____no_output_____
###Markdown
In this example, we will use a short segment of ECG signal with sampling rate of 3000 Hz. Find the R peaks
###Code
# Retrieve ECG data from data folder (sampling rate = 3000 Hz)
ecg_signal = nk.data(dataset="ecg_3000hz")['ECG']
# Extract R-peaks locations
_, rpeaks = nk.ecg_peaks(ecg_signal, sampling_rate=3000)
###Output
_____no_output_____
###Markdown
The [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_peaks>) function will return a dictionary containing the samples at which R-peaks are located. Let's visualize the R-peak locations in the signal to make sure they were detected correctly.
###Code
# Visualize R-peaks in ECG signal
plot = nk.events_plot(rpeaks['ECG_R_Peaks'], ecg_signal)
# Zooming into the first 5 R-peaks
plot = nk.events_plot(rpeaks['ECG_R_Peaks'][:5], ecg_signal[:20000])
###Output
_____no_output_____
###Markdown
Visually, the R-peaks seem to have been correctly identified. You can also explore searching for R-peaks using different methods provided by Neurokit [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_peaks>). Locate other waves (P, Q, S, T) and their onset and offset In [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>), Neurokit implements different methods to segment the QRS complexes. There are the derivative method and the other methods that make use of Wavelet to delineate the complexes. Peak method First, let's take a look at the 'peak' method and its output.
###Code
# Delineate the ECG signal
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak")
# Visualize the T-peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'],
waves_peak['ECG_P_Peaks'],
waves_peak['ECG_Q_Peaks'],
waves_peak['ECG_S_Peaks']], ecg_signal)
# Zooming into the first 3 R-peaks, with focus on T_peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'][:3],
waves_peak['ECG_P_Peaks'][:3],
waves_peak['ECG_Q_Peaks'][:3],
waves_peak['ECG_S_Peaks'][:3]], ecg_signal[:12500])
###Output
_____no_output_____
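###Markdown
As an optional sanity-check sketch: each list in `waves_peak` should contain one entry per detected R-peak, with NaN wherever a wave could not be delineated (an assumption worth verifying on real data).
###Code
# Compare the length of each wave list against the number of R-peaks and count missing entries
for key, values in waves_peak.items():
    missing = int(np.isnan(np.asarray(values, dtype=float)).sum())
    print(key, "-", len(values), "entries,", missing, "missing")
print("R-peaks detected:", len(rpeaks['ECG_R_Peaks']))
###Output
_____no_output_____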
###Markdown
Visually, the 'peak' method seems to have correctly identified the P-peaks, Q-peaks, S-peaks and T-peaks for this signal, at least for the first few complexes. Well done, *peak*! However, it can be quite tiring to zoom in on each complex and inspect them one by one. To have a better overview of all complexes at once, you can make use of the `show` argument in [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>) as below.
###Code
# Delineate the ECG signal and visualizing all peaks of ECG complexes
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak", show=True, show_type='peaks')
###Output
_____no_output_____
###Markdown
The 'peak' method is doing a glamorous job of identifying all the ECG peaks for this piece of ECG signal. On top of the above peaks, the peak method also identifies the wave boundaries, namely the onsets of P-peaks and the offsets of T-peaks. You can vary the argument `show_type` to specify the information you would like to plot. Let's visualize them below:
###Code
# Delineate the ECG signal and visualizing all P-peaks boundaries
signal_peak, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak", show=True, show_type='bounds_P')
# Delineate the ECG signal and visualizing all T-peaks boundaries
signal_peaj, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak", show=True, show_type='bounds_T')
###Output
_____no_output_____
###Markdown
Both the onsets of P-peaks and the offsets of T-peaks appear to have been correctly identified here. This information will be used to delineate cardiac phases in [ecg_phase()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_phase>). Let's next take a look at the continuous wavelet method. Continuous Wavelet Method (CWT)
###Code
# Delineate the ECG signal
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='all')
###Output
_____no_output_____
###Markdown
By specifying *'all'* in the `show_type` argument, you can plot all delineated information output by the CWT method. However, it could be hard to evaluate the accuracy of the delineated information with everything plotted together. Let's tease them apart!
###Code
# Visualize P-peaks and T-peaks
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='peaks')
# Visualize T-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_T')
# Visualize P-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_P')
# Visualize R-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_R')
###Output
_____no_output_____
###Markdown
*Unlike the peak method, the continuous wavelet method does not identify the Q-peaks and S-peaks. However, it provides more information regarding the boundaries of the waves.* Visually, apart from a few exceptions, the CWT method is doing a great job. However, the P-wave boundaries are not very clearly identified here. Last but not least, we will look at the third method in the Neurokit [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.ecg_delineate>) function: the discrete wavelet method. Discrete Wavelet Method (DWT) - default method
###Code
# Delineate the ECG signal
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='all')
# Visualize P-peaks and T-peaks
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='peaks')
# visualize T-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_T')
# Visualize P-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_P')
# Visualize R-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_R')
###Output
_____no_output_____ |
Paragraph Annotation/2 - Binomial classification.ipynb | ###Markdown
1. Reading the text data
###Code
text_data_sentence = pd.read_csv('./Files/textdatanew.csv', encoding='ISO-8859-1')
#text_data_sentence.head(5)
text_data_sentence.head(5)
###Output
_____no_output_____
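###Markdown
Note: no import cell appears in this copy of the notebook. The cell below is a sketch of the imports the cells in this notebook appear to rely on, inferred from the calls they make rather than taken from the original source.
###Code
# Imports inferred from usage in the cells below
import numpy as np
import pandas as pd
from collections import Counter
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import f1_score
###Output
_____no_output_____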
###Markdown
2. Reading the text features
###Code
text_features = pd.read_csv("text_features.csv", encoding='ISO-8859-1')
text_features.head(2)
###Output
_____no_output_____
###Markdown
3. Reading the Response file
###Code
bess_tags = pd.read_csv('CBW_Bess_tags_final2.csv')
bess_tags.head()
###Output
_____no_output_____
###Markdown
4. Preprocessing BESS Response file
###Code
bess_reponse = bess_tags.loc[:,['Content','Event','Type','para no','biographyID','collectionID']]
bess_reponse= bess_reponse.fillna(' ')
### Creating a new column for the response variable
bess_reponse.loc[:,'Response'] = bess_reponse.loc[:,['Content','Event']].apply(lambda x: '_'.join(x),axis = 1)
### Concatenating columns to create new columns
bess_reponse['Bio_col_id'] = bess_reponse['biographyID'] +"_" + bess_reponse['collectionID']
bess_reponse['Bio_col_para_id'] = bess_reponse['Bio_col_id'] +"_" + bess_reponse['para no'].astype('str')
###Output
_____no_output_____
###Markdown
4.1 Selecting the top BESS responses for events based on the TF-IDF method
###Code
doc_count = pd.DataFrame(bess_reponse[bess_reponse.Type.isin(['Event'])].\
groupby(['Response'])['Bio_col_id'].apply(lambda x: len(np.unique(x))))
term_freq = pd.DataFrame(bess_reponse[bess_reponse.Type.isin(['Event'])].\
groupby(['Response'])['Bio_col_id'].count())
total_docs = len(bess_reponse['Bio_col_id'].unique())
###Output
_____no_output_____
###Markdown
4.2 Grouping by the term frequencies to get the top values
###Code
group_by_counts = pd.concat([term_freq,doc_count],axis = 1)
group_by_counts.columns = ['Term_freq','Doc_freq']
group_by_counts['tf_idf'] = pd.DataFrame(group_by_counts['Term_freq'] * np.log(total_docs/group_by_counts['Doc_freq']) )
group_by_counts.sort_values(['tf_idf'],ascending=False)[0:10]
###Output
_____no_output_____
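###Markdown
For reference, the `tf_idf` column computed above is the classic term-frequency/inverse-document-frequency weighting, applied here to BESS responses rather than words: $$\mathrm{tfidf}(r) = \mathrm{tf}(r)\cdot\log\frac{N}{\mathrm{df}(r)},$$ where $\mathrm{tf}(r)$ is how often response $r$ occurs, $\mathrm{df}(r)$ is the number of biographies it occurs in, and $N$ is the total number of biographies.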
###Markdown
5. Preparing the Final Response File 5.1 Getting a distribution of all the responses
###Code
bio_response = pd.DataFrame(bess_reponse.groupby(['Response'])['Bio_col_para_id'].apply(lambda x: len(np.unique(x))))
bio_response.sort_values(['Bio_col_para_id'],ascending=False).head(10)
###Output
_____no_output_____
###Markdown
5.2 Selecting the response to Analyse
###Code
reponse_required = 'lover, male, named_agentType'
reponse_required_to_merge = bess_reponse[bess_reponse.Response == reponse_required]
### Merging the response with the text data file
text_data_merge = pd.merge(text_data_sentence, reponse_required_to_merge.drop_duplicates(),\
how = 'left', left_on=['CollectionID','BiographyID','ParagraphNo'],
right_on=['collectionID','biographyID','para no'])
final_data_frame = text_data_merge.loc[:,['ParagraphText','Response']]
final_data_frame['Response_binary'] = np.where(final_data_frame.Response.isnull(),0,1)
final_data_frame.head()
final_data_frame.Response_binary.value_counts()
###Output
_____no_output_____
###Markdown
6. Text Data - Preprocessing on the Final Response file 6.1 Getting stop words High Frequency and Low Frequency word list
###Code
tokenized_para = final_data_frame.ParagraphText.apply(word_tokenize)
all_sent = [words for each_sent in tokenized_para for words in each_sent]
count_dict = Counter(all_sent)
high_freq_words = [word for (word,count) in count_dict.most_common(500)]
#### Getting Low Frequency words - based on a threshold
less_freq_words = []
threshold = 5
for k,v in count_dict.items():
if v < threshold:
less_freq_words.append(k)
stop_words = stopwords.words('english')
stop_words.extend(high_freq_words)
stop_words.extend(less_freq_words)
###Output
_____no_output_____
###Markdown
6.2 Bag of Words
###Code
bow_model = CountVectorizer(ngram_range= (1,2),stop_words=stop_words)
Para_text_bow = bow_model.fit_transform(final_data_frame.ParagraphText)
features = bow_model.get_feature_names()
###Output
_____no_output_____
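###Markdown
A quick optional sketch to sanity-check the bag-of-words step: `CountVectorizer` returns a sparse document-term matrix, so its shape, vocabulary size and sparsity are cheap to inspect.
###Code
# Documents x n-gram features, plus how many entries are actually non-zero
print("matrix shape:", Para_text_bow.shape)
print("vocabulary size:", len(features))
print("non-zero entries:", Para_text_bow.nnz)
###Output
_____no_output_____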
###Markdown
7. Model Building 7.1 Splitting data into train and test
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(Para_text_bow ,final_data_frame.Response_binary,
test_size = 0.3, random_state = 0)
# features = bow_model.get_feature_names()
# features.extend(['Sentiment'])
# features.extend(emotional_features.columns.values)
###Output
_____no_output_____
###Markdown
7.2 Machine Learning Models
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
#knn_model = KNeighborsClassifier(n_neighbors= 3, p = 1.5)
#rf_model = RandomForestClassifier(n_estimators= 50)
#rf_model = LogisticRegression()
rf_model = SVC(C = 10,kernel = 'poly')
rf_model.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
7.3 Reviewing the response Almost all the values are predicted as 0. Looking at the confusion matrix, even the positive predictions are not correct
###Code
from sklearn.metrics import confusion_matrix
# SVC.predict already returns class labels, so no argmax post-processing is needed
preds = pd.Series(rf_model.predict(X_test))
confusion_matrix(y_test, preds)
y_test_ = y_test.reset_index(drop = True)
### Evaluation ###
preds = pd.DataFrame(rf_model.predict(X_test))[0]
print("Accuracy: ",(preds == y_test_).sum()/len(y_test_))
print("F1 score: ",f1_score(y_test_,preds))
# feature_importances_ only exists for tree-based models (e.g. RandomForestClassifier);
# guard the lookup so this cell also runs when the SVC defined above is used
if hasattr(rf_model, 'feature_importances_'):
    feature_importances = pd.DataFrame(rf_model.feature_importances_,
                                       index=features,
                                       columns=['importance']).sort_values('importance', ascending=False)
    display(feature_importances.head(20))
###Output
_____no_output_____ |
gs_quant/content/events/00_gsquant_meets_markets/03_esg_basket_portfolio_optimisation/quants_meet_markets_msci.ipynb | ###Markdown
GS Quant Meets Markets x MSCI Step 1: Import Modules
###Code
# Import modules
from typing import List
from gs_quant.api.utils import ThreadPoolManager
from gs_quant.data import Dataset
from gs_quant.api.gs.assets import GsAssetApi
from gs_quant.models.risk_model import FactorRiskModel, DataAssetsRequest
from functools import partial
from gs_quant.markets.baskets import Basket
from gs_quant.markets.indices_utils import ReturnType
from gs_quant.markets.position_set import PositionSet
from gs_quant.session import Environment, GsSession
import matplotlib.pyplot as plt
import datetime as dt
from dateutil.relativedelta import relativedelta
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2: Authenticate
###Code
# Initialize session -- for external users, input client id and secret below
client = None
secret = None
GsSession.use(Environment.PROD, client_id=client, client_secret=secret, scopes='')
###Output
_____no_output_____
###Markdown
Step 3: Implement basic functions to fetch coverage universe, ratings, factor & liquidity data
###Code
# Initialize functions
def batch_liquidity(dataset_id: str, asset_ids: list, day: dt.date, size: int=200) -> pd.DataFrame:
data = Dataset(dataset_id)
tasks = [partial(data.get_data, day, day, assetId=asset_ids[i:i+size]) for i in range(0, len(asset_ids), size)]
results = ThreadPoolManager.run_async(tasks)
return pd.concat(results)
def batch_ratings(dataset_id: str, gsid_ids: list, day: dt.date, fallback_month, filter_value: str= None, size: int=5000) -> pd.DataFrame:
data = Dataset(dataset_id)
    start_date = day - relativedelta(months=fallback_month)  # go back 'fallback_month' months ('months', not 'month')
tasks = [partial(data.get_data, start_date=start_date, gsid=gsid_ids[i:i+size], rating=filter_value) for i in range(0, len(gsid_ids), size)] if filter_value else \
[partial(data.get_data, start_date=start_date, gsid=gsid_ids[i:i + size]) for i in range(0, len(gsid_ids), size)]
results = ThreadPoolManager.run_async(tasks)
return pd.concat(results)
def batch_asset_request(day: dt.date, gsids_list: list, limit: int=1000) -> list:
date_time = dt.datetime.combine(day, dt.datetime.min.time())
fields = ['gsid', 'bbid', 'id', 'delisted', 'assetClassificationsIsPrimary']
tasks = [partial(GsAssetApi.get_many_assets_data, gsid=gsids_list[i:i + limit], as_of=date_time, limit=limit*10, fields=fields) for i in range(0, len(gsids_list), limit)]
results = ThreadPoolManager.run_async(tasks)
return [item for sublist in results for item in sublist]
def get_universe_with_xrefs(day: dt.date, model: FactorRiskModel) -> pd.DataFrame:
print(f'---------Getting risk {model.id} coverage universe on {day}------------')
# get coverage universe on date
universe = model.get_asset_universe(day, day).iloc[:, 0].tolist()
print(f'{len(universe)} assets in {model.id} on {day} that map to gsids')
# need to map from id -> asset_identifier on date
asset_identifiers = pd.DataFrame(batch_asset_request(day, universe))
print(f'{len(asset_identifiers)} assets found')
asset_identifiers = asset_identifiers[asset_identifiers['assetClassificationsIsPrimary'] != 'false']
print(f'{len(asset_identifiers)} asset xrefs after is not primary dropped')
asset_identifiers = asset_identifiers[asset_identifiers['delisted'] != 'yes']
print(f'{len(asset_identifiers)} asset xrefs after delisted assets are dropped')
asset_identifiers = asset_identifiers[['gsid', 'bbid', 'id']].set_index('gsid')
asset_identifiers = asset_identifiers[~asset_identifiers.index.duplicated(keep='first')] # remove duplicate gsids
asset_identifiers.reset_index(inplace=True)
print(f'{len(asset_identifiers)} positions after duplicate gsids removed')
return pd.DataFrame(asset_identifiers).set_index('id')
def get_and_filter_ratings(day: dt.date, gsid_list: List[str], filter_value: str = None) -> list:
# get ratings of assets from the ratings dataset and only keep 'Buy' ratings
print(f'---------Filtering coverage universe by rating: {filter_value}------------')
fallback_month = 3
ratings_df = batch_ratings('RATINGS_CL', gsid_list, day, fallback_month, filter_value)
df_by_asset = [ratings_df[ratings_df['gsid'] == asset] for asset in set(ratings_df['gsid'].tolist())]
most_recent_rating = pd.concat([df.iloc[-1:] for df in df_by_asset])
print(f'{len(most_recent_rating)} unique assets with ratings after filtering applied')
return list(most_recent_rating['gsid'].unique())
def get_and_filter_factor_exposures(day: dt.date, identifier_list: List[str], factor_model: FactorRiskModel, factors: List[str]= [] , filter_floor: int = 0.5) -> pd.DataFrame:
# get factor info and filter by factors
print(f'---------Filtering coverage universe by factors: {factors}------------')
available_factors = factor_model.get_factor_data(day).set_index('identifier')
req = DataAssetsRequest('gsid', identifier_list)
factor_exposures = factor_model.get_universe_factor_exposure(day, day, assets=req).fillna(0)
factor_exposures.columns = [available_factors.loc[x]['name'] for x in factor_exposures.columns]
factor_exposures = factor_exposures.droplevel(1)
print(f'{len(factor_exposures)} factor exposures available')
for factor in factors:
factor_exposures = factor_exposures[factor_exposures[factor] >= filter_floor]
print(f'{len(factor_exposures)} factor exposures returned after filtering by {factor} with floor exposure {filter_floor}')
return factor_exposures
def get_and_filter_liquidity(day: dt.date, asset_ids: List[str], filter_floor: int = 0) -> pd.DataFrame:
# get mdv22Day liquidity info and take assets above average adv
print(f'---------Filtering coverage universe by liquidity value: {filter_floor}------------')
liquidity = batch_liquidity('GSEOD', asset_ids, day).set_index("assetId")
print(f'{len(liquidity)} liquidity data available for requested universe')
if filter_floor:
liquidity = liquidity[liquidity['mdv22Day'] >= filter_floor]
print(f'{len(liquidity)} unique assets with liquidity data returned after filtering')
return liquidity
def backtest_strategy(day: dt.date, position_set: List[dict], risk_model_id: str):
# make a request to pretrade liquidity to get backtest timeseries
print(f'---------Backtesting strategy------------')
query = {"currency":"USD",
"notional": 1000000,
"date": day.strftime("%Y-%m-%d"),
"positions":position_set,
"participationRate":0.1,
"riskModel":risk_model_id,
"timeSeriesBenchmarkIds":[],
"measures":["Time Series Data"]}
result = GsSession.current._post('/risk/liquidity', query)
result = result.get("timeseriesData")
return result
def graph_df_list(df_list, title):
for df in df_list:
plt.plot(df[0], label=df[1])
plt.legend(title='Measures')
plt.xlabel('Date')
plt.title(title)
plt.show()
###Output
_____no_output_____
###Markdown
Step 4: Strategy Implementation Proposed Methodology: - Starting universe: Chosen risk model coverage universe - High Conviction names: Retain GS "Buy" ratings only - High ESG names: Retain high ESG scores only, using BARRA GEMLTL ESG model - High Profitability names: Retain high Profitability scores only, using BARRA GEMLTL ESG model - Liquidity adjustment: Removing the tail of illiquid names - Weighting: MDV-based weighting
###Code
# Get risk model and available style factors
start = dt.datetime.now()
# Get risk model
model_id = "BARRA_GEMLTL_ESG"
factor_model = FactorRiskModel.get(model_id)
# Get last date of risk model data
date = factor_model.get_most_recent_date_from_calendar() - dt.timedelta(1)
print(f"-----Available style factors for model {model_id}-----")
factor_data = factor_model.get_factor_data(date, date)
factor_data = factor_data[factor_data['factorCategoryId'] == 'RI']
print(factor_data['name'])
# Get universe
mqid_to_id = get_universe_with_xrefs(date, factor_model)
# Get available ratings for past 3 months and return most recent ratings data per asset
ratings_filter = 'Buy'
ratings_universe = get_and_filter_ratings(date, mqid_to_id['gsid'].tolist(), filter_value=ratings_filter)
# Pass in factors to filter by
factors = ['ESG', 'Profitability']
filter_floor = 0.5
exposures = get_and_filter_factor_exposures(date, ratings_universe, factor_model, factors=factors, filter_floor=filter_floor)
ids = mqid_to_id.reset_index().set_index("gsid")
exposures = exposures.join(ids, how='inner')
# Filter by liquidity, which takes in the MQ Id
asset_ids = exposures['id'].tolist()
liquidity_floor = 1000000
liquidity = get_and_filter_liquidity(date, asset_ids, filter_floor=liquidity_floor)
liquidity = liquidity.join(mqid_to_id, how='inner')
# Get weights as ADV / total ADV
total_adv = sum(list(liquidity['mdv22Day']))
liquidity['weights'] = liquidity['mdv22Day'] / total_adv
###Output
_____no_output_____
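###Markdown
An optional sketch to sanity-check the MDV-based weighting before building the basket: the weights should sum to one, and the largest positions are worth eyeballing.
###Code
# Weights should sum to ~1.0; show the ten largest positions by weight
print("sum of weights:", liquidity['weights'].sum())
liquidity.sort_values('weights', ascending=False)[['bbid', 'weights']].head(10)
###Output
_____no_output_____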
###Markdown
Step 5: Backtest Strategy
###Code
# Backtest composition
backtest_set = [{'assetId': index, "weight": row['weights']} for index, row in liquidity.iterrows()]
position_set = [{'bbid': row['bbid'], "weight": row['weights']} for index, row in liquidity.iterrows()]
print("Position set for basket create: ")
print(pd.DataFrame(position_set))
print(f'Total time to build position set with requested parameters {dt.datetime.now() - start}')
backtest = backtest_strategy(date, backtest_set, model_id)
print("Available measures to plot for backtested strategy: ")
measures = list(backtest[0].keys())
measures.remove("name")
print(measures)
# Graph Normalized Performance
np = ['normalizedPerformance']
series_to_plot = []
for measure in np:
timeseries = backtest[0].get(measure)
timeseries = {dt.datetime.strptime(data[0], "%Y-%m-%d"): data[1] for data in timeseries}
timeseries = (pd.Series(timeseries), measure)
series_to_plot.append(timeseries)
graph_df_list(series_to_plot, "Normalized Performance")
# Plot many measures
measures.remove("netExposure")
measures.remove("cumulativePnl")
measures.remove("maxDrawdown")
series_to_plot = []
for measure in measures:
timeseries = backtest[0].get(measure)
timeseries = {dt.datetime.strptime(data[0], "%Y-%m-%d"): data[1] for data in timeseries}
timeseries = (pd.Series(timeseries), measure)
series_to_plot.append(timeseries)
graph_df_list(series_to_plot, "Backtested Strategy Measures")
###Output
_____no_output_____
###Markdown
Step 6: Basket Creation
###Code
# Create basket with positions
my_basket = Basket()
my_basket.name = 'Basket Name'
my_basket.ticker = 'Basket Ticker'
my_basket.currency = 'USD'
my_basket.return_type = ReturnType.PRICE_RETURN
my_basket.publish_to_bloomberg = True
my_basket.publish_to_reuters = True
my_basket.publish_to_factset = False
data=[]
for row in position_set:
data.append([row['bbid'], row['weight']])
positions_df = pd.DataFrame(data, columns=['identifier', 'weight'])
position_set = PositionSet.from_frame(positions_df)
my_basket.position_set = position_set
my_basket.get_details() # we highly recommend verifying the basket state looks correct before calling create!
# Publish basket
my_basket.create()
my_basket.poll_status(timeout=10000, step=20) # optional: constantly checks create status until report succeeds, fails, or the poll times out (this example checks every 20 seconds for 2 minutes)
my_basket.get_url() # will return a url to your Marquee basket page ex. https://marquee.gs.com/s/products/MA9B9TEMQ2RW16K9/summary
###Output
_____no_output_____
###Markdown
GS Quant Meets Markets x MSCI Step 1: Import Modules
###Code
# Import modules
from typing import List
from gs_quant.api.utils import ThreadPoolManager
from gs_quant.data import Dataset
from gs_quant.api.gs.assets import GsAssetApi
from gs_quant.models.risk_model import FactorRiskModel
from gs_quant.target.risk_models import DataAssetsRequest
from functools import partial
from gs_quant.markets.baskets import Basket
from gs_quant.markets.indices_utils import ReturnType
from gs_quant.markets.position_set import PositionSet
from gs_quant.session import Environment, GsSession
import matplotlib.pyplot as plt
import datetime as dt
from dateutil.relativedelta import relativedelta
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2: Authenticate
###Code
# Initialize session -- for external users, input client id and secret below
client = None
secret = None
GsSession.use(Environment.PROD, client_id=client, client_secret=secret, scopes='')
###Output
_____no_output_____
###Markdown
Step 3: Implement basic functions to fetch coverage universe, ratings, factor & liquidity data
###Code
# Initialize functions
def batch_liquidity(dataset_id: str, asset_ids: list, day: dt.date, size: int=200) -> pd.DataFrame:
data = Dataset(dataset_id)
tasks = [partial(data.get_data, day, day, assetId=asset_ids[i:i+size]) for i in range(0, len(asset_ids), size)]
results = ThreadPoolManager.run_async(tasks)
return pd.concat(results)
def batch_ratings(dataset_id: str, gsid_ids: list, day: dt.date, fallback_month, filter_value: str= None, size: int=5000) -> pd.DataFrame:
data = Dataset(dataset_id)
    start_date = day - relativedelta(months=fallback_month)  # go back 'fallback_month' months ('months', not 'month')
tasks = [partial(data.get_data, start_date=start_date, gsid=gsid_ids[i:i+size], rating=filter_value) for i in range(0, len(gsid_ids), size)] if filter_value else \
[partial(data.get_data, start_date=start_date, gsid=gsid_ids[i:i + size]) for i in range(0, len(gsid_ids), size)]
results = ThreadPoolManager.run_async(tasks)
return pd.concat(results)
def batch_asset_request(day: dt.date, gsids_list: list, limit: int=1000) -> list:
date_time = dt.datetime.combine(day, dt.datetime.min.time())
fields = ['gsid', 'bbid', 'id', 'delisted', 'assetClassificationsIsPrimary']
tasks = [partial(GsAssetApi.get_many_assets_data, gsid=gsids_list[i:i + limit], as_of=date_time, limit=limit*10, fields=fields) for i in range(0, len(gsids_list), limit)]
results = ThreadPoolManager.run_async(tasks)
return [item for sublist in results for item in sublist]
def get_universe_with_xrefs(day: dt.date, model: FactorRiskModel) -> pd.DataFrame:
print(f'---------Getting risk {model.id} coverage universe on {day}------------')
# get coverage universe on date
universe = model.get_asset_universe(day, day).iloc[:, 0].tolist()
print(f'{len(universe)} assets in {model.id} on {day} that map to gsids')
# need to map from id -> asset_identifier on date
asset_identifiers = pd.DataFrame(batch_asset_request(day, universe))
print(f'{len(asset_identifiers)} assets found')
asset_identifiers = asset_identifiers[asset_identifiers['assetClassificationsIsPrimary'] != 'false']
print(f'{len(asset_identifiers)} asset xrefs after is not primary dropped')
asset_identifiers = asset_identifiers[asset_identifiers['delisted'] != 'yes']
print(f'{len(asset_identifiers)} asset xrefs after delisted assets are dropped')
asset_identifiers = asset_identifiers[['gsid', 'bbid', 'id']].set_index('gsid')
asset_identifiers = asset_identifiers[~asset_identifiers.index.duplicated(keep='first')] # remove duplicate gsids
asset_identifiers.reset_index(inplace=True)
print(f'{len(asset_identifiers)} positions after duplicate gsids removed')
return pd.DataFrame(asset_identifiers).set_index('id')
def get_and_filter_ratings(day: dt.date, gsid_list: List[str], filter_value: str = None) -> list:
# get ratings of assets from the ratings dataset and only keep 'Buy' ratings
print(f'---------Filtering coverage universe by rating: {filter_value}------------')
fallback_month = 3
ratings_df = batch_ratings('RATINGS_CL', gsid_list, day, fallback_month, filter_value)
df_by_asset = [ratings_df[ratings_df['gsid'] == asset] for asset in set(ratings_df['gsid'].tolist())]
most_recent_rating = pd.concat([df.iloc[-1:] for df in df_by_asset])
print(f'{len(most_recent_rating)} unique assets with ratings after filtering applied')
return list(most_recent_rating['gsid'].unique())
def get_and_filter_factor_exposures(day: dt.date, identifier_list: List[str], factor_model: FactorRiskModel, factors: List[str] = [], filter_floor: float = 0.5) -> pd.DataFrame:
# get factor info and filter by factors
print(f'---------Filtering coverage universe by factors: {factors}------------')
available_factors = factor_model.get_factor_data(day).set_index('identifier')
req = DataAssetsRequest('gsid', identifier_list)
factor_exposures = factor_model.get_universe_factor_exposure(day, day, assets=req).fillna(0)
factor_exposures.columns = [available_factors.loc[x]['name'] for x in factor_exposures.columns]
factor_exposures = factor_exposures.droplevel(1)
print(f'{len(factor_exposures)} factor exposures available')
for factor in factors:
factor_exposures = factor_exposures[factor_exposures[factor] >= filter_floor]
print(f'{len(factor_exposures)} factor exposures returned after filtering by {factor} with floor exposure {filter_floor}')
return factor_exposures
def get_and_filter_liquidity(day: dt.date, asset_ids: List[str], filter_floor: int = 0) -> pd.DataFrame:
# get mdv22Day liquidity info and take assets above average adv
print(f'---------Filtering coverage universe by liquidity value: {filter_floor}------------')
liquidity = batch_liquidity('GSEOD', asset_ids, day).set_index("assetId")
print(f'{len(liquidity)} liquidity data available for requested universe')
if filter_floor:
liquidity = liquidity[liquidity['mdv22Day'] >= filter_floor]
print(f'{len(liquidity)} unique assets with liquidity data returned after filtering')
return liquidity
def backtest_strategy(day: dt.date, position_set: List[dict], risk_model_id: str):
# make a request to pretrade liquidity to get backtest timeseries
print(f'---------Backtesting strategy------------')
query = {"currency":"USD",
"notional": 1000000,
"date": day.strftime("%Y-%m-%d"),
"positions":position_set,
"participationRate":0.1,
"riskModel":risk_model_id,
"timeSeriesBenchmarkIds":[],
"measures":["Time Series Data"]}
result = GsSession.current._post('/risk/liquidity', query)
result = result.get("timeseriesData")
return result
def graph_df_list(df_list, title):
for df in df_list:
plt.plot(df[0], label=df[1])
plt.legend(title='Measures')
plt.xlabel('Date')
plt.title(title)
plt.show()
###Output
_____no_output_____
###Markdown
Step 4: Strategy Implementation
Proposed Methodology
- Starting universe: Chosen risk model coverage universe
- High Conviction names: Retain GS "Buy" ratings only
- High ESG names: Retain high ESG scores only, using BARRA GEMLTL ESG model
- High Profitability names: Retain high Profitability scores only, using BARRA GEMLTL ESG model
- Liquidity adjustment: Removing the tail of illiquid names
- Weighting: MDV-based weighting
###Code
# Get risk model and available style factors
start = dt.datetime.now()
# Get risk model
model_id = "BARRA_GEMLTL_ESG"
factor_model = FactorRiskModel.get(model_id)
# Get last date of risk model data
date = factor_model.get_most_recent_date_from_calendar() - dt.timedelta(1)
print(f"-----Available style factors for model {model_id}-----")
factor_data = factor_model.get_factor_data(date, date)
factor_data = factor_data[factor_data['factorCategoryId'] == 'RI']
print(factor_data['name'])
# Get universe
mqid_to_id = get_universe_with_xrefs(date, factor_model)
# Get available ratings for past 3 months and return most recent ratings data per asset
ratings_filter = 'Buy'
ratings_universe = get_and_filter_ratings(date, mqid_to_id['gsid'].tolist(), filter_value=ratings_filter)
# Pass in factors to filter by
factors = ['ESG', 'Profitability']
filter_floor = 0.5
exposures = get_and_filter_factor_exposures(date, ratings_universe, factor_model, factors=factors, filter_floor=filter_floor)
ids = mqid_to_id.reset_index().set_index("gsid")
exposures = exposures.join(ids, how='inner')
# Filter by liquidity, which takes in the MQ Id
asset_ids = exposures['id'].tolist()
liquidity_floor = 1000000
liquidity = get_and_filter_liquidity(date, asset_ids, filter_floor=liquidity_floor)
liquidity = liquidity.join(mqid_to_id, how='inner')
# Get weights as ADV / total ADV
total_adv = sum(list(liquidity['mdv22Day']))
liquidity['weights'] = liquidity['mdv22Day'] / total_adv
###Output
_____no_output_____
###Markdown
Step 5: Backtest Strategy
###Code
# Backtest composition
backtest_set = [{'assetId': index, "weight": row['weights']} for index, row in liquidity.iterrows()]
position_set = [{'bbid': row['bbid'], "weight": row['weights']} for index, row in liquidity.iterrows()]
print("Position set for basket create: ")
print(pd.DataFrame(position_set))
print(f'Total time to build position set with requested parameters {dt.datetime.now() - start}')
backtest = backtest_strategy(date, backtest_set, model_id)
print("Available measures to plot for backtested strategy: ")
measures = list(backtest[0].keys())
measures.remove("name")
print(measures)
# Graph Normalized Performance
np = ['normalizedPerformance']
series_to_plot = []
for measure in np:
timeseries = backtest[0].get(measure)
timeseries = {dt.datetime.strptime(data[0], "%Y-%m-%d"): data[1] for data in timeseries}
timeseries = (pd.Series(timeseries), measure)
series_to_plot.append(timeseries)
graph_df_list(series_to_plot, "Normalized Performance")
# Plot many measures
measures.remove("netExposure")
measures.remove("cumulativePnl")
measures.remove("maxDrawdown")
series_to_plot = []
for measure in measures:
timeseries = backtest[0].get(measure)
timeseries = {dt.datetime.strptime(data[0], "%Y-%m-%d"): data[1] for data in timeseries}
timeseries = (pd.Series(timeseries), measure)
series_to_plot.append(timeseries)
graph_df_list(series_to_plot, "Backtested Strategy Measures")
###Output
_____no_output_____
###Markdown
Step 6: Basket Creation
###Code
# Create basket with positions
my_basket = Basket()
my_basket.name = 'Basket Name'
my_basket.ticker = 'Basket Ticker'
my_basket.currency = 'USD'
my_basket.return_type = ReturnType.PRICE_RETURN
my_basket.publish_to_bloomberg = True
my_basket.publish_to_reuters = True
my_basket.publish_to_factset = False
data=[]
for row in position_set:
data.append([row['bbid'], row['weight']])
positions_df = pd.DataFrame(data, columns=['identifier', 'weight'])
position_set = PositionSet.from_frame(positions_df)
my_basket.position_set = position_set
my_basket.get_details() # we highly recommend verifying the basket state looks correct before calling create!
# Publish basket
my_basket.create()
my_basket.poll_status(timeout=10000, step=20) # optional: polls the create status every 20 seconds until the report succeeds, fails, or the timeout is reached
my_basket.get_url() # will return a url to your Marquee basket page ex. https://marquee.gs.com/s/products/MA9B9TEMQ2RW16K9/summary
###Output
_____no_output_____ |
Sandpit/graph_dfs.ipynb | ###Markdown
Graph Depth First Search
In this exercise, you'll see how to do a depth first search on a graph. To start, let's create a graph class in Python.
###Code
class GraphNode(object):
def __init__(self, val):
self.value = val
self.children = []
def add_child(self,new_node):
self.children.append(new_node)
def remove_child(self,del_node):
if del_node in self.children:
self.children.remove(del_node)
class Graph(object):
def __init__(self,node_list):
self.nodes = node_list
def add_edge(self,node1,node2):
if(node1 in self.nodes and node2 in self.nodes):
node1.add_child(node2)
node2.add_child(node1)
def remove_edge(self,node1,node2):
if(node1 in self.nodes and node2 in self.nodes):
node1.remove_child(node2)
node2.remove_child(node1)
###Output
_____no_output_____
###Markdown
Now let's create the graph.
###Code
nodeG = GraphNode('G')
nodeR = GraphNode('R')
nodeA = GraphNode('A')
nodeP = GraphNode('P')
nodeH = GraphNode('H')
nodeS = GraphNode('S')
graph1 = Graph([nodeS,nodeH,nodeG,nodeP,nodeR,nodeA] )
graph1.add_edge(nodeG,nodeR)
graph1.add_edge(nodeA,nodeR)
graph1.add_edge(nodeA,nodeG)
graph1.add_edge(nodeR,nodeP)
graph1.add_edge(nodeH,nodeG)
graph1.add_edge(nodeH,nodeP)
graph1.add_edge(nodeS,nodeR)
###Output
_____no_output_____
###Markdown
Implement DFS
Using what you know about DFS for trees, apply this to graphs. Implement the `dfs_search` to return the `GraphNode` with the value `search_value` starting at the `root_node`.
###Code
def dfs_search(root_node, search_value):
visited = set()
frontier = [root_node]
while len(frontier) > 0:
current_node = frontier.pop()  # pop from the end of the list (LIFO stack) so the traversal is depth-first; pop(0) would make it breadth-first
visited.add(current_node)
if current_node.value == search_value:
return current_node
for child in current_node.children:
if (child not in visited) and (child not in frontier):
frontier.append(child)
###Output
_____no_output_____
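###Markdown
An alternative sketch (not part of the original exercise): the same depth-first traversal can be written recursively, letting the Python call stack play the role of the explicit stack above. The helper name `dfs_search_recursive` is ours.
###Code
def dfs_search_recursive(root_node, search_value, visited=None):
    # the call stack replaces the explicit LIFO frontier used in dfs_search
    if visited is None:
        visited = set()
    visited.add(root_node)
    if root_node.value == search_value:
        return root_node
    for child in root_node.children:
        if child not in visited:
            found = dfs_search_recursive(child, search_value, visited)
            if found is not None:
                return found
    return None
assert nodeA == dfs_search_recursive(nodeS, 'A')
###Output
_____no_output_____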
###Markdown
Tests
###Code
assert nodeA == dfs_search(nodeS, 'A')
assert nodeS == dfs_search(nodeP, 'S')
assert nodeR == dfs_search(nodeH, 'R')
###Output
_____no_output_____ |
tutorials/1-Introduction/China_A_share_market_tushare/China_A_share_market_tushare.ipynb | ###Markdown
Quantitative trading in China A stock market with FinRL
Install FinRL
###Code
!pip install git+https://github.com/AI4Finance-Foundation/FinRL.git
###Output
_____no_output_____
###Markdown
Install other libraries
###Code
!pip install stockstats
!pip install tushare
#install talib
!wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz
!tar xvzf ta-lib-0.4.0-src.tar.gz
import os
os.chdir('ta-lib')
!./configure --prefix=/usr
!make
!make install
#!sudo make install # sometimes it needs root
os.chdir('../')
!pip install TA-Lib
%cd /
!git clone https://github.com/AI4Finance-Foundation/FinRL-Meta
%cd /FinRL-Meta/
###Output
_____no_output_____ |
imdb_embedding.ipynb | ###Markdown
Data preprocessing
###Code
len(x_train[0]), len(x_train[50]), len(x_train[500]), len(x_train[1000])
# all samples must have the same length before the model can train on them; reviews vary in length, so the raw sequences differ in size
# pad_sequences practice
sequence = [[1],
[2, 3],
[4, 5, 6]]
tf.keras.preprocessing.sequence.pad_sequences(sequence, maxlen=2, truncating='post')
pad_x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500)
len(pad_x_train[500]), pad_x_train[500]
# the sample that previously had length 60 is now padded to length 500
np.unique(y_train)
# labels contain only 0 and 1 (binary) -> a single Dense output unit
###Output
_____no_output_____
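###Markdown
Optional sanity check (a sketch, assuming the data was loaded with `tf.keras.datasets.imdb.load_data(num_words=10000)`): the integer sequences can be decoded back into words with the IMDB word index. Indices are offset by 3 because 0, 1 and 2 are reserved for padding, start-of-sequence and unknown tokens.
###Code
word_index = tf.keras.datasets.imdb.get_word_index()
index_to_word = {index + 3: word for word, index in word_index.items()}  # shift by the reserved offset
decoded_review = ' '.join(index_to_word.get(i, '?') for i in x_train[0])
print(decoded_review[:200])  # first part of the first training review
###Output
_____no_output_____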
###Markdown
Make model
###Code
model = tf.keras.models.Sequential()
# Input Layer
model.add(tf.keras.layers.Embedding(input_dim=10000, output_dim=24, input_length=500))
# input_dim: vocabulary size the data was loaded with -> num_words
# output_dim: dimension of the embedding vectors -> powers of 2 work well
# input_length: length of the padded input sequences
# Hidden Layer
model.add(tf.keras.layers.Flatten())
# Output Layer
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
# y_train holds 0/1 labels (binary) -> sigmoid output activation with binary_crossentropy loss
# Compile settings (optimizer / loss / metrics)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# the choice of loss function depends on the type of output
model.summary()
# the embedding layer output has shape (500, 24)
hist = model.fit(pad_x_train, y_train, epochs=50, validation_split=0.3, batch_size=32)
###Output
Epoch 1/50
547/547 [==============================] - 4s 8ms/step - loss: 0.0061 - accuracy: 0.9999 - val_loss: 0.3947 - val_accuracy: 0.8791
Epoch 2/50
547/547 [==============================] - 4s 8ms/step - loss: 0.0039 - accuracy: 0.9998 - val_loss: 0.4151 - val_accuracy: 0.8792
Epoch 3/50
547/547 [==============================] - 4s 8ms/step - loss: 0.0025 - accuracy: 1.0000 - val_loss: 0.4332 - val_accuracy: 0.8783
Epoch 4/50
547/547 [==============================] - 4s 8ms/step - loss: 0.0017 - accuracy: 1.0000 - val_loss: 0.4525 - val_accuracy: 0.8775
Epoch 5/50
547/547 [==============================] - 4s 8ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 0.4715 - val_accuracy: 0.8776
Epoch 6/50
547/547 [==============================] - 4s 8ms/step - loss: 8.2675e-04 - accuracy: 1.0000 - val_loss: 0.4883 - val_accuracy: 0.8780
Epoch 7/50
547/547 [==============================] - 4s 8ms/step - loss: 5.7694e-04 - accuracy: 1.0000 - val_loss: 0.5066 - val_accuracy: 0.8767
Epoch 8/50
547/547 [==============================] - 4s 7ms/step - loss: 4.0273e-04 - accuracy: 1.0000 - val_loss: 0.5217 - val_accuracy: 0.8780
Epoch 9/50
547/547 [==============================] - 4s 8ms/step - loss: 2.9019e-04 - accuracy: 1.0000 - val_loss: 0.5388 - val_accuracy: 0.8779
Epoch 10/50
547/547 [==============================] - 4s 8ms/step - loss: 2.0808e-04 - accuracy: 1.0000 - val_loss: 0.5567 - val_accuracy: 0.8771
Epoch 11/50
547/547 [==============================] - 4s 8ms/step - loss: 1.5084e-04 - accuracy: 1.0000 - val_loss: 0.5729 - val_accuracy: 0.8772
Epoch 12/50
547/547 [==============================] - 4s 7ms/step - loss: 1.0910e-04 - accuracy: 1.0000 - val_loss: 0.5897 - val_accuracy: 0.8780
Epoch 13/50
547/547 [==============================] - 4s 8ms/step - loss: 8.0560e-05 - accuracy: 1.0000 - val_loss: 0.6059 - val_accuracy: 0.8753
Epoch 14/50
547/547 [==============================] - 4s 8ms/step - loss: 5.8915e-05 - accuracy: 1.0000 - val_loss: 0.6253 - val_accuracy: 0.8752
Epoch 15/50
547/547 [==============================] - 4s 8ms/step - loss: 4.2483e-05 - accuracy: 1.0000 - val_loss: 0.6378 - val_accuracy: 0.8773
Epoch 16/50
547/547 [==============================] - 4s 7ms/step - loss: 3.1439e-05 - accuracy: 1.0000 - val_loss: 0.6536 - val_accuracy: 0.8769
Epoch 17/50
547/547 [==============================] - 4s 7ms/step - loss: 2.3223e-05 - accuracy: 1.0000 - val_loss: 0.6706 - val_accuracy: 0.8765
Epoch 18/50
547/547 [==============================] - 4s 8ms/step - loss: 1.7358e-05 - accuracy: 1.0000 - val_loss: 0.6857 - val_accuracy: 0.8768
Epoch 19/50
547/547 [==============================] - 4s 7ms/step - loss: 1.2695e-05 - accuracy: 1.0000 - val_loss: 0.7037 - val_accuracy: 0.8768
Epoch 20/50
547/547 [==============================] - 4s 7ms/step - loss: 9.5377e-06 - accuracy: 1.0000 - val_loss: 0.7183 - val_accuracy: 0.8769
Epoch 21/50
547/547 [==============================] - 4s 7ms/step - loss: 7.0912e-06 - accuracy: 1.0000 - val_loss: 0.7344 - val_accuracy: 0.8764
Epoch 22/50
547/547 [==============================] - 4s 8ms/step - loss: 5.3129e-06 - accuracy: 1.0000 - val_loss: 0.7492 - val_accuracy: 0.8767
Epoch 23/50
547/547 [==============================] - 4s 7ms/step - loss: 3.9692e-06 - accuracy: 1.0000 - val_loss: 0.7646 - val_accuracy: 0.8763
Epoch 24/50
547/547 [==============================] - 4s 7ms/step - loss: 3.0072e-06 - accuracy: 1.0000 - val_loss: 0.7799 - val_accuracy: 0.8760
Epoch 25/50
547/547 [==============================] - 4s 7ms/step - loss: 2.2749e-06 - accuracy: 1.0000 - val_loss: 0.7950 - val_accuracy: 0.8753
Epoch 26/50
547/547 [==============================] - 4s 7ms/step - loss: 1.7405e-06 - accuracy: 1.0000 - val_loss: 0.8111 - val_accuracy: 0.8764
Epoch 27/50
547/547 [==============================] - 4s 8ms/step - loss: 1.3330e-06 - accuracy: 1.0000 - val_loss: 0.8238 - val_accuracy: 0.8751
Epoch 28/50
547/547 [==============================] - 4s 8ms/step - loss: 1.0392e-06 - accuracy: 1.0000 - val_loss: 0.8391 - val_accuracy: 0.8757
Epoch 29/50
547/547 [==============================] - 4s 8ms/step - loss: 8.1422e-07 - accuracy: 1.0000 - val_loss: 0.8532 - val_accuracy: 0.8757
Epoch 30/50
547/547 [==============================] - 4s 8ms/step - loss: 6.2922e-07 - accuracy: 1.0000 - val_loss: 0.8663 - val_accuracy: 0.8755
Epoch 31/50
547/547 [==============================] - 4s 8ms/step - loss: 5.0262e-07 - accuracy: 1.0000 - val_loss: 0.8802 - val_accuracy: 0.8748
Epoch 32/50
547/547 [==============================] - 4s 8ms/step - loss: 4.0353e-07 - accuracy: 1.0000 - val_loss: 0.8913 - val_accuracy: 0.8759
Epoch 33/50
547/547 [==============================] - 4s 8ms/step - loss: 3.2220e-07 - accuracy: 1.0000 - val_loss: 0.9047 - val_accuracy: 0.8756
Epoch 34/50
547/547 [==============================] - 4s 8ms/step - loss: 2.6294e-07 - accuracy: 1.0000 - val_loss: 0.9168 - val_accuracy: 0.8756
Epoch 35/50
547/547 [==============================] - 5s 8ms/step - loss: 2.1843e-07 - accuracy: 1.0000 - val_loss: 0.9294 - val_accuracy: 0.8741
Epoch 36/50
547/547 [==============================] - 4s 8ms/step - loss: 1.8534e-07 - accuracy: 1.0000 - val_loss: 0.9441 - val_accuracy: 0.8740
Epoch 37/50
547/547 [==============================] - 5s 9ms/step - loss: 1.5458e-07 - accuracy: 1.0000 - val_loss: 0.9484 - val_accuracy: 0.8739
Epoch 38/50
547/547 [==============================] - 4s 8ms/step - loss: 1.3205e-07 - accuracy: 1.0000 - val_loss: 0.9569 - val_accuracy: 0.8744
Epoch 39/50
547/547 [==============================] - 4s 8ms/step - loss: 1.1499e-07 - accuracy: 1.0000 - val_loss: 0.9649 - val_accuracy: 0.8748
Epoch 40/50
547/547 [==============================] - 4s 8ms/step - loss: 1.0106e-07 - accuracy: 1.0000 - val_loss: 0.9720 - val_accuracy: 0.8747
Epoch 41/50
547/547 [==============================] - 5s 8ms/step - loss: 8.8453e-08 - accuracy: 1.0000 - val_loss: 0.9819 - val_accuracy: 0.8735
Epoch 42/50
547/547 [==============================] - 4s 8ms/step - loss: 8.2149e-08 - accuracy: 1.0000 - val_loss: 0.9891 - val_accuracy: 0.8756
Epoch 43/50
547/547 [==============================] - 4s 8ms/step - loss: 7.1443e-08 - accuracy: 1.0000 - val_loss: 0.9949 - val_accuracy: 0.8735
Epoch 44/50
547/547 [==============================] - 5s 8ms/step - loss: 6.4997e-08 - accuracy: 1.0000 - val_loss: 1.0031 - val_accuracy: 0.8741
Epoch 45/50
547/547 [==============================] - 4s 8ms/step - loss: 5.9016e-08 - accuracy: 1.0000 - val_loss: 1.0049 - val_accuracy: 0.8749
Epoch 46/50
547/547 [==============================] - 4s 8ms/step - loss: 5.6519e-08 - accuracy: 1.0000 - val_loss: 1.0112 - val_accuracy: 0.8747
Epoch 47/50
547/547 [==============================] - 4s 8ms/step - loss: 5.0462e-08 - accuracy: 1.0000 - val_loss: 1.0163 - val_accuracy: 0.8739
Epoch 48/50
547/547 [==============================] - 4s 8ms/step - loss: 4.6240e-08 - accuracy: 1.0000 - val_loss: 1.0201 - val_accuracy: 0.8749
Epoch 49/50
547/547 [==============================] - 4s 8ms/step - loss: 4.3036e-08 - accuracy: 1.0000 - val_loss: 1.0251 - val_accuracy: 0.8747
Epoch 50/50
547/547 [==============================] - 4s 7ms/step - loss: 3.9692e-08 - accuracy: 1.0000 - val_loss: 1.0321 - val_accuracy: 0.8740
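###Markdown
The log above shows training accuracy reaching 1.0 while validation loss keeps climbing after the first epoch, i.e. the model overfits. A hedged sketch (not part of the original notebook) of how early stopping could cut training short once validation loss stops improving:
###Code
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
# note: run this on a freshly built and compiled model; continuing from already-overfit weights would stop almost immediately
hist = model.fit(pad_x_train, y_train, epochs=50, validation_split=0.3, batch_size=32, callbacks=[early_stop])
###Output
_____no_output_____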
###Markdown
Evaluation
###Code
model.evaluate(pad_x_train, y_train)
# low loss and high accuracy on the training data
# transform the test data the same way before feeding it to the model
len(x_test[20]) # must be padded to length 500 before it goes into the model
pad_x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500)
len(pad_x_test[20])
model.evaluate(pad_x_test)
###Output
_____no_output_____ |