path (string, 7 to 265 chars) | concatenated_notebook (string, 46 to 17M chars)
---|---
Parabolic PDEs Practice/Parabolic PDEs - Implicit Method.ipynb | ###Markdown
Question: The boundary and initial conditions of an aluminium rod are: $$T(x,0) = 0^{\circ}C \;\; \text{where} \;\; 0\leq x\leq 10$$ $$T(0,t) = 100^{\circ}C$$ $$T(10,t) = 50^{\circ}C$$ Given the thermal conductivity of aluminium $k=0.49$, density $\rho=2.7$, and heat capacity $C=0.2174$, solve for the temperature distribution over time assuming $\Delta x=2$ and $\Delta t=0.1$. Question from: https://www.youtube.com/watch?v=XjGiN7Fvyo0&t=63s Conduction equation used: $$\alpha \frac{\partial^2 T}{\partial x^2} = \frac{\partial T}{\partial t}$$ Implicit (backward-Euler) discretization: $$T_i^{j+1} = T_i^{j} + \lambda \left(T_{i+1}^{j+1} - 2T_i^{j+1} + T_{i-1}^{j+1}\right)$$ where $\lambda = \alpha \frac{\Delta t}{\Delta x^2}$ and $\alpha = \frac{k}{\rho C}$.
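Collecting the unknowns at time level $j+1$ on the left-hand side (a restatement for reference, using the numbers above with $\alpha = k/(\rho C)$, so $\lambda \approx 0.0209$) gives one tridiagonal equation per interior node: $$-\lambda T_{i-1}^{j+1} + (1+2\lambda)\,T_i^{j+1} - \lambda T_{i+1}^{j+1} = T_i^{j},$$ with the known boundary temperatures moved to the right-hand side for the two nodes adjacent to the ends. This is why the code below uses main-diagonal entries $1+2\lambda \approx 1.04175$, off-diagonal entries $-\lambda$, and first and last right-hand-side entries $0 + \lambda \cdot 100 \approx 2.0875$, and why each time step reduces to one tridiagonal solve (TDMA).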
###Code
import numpy as np
import matplotlib.pyplot as plt
#Initializing the variables
x=10
dx=2
nx=6
nt=1
dt=0.1
k=0.49
rho=2.7
C=0.2174
#lambda = alpha*dt/dx**2, with thermal diffusivity alpha = k/(rho*C)
lamda=k/(C*rho)*dt/dx**2
print(lamda)
T_init=np.zeros(nx)
xarr=np.linspace(0,x,nx)
print(xarr)
#Boundary conditions
#Note: the problem statement gives T(10,t)=50 C, but this notebook (and the hard-coded
#right-hand side and plot below) uses 100 C at the right boundary as well.
Tleft=100
Tright=100
T_init[0]=Tleft
T_init[-1]=Tright
print(T_init)
#Implicit numerical solution using TDMA (Thomas algorithm)
#Lower (sub-) diagonal entries
a=[-lamda]*3+[0]
#Main diagonal entries: 1 + 2*lambda ~= 1.04175
b=[1.04175]*4
#Upper (super-) diagonal entries
#Note: given TDMAAlgo's indexing below (c[i] is row i's super-diagonal), the padding zero
#should arguably trail rather than lead (c=[-lamda]*3+[0]); the printed results below were
#produced with the list as originally written.
c=[0]+[-lamda]*3
#Right hand side: interior T plus lambda*T_boundary at the two end nodes (0 + lamda*100 ~= 2.0875)
d=[2.0875]+[0]*2+[2.0875]
print(a, b, c, d)
def TDMAAlgo(a,b,c,d):
    #Thomas algorithm (TDMA) for a tridiagonal system: a = sub-diagonal,
    #b = main diagonal, c = super-diagonal, d = right-hand side
    n = len(d)
    w = np.zeros(n-1,float)
    g = np.zeros(n, float)
    p = np.zeros(n,float)
    #Forward elimination
    w[0] = c[0]/b[0]
    g[0] = d[0]/b[0]
    for i in range(1,n-1):
        w[i] = c[i]/(b[i] - a[i-1]*w[i-1])
    for i in range(1,n):
        g[i] = (d[i] - a[i-1]*g[i-1])/(b[i] - a[i-1]*w[i-1])
    #Back substitution
    p[n-1] = g[n-1]
    for i in range(n-1,0,-1):
        p[i-1] = g[i-1] - w[i-1]*p[i]
    return p
labels = {1: "At t=0.1s", 2: "At t=0.2s", 3: "At t=0.3s", 4: "At t=0.4s", 5: "At t=0.5s", 6: "At t=0.6s", 7: "At t=0.7s", 8: "At t=0.8s", 9: "At t=0.9s", 10: "At t=1.0s"}
for it in range(0, 10):
T_comp=TDMAAlgo(a,b,c,d)
print("Iteration: ", it+1)
print(T_comp)
print("\n")
d = [T_comp[0] + lamda*100, T_comp[1], T_comp[2], T_comp[3] + lamda*100]
to_plot = [100] + list(T_comp) + [100]
plt.plot(xarr, to_plot, label=labels[it+1])
plt.legend()
###Output
Iteration: 1
[2.00383969 0.04096419 0.04098031 2.00466066]
Iteration: 2
[3.92684737 0.12040322 0.12048128 3.93004906]
Iteration: 3
[5.77278707 0.23595646 0.2361832 5.78059194]
Iteration: 4
[7.54474743 0.3853757 0.38588805 7.55997005]
Iteration: 5
[9.24569329 0.56652053 0.56751295 9.27167491]
Iteration: 6
[10.8784707 0.77735376 0.77908405 10.91901857]
Iteration: 7
[12.44581162 1.01593713 1.01873071 12.50514285]
Iteration: 8
[13.95033855 1.28042707 1.28468001 14.03302816]
Iteration: 9
[15.39456886 1.56907064 1.57525165 15.50550179]
Iteration: 10
[16.78091904 1.88020169 1.888853 16.92524582]
|
week-4/jupyter_build/00_python_introduction.ipynb | ###Markdown
Web Data Scraping AcknowledgementsThese notebooks are adaptations from a 5 session mini course at the University of Colorado. The github repo can be found [here](https://github.com/CU-ITSS/Web-Data-Scraping-S2019) [Spring 2019 ITSS Mini-Course] The course is taught by [Brian C. Keegan, Ph.D.](http://brianckeegan.com/) [Assistant Professor, Department of Information Science](https://www.colorado.edu/cmci/people/information-science/brian-c-keegan). They have been adapted for relevant content and integration with Docker so that we all have the same environment. Professor Keegan suggests using a most recent version of Python (3.7) which is set in the `requirements.txt` file.The Spring ITSS Mini-Course was adapted from a number of sources including [Allison Morgan](https://allisonmorgan.github.io/) for the [2018 Summer Institute for Computational Social Science](https://github.com/allisonmorgan/sicss_boulder), which were in turn derived from [other resources](https://github.com/simonmunzert/web-scraping-with-r-extended-edition) developed by [Simon Munzert](http://simonmunzert.github.io/) and [Chris Bail](http://www.chrisbail.net/). This notebook is adapted from excellent notebooks in Dr. [Cody Buntain](http://cody.bunta.in/)'s seminar on [Social Media and Crisis Informatics](http://cody.bunta.in/teaching/2018_winter_umd_inst728e/) as well as the [PRAW documentation](https://praw.readthedocs.io/en/latest/). Introduction to Jupyter NotebooksThis is an example of a code cell below. You type the code into the cell and run the cell with the "Run" button in the toolbar or pressing Shift+Enter.
###Code
name = 'Nate Langholz'
print(name)
2+2
###Output
_____no_output_____
###Markdown
Forms of structured dataThere are three primary forms of structured data you will encounter on the web: HTML, XML, and JSON. HTMLWe can use Python's `requests` library to make a valid HTTP "get" request to the Oscars' web server for the 90th Academy Awards, which will return the raw HTML. There are more than 144,000 characters in the document!
###Code
import requests
# Pretend to be a web browser and make a get request of a webpage
oscars90_request = requests.get('https://www.oscars.org/oscars/ceremonies/2018')
# The .text returns the text from the request
oscars90_html = oscars90_request.text
# The oscars90_html is a string, we can use the common len function to ask how long the string is (in characters)
len(oscars90_html)
###Output
_____no_output_____
###Markdown
Let's look at the first thousand characters. Mostly declarations to handle Internet Explorer's notorious refusal to follow standards—stuff you don't need to worry about.
###Code
# The [0:1000] is a slicing notation
# It gets the first (position 0 in Python) character until the 1000th character
print(oscars90_html[0:1000])
###Output
<!DOCTYPE html>
<!--[if IEMobile 7]><html class="no-js ie iem7" lang="en" dir="ltr"><![endif]-->
<!--[if lte IE 6]><html class="no-js ie lt-ie9 lt-ie8 lt-ie7" lang="en" dir="ltr"><![endif]-->
<!--[if (IE 7)&(!IEMobile)]><html class="no-js ie lt-ie9 lt-ie8" lang="en" dir="ltr"><![endif]-->
<!--[if IE 8]><html class="no-js ie lt-ie9" lang="en" dir="ltr"><![endif]-->
<!--[if (gte IE 9)|(gt IEMobile 7)]><html class="no-js ie" lang="en" dir="ltr" prefix="og: http://ogp.me/ns# article: http://ogp.me/ns/article# book: http://ogp.me/ns/book# profile: http://ogp.me/ns/profile# video: http://ogp.me/ns/video# product: http://ogp.me/ns/product#"><![endif]-->
<!--[if !IE]><!--><html class="no-js" lang="en" dir="ltr" prefix="og: http://ogp.me/ns# article: http://ogp.me/ns/article# book: http://ogp.me/ns/book# profile: http://ogp.me/ns/profile# video: http://ogp.me/ns/video# product: http://ogp.me/ns/product#"><!--<![endif]-->
<head>
<meta charset="utf-8" /><script type="text/javascript
###Markdown
Looking at 1,000 characters about a third of the way through the document, we can see some of the structure we found with the "Inspect" tool above corresponding to the closing lines of the "Actor in a Supporting Role" grouping and the opening lines of the "Actress in a Leading Role" grouping.
###Code
# You can slice any ranges you'd like up, as long as it's not beyond the length of the string
# oscars90_html[144588:] would return an error
print(oscars90_html[50000:51000])
###Output
esc field--type-text-long field--label-hidden ellipsis"><div class="field-items"><div class="field-item even">Lee Unkrich and Darla K. Anderson</div></div></div> </div>
</div>
</div></li>
</ul></div> </div>
<div class="views-field views-field-field-more-highlights"> <div class="field-content"><a href="https://www.oscars.org/oscars/ceremonies/90th-oscar-highlights" class="btn-link">View More Highlights</a></div> </div>
<div class="views-field views-field-field-memorable-moments"> <span class="views-label views-label-field-memorable-moments">Memorable Moments</span> <div class="field-content"><ul><li><div class="field-collection-view clearfix view-mode-full"><div class="entity entity-field-collection-item field-collection-item-field-memorable-moments clearfix" class="entity entity-field-collection-item field-collection-item-field-memorable-moments">
<div class="content">
<div class="field field--name-field-ceremonies-media field--type-image field--label-hidd
###Markdown
We're not actually going to be slicing the text to get this structured data out; we'll use a wonderful tool called [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) to do the heavy lifting for us. XMLXML has taken on something of an afterlife as the official data standard for the U.S. Congress. The [House](http://clerk.house.gov/index.aspx) and [Senate](https://www.senate.gov/general/XML.htm) both release information about members, committees, schedules, legislation, and votes in XML. These are immaculately formatted and documented and remarkably up-to-date: the data for members of the 116th Congress are already posted. Use the `requests` library to make an HTTP GET request to the House's webserver and get the list of current member data.
###Code
house_raw = requests.get('http://clerk.house.gov/xml/lists/MemberData.xml').text
senate_raw = requests.get('https://www.senate.gov/legislative/LIS_MEMBER/cvc_member_data.xml').text
###Output
_____no_output_____
###Markdown
This data is still in a string format (`type(house_raw)`), so it's difficult to search and navigate. Let's make our first soup together using [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
###Code
# First import the library
from bs4 import BeautifulSoup
# Then make the soup, specifying the "lxml" parser
house_soup = BeautifulSoup(house_raw,'lxml')
###Output
_____no_output_____
###Markdown
What's so great about this soup-ified string? We now have a suite of new functions and methods that let us navigate the tree. First, let's inspect the different tags/elements in this tree of House member data. This is the full tree of data.
###Code
# Make an empty list to store data
children = []
# Start a loop to go through all the children tags in house_soup
for tag in house_soup.findChildren():
# If the name of the tag (tag.name) is not already in the children list
if tag.name not in children:
# Add the name of the tag to the children list
children.append(tag.name)
# Look at the list members
children
###Output
_____no_output_____
###Markdown
We can navigate through the tree. You won't do this in practice, but it's helpful for debugging. In this case, we navigated from the root node (`html`) into the `body` tag, then the `memberdata` tag, then the `members` tag. There are 441 descendants at this level, corresponding to the 435 voting seats and the 6 non-voting seats.
###Code
len(house_soup.html.body.memberdata.members)
###Output
_____no_output_____
###Markdown
You can also short-cut to the members tag directly rather than navigating down the parent elements.
###Code
len(house_soup.members)
###Output
_____no_output_____
###Markdown
The `.contents` method is great for getting a list of the children below the tag as a list. We can use the `[0]` slice to get the first member and their data in the list. Interestingly, the `` tags are currently empty since these have not yet been allocated, but will in the next few weeks.
###Code
house_soup.members.contents[0]
###Output
_____no_output_____
###Markdown
You could keep navigating down the tree from here.
###Code
house_soup.members.contents[0].bioguideid
###Output
_____no_output_____
###Markdown
Note that this navigation method breaks when the tag has a hyphen in it.
###Code
house_soup.members.contents[0].state-fullname
###Output
_____no_output_____
###Markdown
Instead you can use the `.find()` method to handle these hyphenated cases.
###Code
house_soup.members.contents[0].find('state-fullname')
###Output
_____no_output_____
###Markdown
We can access the text inside the tag with `.text`
###Code
house_soup.members.contents[0].find('state-fullname').text
###Output
_____no_output_____
###Markdown
The `.find_all()` method will be your primary tool when working with structured data. The `<party>` tag codes party membership (D=Democratic, R=Republican) for each representative.
###Code
house_soup.find_all('party')[:10]
###Output
_____no_output_____
###Markdown
There [should be](https://en.wikipedia.org/wiki/116th_United_States_Congress#Party_summary) 235 Democrats and 199 Republicans, plus the other non-voting members from the territories.
###Code
# Initialize a counter
democrats = 0
republicans = 0
other = 0
# Loop through each element of the caucus tags
for p in house_soup.find_all('party'):
# Check if it's D, R, or something else
if p.text == "D":
# Increment the appropriate counter
democrats += 1
elif p.text == "R":
republicans += 1
else:
other += 1
print("There are {0} Democrats, {1} Republicans, and {2} others in the 116th Congress.".format(democrats,republicans,other))
###Output
There are 239 Democrats, 199 Republicans, and 3 others in the 116th Congress.
###Markdown
JSONJSON is attractive for programmers using JavaScript and Python because it can represent a mix of different data types. We need to make a brief digression into Python's fundamental data structures in order to understand the contemporary attraction to JSON. Python has a few fundamental data types for representing collections of information:* **Lists**: This is a basic ordered data structure that can contain strings, ints, and floats.* **Dictionaries**: This is an unordered data structure containing key-value pairs, like a phonebook.Let's look at some examples of lists and dictionaries and then we can try the exercises below. ExercisesBelow is an example of a [tweet status](https://dev.twitter.com/overview/api/tweets) object that Twitter's [API returns](https://dev.twitter.com/rest/reference/get/statuses/show/id). This `obama_tweet` dictionary corresponds to [this tweet](https://twitter.com/BarackObama/status/831527113211645959). This is a classic example of a JSON object containing a mixture of dictionaries, lists, lists of dictionaries, dictionaries of lists, *etc*.
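Before diving into the tweet object below, here is a quick self-contained illustration of a bare list and a bare dictionary (a small sketch added for reference; the values are made up):

```python
# A list: ordered, indexed by integer position
hashtags = ['ThanksObama', 'ValentinesDay']
print(hashtags[0])             # first element

# A dictionary: key-value pairs, indexed by key
user = {'screen_name': 'BarackObama', 'followers_count': 84814791}
print(user['screen_name'])

# They nest freely, which is exactly what a JSON object looks like once loaded into Python
tweet = {'user': user, 'hashtags': hashtags}
print(tweet['user']['followers_count'])
```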
###Code
obama_tweet = {'created_at': 'Tue Feb 14 15:34:47 +0000 2017',
'favorite_count': 1023379,
'hashtags': [],
'id': 831527113211645959,
'id_str': '831527113211645959',
'lang': 'en',
'media': [{'display_url': 'pic.twitter.com/O0UhJWoqGN',
'expanded_url': 'https://twitter.com/BarackObama/status/831527113211645959/photo/1',
'id': 831526916398149634,
'media_url': 'http://pbs.twimg.com/media/C4otUykWcAIbSy1.jpg',
'media_url_https': 'https://pbs.twimg.com/media/C4otUykWcAIbSy1.jpg',
'sizes': {'large': {'h': 800, 'resize': 'fit', 'w': 1200},
'medium': {'h': 800, 'resize': 'fit', 'w': 1200},
'small': {'h': 453, 'resize': 'fit', 'w': 680},
'thumb': {'h': 150, 'resize': 'crop', 'w': 150}},
'type': 'photo',
'url': 'https://t.co/O0UhJWoqGN'}],
'retweet_count': 252266,
'source': '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>',
'text': 'Happy Valentine’s Day, @michelleobama! Almost 28 years with you, but it always feels new. https://t.co/O0UhJWoqGN',
'urls': [],
'user': {'created_at': 'Mon Mar 05 22:08:25 +0000 2007',
'description': 'Dad, husband, President, citizen.',
'favourites_count': 10,
'followers_count': 84814791,
'following': True,
'friends_count': 631357,
'id': 813286,
'lang': 'en',
'listed_count': 221906,
'location': 'Washington, DC',
'name': 'Barack Obama',
'profile_background_color': '77B0DC',
'profile_background_image_url': 'http://pbs.twimg.com/profile_background_images/451819093436268544/kLbRvwBg.png',
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/813286/1484945688',
'profile_image_url': 'http://pbs.twimg.com/profile_images/822547732376207360/5g0FC8XX_normal.jpg',
'profile_link_color': '2574AD',
'profile_sidebar_fill_color': 'C2E0F6',
'profile_text_color': '333333',
'screen_name': 'BarackObama',
'statuses_count': 15436,
'time_zone': 'Eastern Time (US & Canada)',
'url': 'https://t.co/93Y27HEnnX',
'utc_offset': -18000,
'verified': True},
'user_mentions': [{'id': 409486555,
'name': 'Michelle Obama',
'screen_name': 'MichelleObama'}]}
###Output
_____no_output_____
###Markdown
1. What are the top-most keys in the `obama_tweet` object?2. When was this tweet sent?3. Does this tweet mention anyone?4. How many retweets did this tweet receive (at the time I collected it)?5. How many followers does the "user" who wrote this tweet have?6. What's the "media_url" for the image in this tweet?
###Code
# Question 1:
obama_tweet.keys()
# Question 2:
obama_tweet.get('created_at')
# Question 3:
obama_tweet.get('user_mentions')
# Question 4:
obama_tweet.get('retweet_count')
# Question 5:
obama_tweet['user']['followers_count']
# Question 6: Add [0] when it is a list of contents
obama_tweet['media'][0]['media_url']
###Output
_____no_output_____
###Markdown
Web Data Scraping AcknowledgementsThese notebooks are adaptations from a 5 session mini course at the University of Colorado. The github repo can be found [here](https://github.com/CU-ITSS/Web-Data-Scraping-S2019) [Spring 2019 ITSS Mini-Course] The course is taught by [Brian C. Keegan, Ph.D.](http://brianckeegan.com/) [Assistant Professor, Department of Information Science](https://www.colorado.edu/cmci/people/information-science/brian-c-keegan). They have been adapted for relevant content and integration with Docker so that we all have the same environment. Professor Keegan suggests using a most recent version of Python (3.7) which is set in the `requirements.txt` file.The Spring ITSS Mini-Course was adapted from a number of sources including [Allison Morgan](https://allisonmorgan.github.io/) for the [2018 Summer Institute for Computational Social Science](https://github.com/allisonmorgan/sicss_boulder), which were in turn derived from [other resources](https://github.com/simonmunzert/web-scraping-with-r-extended-edition) developed by [Simon Munzert](http://simonmunzert.github.io/) and [Chris Bail](http://www.chrisbail.net/). This notebook is adapted from excellent notebooks in Dr. [Cody Buntain](http://cody.bunta.in/)'s seminar on [Social Media and Crisis Informatics](http://cody.bunta.in/teaching/2018_winter_umd_inst728e/) as well as the [PRAW documentation](https://praw.readthedocs.io/en/latest/). Introduction to Jupyter NotebooksThis is an example of a code cell below. You type the code into the cell and run the cell with the "Run" button in the toolbar or pressing Shift+Enter.
###Code
name = 'Nate Langholz'
print(name)
2+2
# Shell commands need a leading "!" to run from a notebook code cell; note this URL returns the
# GitHub HTML page rather than the raw script (raw.githubusercontent.com would be needed for that)
!curl -s https://github.com/natelangholz/stat418-tools-in-datascience/blob/hw1-submissions/week-2/hw1/homework-submissions/hw1_starter.sh | bash
###Output
_____no_output_____
###Markdown
Forms of structured dataThere are three primary forms of structured data you will encounter on the web: HTML, XML, and JSON. HTMLWe can use Python's `requests` library to make a valid HTTP "get" request to the Oscars' web server for the 90th Academy Awards, which will return the raw HTML. There are more than 144,000 characters in the document!
###Code
import requests
# Pretend to be a web browser and make a get request of a webpage
oscars90_request = requests.get('https://www.oscars.org/oscars/ceremonies/2018')
# The .text returns the text from the request
oscars90_html = oscars90_request.text
# The oscars90_html is a string, we can use the common len function to ask how long the string is (in characters)
len(oscars90_html)
###Output
_____no_output_____
###Markdown
Let's look at the first thousand characters. Mostly declarations to handle Internet Explorer's notorious refusal to follow standards—stuff you don't need to worry about.
###Code
# The [0:1000] is a slicing notation
# It gets the first (position 0 in Python) character until the 1000th character
print(oscars90_html[0:1000])
###Output
_____no_output_____
###Markdown
Looking at 1,000 characters about a third of the way through the document, we can see some of the structure we found with the "Inspect" tool above corresponding to the closing lines of the "Actor in a Supporting Role" grouping and the opening lines of the "Actress in a Leading Role" grouping.
###Code
# You can slice any ranges you'd like up, as long as it's not beyond the length of the string
# oscars90_html[144588:] would return an error
print(oscars90_html[50000:51000])
###Output
_____no_output_____
###Markdown
We're not actually going to be slicing the text to get this structured data out; we'll use a wonderful tool called [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) to do the heavy lifting for us. XMLXML has taken on something of an afterlife as the official data standard for the U.S. Congress. The [House](http://clerk.house.gov/index.aspx) and [Senate](https://www.senate.gov/general/XML.htm) both release information about members, committees, schedules, legislation, and votes in XML. These are immaculately formatted and documented and remarkably up-to-date: the data for members of the 116th Congress are already posted. Use the `requests` library to make an HTTP GET request to the House's webserver and get the list of current member data.
###Code
house_raw = requests.get('http://clerk.house.gov/xml/lists/MemberData.xml').text
senate_raw = requests.get('https://www.senate.gov/legislative/LIS_MEMBER/cvc_member_data.xml').text
###Output
_____no_output_____
###Markdown
This data is still in a string format (`type(house_raw)`), so it's difficult to search and navigate. Let's make our first soup together using [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
###Code
# First import the library
from bs4 import BeautifulSoup
# Then make the soup, specifying the "lxml" parser
house_soup = BeautifulSoup(house_raw,'lxml')
###Output
_____no_output_____
###Markdown
What's so great about this soup-ified string? We now have a suite of new functions and methods that let us navigate the tree. First, let's inspect the different tags/elements in this tree of House member data. This is the full tree of data.
###Code
# Make an empty list to store data
children = []
# Start a loop to go through all the children tags in house_soup
for tag in house_soup.findChildren():
# If the name of the tag (tag.name) is not already in the children list
if tag.name not in children:
# Add the name of the tag to the children list
children.append(tag.name)
# Look at the list members
children
###Output
_____no_output_____
###Markdown
We can navigate through the tree. You won't do this in practice, but it's helpful for debugging. In this case, we navigated from the root node (`html`) into the `body` tag, then the `memberdata` tag, then the `members` tag. There are 441 descendants at this level, corresponding to the 435 voting seats and the 6 non-voting seats.
###Code
len(house_soup.html.body.memberdata.members)
###Output
_____no_output_____
###Markdown
You can also short-cut to the members tag directly rather than navigating down the parent elements.
###Code
len(house_soup.members)
###Output
_____no_output_____
###Markdown
The `.contents` method is great for getting a list of the children below the tag as a list. We can use the `[0]` slice to get the first member and their data in the list. Interestingly, the `` tags are currently empty since these have not yet been allocated, but will in the next few weeks.
###Code
house_soup.members.contents[0]
###Output
_____no_output_____
###Markdown
You could keep navigating down the tree from here.
###Code
house_soup.members.contents[0].bioguideid
###Output
_____no_output_____
###Markdown
Note that this navigation method breaks when the tag has a hyphen in it.
###Code
house_soup.members.contents[0].state-fullname
###Output
_____no_output_____
###Markdown
Instead you can use the `.find()` method to handle these hyphenated cases.
###Code
house_soup.members.contents[0].find('state-fullname')
###Output
_____no_output_____
###Markdown
We can access the text inside the tag with `.text`
###Code
house_soup.members.contents[0].find('state-fullname').text
###Output
_____no_output_____
###Markdown
The `.find_all()` method will be your primary tool when working with structured data. The `<party>` tag codes party membership (D=Democratic, R=Republican) for each representative.
###Code
house_soup.find_all('party')[:10]
###Output
_____no_output_____
###Markdown
There [should be](https://en.wikipedia.org/wiki/116th_United_States_Congress#Party_summary) 235 Democrats and 199 Republicans, plus the other non-voting members from the territories.
###Code
# Initialize a counter
democrats = 0
republicans = 0
other = 0
# Loop through each element of the caucus tags
for p in house_soup.find_all('party'):
# Check if it's D, R, or something else
if p.text == "D":
# Increment the appropriate counter
democrats += 1
elif p.text == "R":
republicans += 1
else:
other += 1
print("There are {0} Democrats, {1} Republicans, and {2} others in the 116th Congress.".format(democrats,republicans,other))
###Output
_____no_output_____
###Markdown
JSONJSON is attractive for programmers using JavaScript and Python because it can represent a mix of different data types. We need to make a brief digression into Python's fundamental data structures in order to understand the contemporary attraction to JSON. Python has a few fundamental data types for representing collections of information:* **Lists**: This is a basic ordered data structure that can contain strings, ints, and floats.* **Dictionaries**: This is an unordered data structure containing key-value pairs, like a phonebook.Let's look at some examples of lists and dictionaries and then we can try the exercises below. ExercisesBelow is an example of a [tweet status](https://dev.twitter.com/overview/api/tweets) object that Twitter's [API returns](https://dev.twitter.com/rest/reference/get/statuses/show/id). This `obama_tweet` dictionary corresponds to [this tweet](https://twitter.com/BarackObama/status/831527113211645959). This is a classic example of a JSON object containing a mixture of dictionaries, lists, lists of dictionaries, dictionaries of lists, *etc*.
###Code
obama_tweet = {'created_at': 'Tue Feb 14 15:34:47 +0000 2017',
'favorite_count': 1023379,
'hashtags': [],
'id': 831527113211645959,
'id_str': '831527113211645959',
'lang': 'en',
'media': [{'display_url': 'pic.twitter.com/O0UhJWoqGN',
'expanded_url': 'https://twitter.com/BarackObama/status/831527113211645959/photo/1',
'id': 831526916398149634,
'media_url': 'http://pbs.twimg.com/media/C4otUykWcAIbSy1.jpg',
'media_url_https': 'https://pbs.twimg.com/media/C4otUykWcAIbSy1.jpg',
'sizes': {'large': {'h': 800, 'resize': 'fit', 'w': 1200},
'medium': {'h': 800, 'resize': 'fit', 'w': 1200},
'small': {'h': 453, 'resize': 'fit', 'w': 680},
'thumb': {'h': 150, 'resize': 'crop', 'w': 150}},
'type': 'photo',
'url': 'https://t.co/O0UhJWoqGN'}],
'retweet_count': 252266,
'source': '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>',
'text': 'Happy Valentine’s Day, @michelleobama! Almost 28 years with you, but it always feels new. https://t.co/O0UhJWoqGN',
'urls': [],
'user': {'created_at': 'Mon Mar 05 22:08:25 +0000 2007',
'description': 'Dad, husband, President, citizen.',
'favourites_count': 10,
'followers_count': 84814791,
'following': True,
'friends_count': 631357,
'id': 813286,
'lang': 'en',
'listed_count': 221906,
'location': 'Washington, DC',
'name': 'Barack Obama',
'profile_background_color': '77B0DC',
'profile_background_image_url': 'http://pbs.twimg.com/profile_background_images/451819093436268544/kLbRvwBg.png',
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/813286/1484945688',
'profile_image_url': 'http://pbs.twimg.com/profile_images/822547732376207360/5g0FC8XX_normal.jpg',
'profile_link_color': '2574AD',
'profile_sidebar_fill_color': 'C2E0F6',
'profile_text_color': '333333',
'screen_name': 'BarackObama',
'statuses_count': 15436,
'time_zone': 'Eastern Time (US & Canada)',
'url': 'https://t.co/93Y27HEnnX',
'utc_offset': -18000,
'verified': True},
'user_mentions': [{'id': 409486555,
'name': 'Michelle Obama',
'screen_name': 'MichelleObama'}]}
obama_tweet.keys()
obama_tweet['created_at']
obama_tweet['user_mentions']
obama_tweet['retweet_count']
obama_tweet['user']['followers_count']
obama_tweet['user']['profile_background_image_url']
###Output
_____no_output_____
###Markdown
Web Data Scraping AcknowledgementsThese notebooks are adaptations from a 5 session mini course at the University of Colorado. The github repo can be found [here](https://github.com/CU-ITSS/Web-Data-Scraping-S2019) [Spring 2019 ITSS Mini-Course] The course is taught by [Brian C. Keegan, Ph.D.](http://brianckeegan.com/) [Assistant Professor, Department of Information Science](https://www.colorado.edu/cmci/people/information-science/brian-c-keegan). They have been adapted for relevant content and integration with Docker so that we all have the same environment. Professor Keegan suggests using a most recent version of Python (3.7) which is set in the `requirements.txt` file.The Spring ITSS Mini-Course was adapted from a number of sources including [Allison Morgan](https://allisonmorgan.github.io/) for the [2018 Summer Institute for Computational Social Science](https://github.com/allisonmorgan/sicss_boulder), which were in turn derived from [other resources](https://github.com/simonmunzert/web-scraping-with-r-extended-edition) developed by [Simon Munzert](http://simonmunzert.github.io/) and [Chris Bail](http://www.chrisbail.net/). This notebook is adapted from excellent notebooks in Dr. [Cody Buntain](http://cody.bunta.in/)'s seminar on [Social Media and Crisis Informatics](http://cody.bunta.in/teaching/2018_winter_umd_inst728e/) as well as the [PRAW documentation](https://praw.readthedocs.io/en/latest/). Introduction to Jupyter NotebooksThis is an example of a code cell below. You type the code into the cell and run the cell with the "Run" button in the toolbar or pressing Shift+Enter.
###Code
name = 'Nate Langholz'
print(name)
2+2
ls
! curl -s http://users.csc.tntech.edu/~elbrown/access_log.bz2 | bunzip2 - | wc -l
###Output
234794
###Markdown
Forms of structured dataThere are three primary forms of structured data you will encounter on the web: HTML, XML, and JSON. HTMLWe can use Python's `requests` library to make a valid HTTP "get" request to the Oscars' web server for the 90th Academy Awards, which will return the raw HTML. There are more than 144,000 characters in the document!
###Code
import requests
# Pretend to be a web browser and make a get request of a webpage
oscars90_request = requests.get('https://www.oscars.org/oscars/ceremonies/2018')
# The .text returns the text from the request
oscars90_html = oscars90_request.text
# The oscars90_html is a string, we can use the common len function to ask how long the string is (in characters)
len(oscars90_html)
###Output
_____no_output_____
###Markdown
Let's look at the first thousand characters. Mostly declarations to handle Internet Explorer's notorious refusal to follow standards—stuff you don't need to worry about.
###Code
# The [0:1000] is a slicing notation
# It gets the first (position 0 in Python) character until the 1000th character
print(oscars90_html[0:1000])
###Output
<!DOCTYPE html>
<!--[if IEMobile 7]><html class="no-js ie iem7" lang="en" dir="ltr"><![endif]-->
<!--[if lte IE 6]><html class="no-js ie lt-ie9 lt-ie8 lt-ie7" lang="en" dir="ltr"><![endif]-->
<!--[if (IE 7)&(!IEMobile)]><html class="no-js ie lt-ie9 lt-ie8" lang="en" dir="ltr"><![endif]-->
<!--[if IE 8]><html class="no-js ie lt-ie9" lang="en" dir="ltr"><![endif]-->
<!--[if (gte IE 9)|(gt IEMobile 7)]><html class="no-js ie" lang="en" dir="ltr" prefix="og: http://ogp.me/ns# article: http://ogp.me/ns/article# book: http://ogp.me/ns/book# profile: http://ogp.me/ns/profile# video: http://ogp.me/ns/video# product: http://ogp.me/ns/product#"><![endif]-->
<!--[if !IE]><!--><html class="no-js" lang="en" dir="ltr" prefix="og: http://ogp.me/ns# article: http://ogp.me/ns/article# book: http://ogp.me/ns/book# profile: http://ogp.me/ns/profile# video: http://ogp.me/ns/video# product: http://ogp.me/ns/product#"><!--<![endif]-->
<head>
<meta charset="utf-8" /><script type="text/javascript
###Markdown
Looking at 1,000 characters about a third of the way through the document, we can see some of the structure we found with the "Inspect" tool above corresponding to the closing lines of the "Actor in a Supporting Role" grouping and the opening lines of the "Actress in a Leading Role" grouping.
###Code
# You can slice any ranges you'd like up, as long as it's not beyond the length of the string
# oscars90_html[144588:] would return an error
print(oscars90_html[50000:51000])
###Output
</div></div></div><div class="field field--name-field-film-desc field--type-text-long field--label-hidden ellipsis"><div class="field-items"><div class="field-item even">Lee Unkrich and Darla K. Anderson</div></div></div> </div>
</div>
</div></li>
</ul></div> </div>
<div class="views-field views-field-field-more-highlights"> <div class="field-content"><a href="https://www.oscars.org/oscars/ceremonies/90th-oscar-highlights" class="btn-link">View More Highlights</a></div> </div>
<div class="views-field views-field-field-memorable-moments"> <span class="views-label views-label-field-memorable-moments">Memorable Moments</span> <div class="field-content"><ul><li><div class="field-collection-view clearfix view-mode-full"><div class="entity entity-field-collection-item field-collection-item-field-memorable-moments clearfix" class="entity entity-field-collection-item field-collection-item-field-memorable-moments">
<div class="content">
<div class="field field--na
###Markdown
We're not actually going to be slicing the text to get this structured data out; we'll use a wonderful tool called [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) to do the heavy lifting for us. XMLXML has taken on something of an afterlife as the official data standard for the U.S. Congress. The [House](http://clerk.house.gov/index.aspx) and [Senate](https://www.senate.gov/general/XML.htm) both release information about members, committees, schedules, legislation, and votes in XML. These are immaculately formatted and documented and remarkably up-to-date: the data for members of the 116th Congress are already posted. Use the `requests` library to make an HTTP GET request to the House's webserver and get the list of current member data.
###Code
house_raw = requests.get('http://clerk.house.gov/xml/lists/MemberData.xml').text
senate_raw = requests.get('https://www.senate.gov/legislative/LIS_MEMBER/cvc_member_data.xml').text
###Output
_____no_output_____
###Markdown
This data is still in a string format (`type(house_raw)`), so it's difficult to search and navigate. Let's make our first soup together using [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
###Code
# First import the library
from bs4 import BeautifulSoup
# Make the soup (this copy omits the explicit "lxml" parser argument, so BeautifulSoup falls back to a default parser)
house_soup = BeautifulSoup(house_raw)
###Output
_____no_output_____
###Markdown
What's so great about this soup-ified string? We now have a suite of new functions and methods that let us navigate the tree. First, let's inspect the different tags/elements in this tree of House member data. This is the full tree of data.
###Code
# Make an empty list to store data
children = []
# Start a loop to go through all the children tags in house_soup
for tag in house_soup.findChildren():
# If the name of the tag (tag.name) is not already in the children list
if tag.name not in children:
# Add the name of the tag to the children list
children.append(tag.name)
# Look at the list members
children
###Output
_____no_output_____
###Markdown
We can navigate through the tree. You won't do this in practice, but it's helpful for debugging. In this case, we navigated from the root node (`html`) into the `body` tag, then the `memberdata` tag, then the `members` tag. There are 441 descendants at this level, corresponding to the 435 voting seats and the 6 non-voting seats.
###Code
len(house_soup.html.body.memberdata.members)
###Output
_____no_output_____
###Markdown
You can also short-cut to the members tag directly rather than navigating down the parent elements.
###Code
len(house_soup.members)
###Output
_____no_output_____
###Markdown
The `.contents` method is great for getting a list of the children below the tag as a list. We can use the `[0]` slice to get the first member and their data in the list. Interestingly, the `` tags are currently empty since these have not yet been allocated, but will in the next few weeks.
###Code
house_soup.members.contents[0]
###Output
_____no_output_____
###Markdown
You could keep navigating down the tree from here.
###Code
house_soup.members.contents[0].bioguideid
###Output
_____no_output_____
###Markdown
Note that this navigation method breaks when the tag has a hyphen in it.
###Code
house_soup.members.contents[0].state-fullname
###Output
_____no_output_____
###Markdown
Instead you can use the `.find()` method to handle these hyphenated cases.
###Code
house_soup.members.contents[0].find('state-fullname')
###Output
_____no_output_____
###Markdown
We can access the text inside the tag with `.text`
###Code
house_soup.members.contents[0].find('state-fullname').text
###Output
_____no_output_____
###Markdown
The `.find_all()` method will be your primary tool when working with structured data. The `<party>` tag codes party membership (D=Democratic, R=Republican) for each representative.
###Code
house_soup.find_all('party')[:10]
###Output
_____no_output_____
###Markdown
There [should be](https://en.wikipedia.org/wiki/116th_United_States_Congress#Party_summary) 235 Democrats and 199 Republicans, plus the other non-voting members from the territories.
###Code
# Initialize a counter
democrats = 0
republicans = 0
other = 0
# Loop through each element of the caucus tags
for p in house_soup.find_all('party'):
# Check if it's D, R, or something else
if p.text == "D":
# Increment the appropriate counter
democrats += 1
elif p.text == "R":
republicans += 1
else:
other += 1
print("There are {0} Democrats, {1} Republicans, and {2} others in the 116th Congress.".format(democrats,republicans,other))
###Output
There are 239 Democrats, 199 Republicans, and 3 others in the 116th Congress.
###Markdown
JSONJSON is attractive for programmers using JavaScript and Python because it can represent a mix of different data types. We need to make a brief digression into Python's fundamental data structures in order to understand the contemporary attraction to JSON. Python has a few fundamental data types for representing collections of information:* **Lists**: This is a basic ordered data structure that can contain strings, ints, and floats.* **Dictionaries**: This is an unordered data structure containing key-value pairs, like a phonebook.Let's look at some examples of lists and dictionaries and then we can try the exercises below. ExercisesBelow is an example of a [tweet status](https://dev.twitter.com/overview/api/tweets) object that Twitter's [API returns](https://dev.twitter.com/rest/reference/get/statuses/show/id). This `obama_tweet` dictionary corresponds to [this tweet](https://twitter.com/BarackObama/status/831527113211645959). This is a classic example of a JSON object containing a mixture of dictionaries, lists, lists of dictionaries, dictionaries of lists, *etc*.
###Code
obama_tweet = {'created_at': 'Tue Feb 14 15:34:47 +0000 2017',
'favorite_count': 1023379,
'hashtags': [],
'id': 831527113211645959,
'id_str': '831527113211645959',
'lang': 'en',
'media': [{'display_url': 'pic.twitter.com/O0UhJWoqGN',
'expanded_url': 'https://twitter.com/BarackObama/status/831527113211645959/photo/1',
'id': 831526916398149634,
'media_url': 'http://pbs.twimg.com/media/C4otUykWcAIbSy1.jpg',
'media_url_https': 'https://pbs.twimg.com/media/C4otUykWcAIbSy1.jpg',
'sizes': {'large': {'h': 800, 'resize': 'fit', 'w': 1200},
'medium': {'h': 800, 'resize': 'fit', 'w': 1200},
'small': {'h': 453, 'resize': 'fit', 'w': 680},
'thumb': {'h': 150, 'resize': 'crop', 'w': 150}},
'type': 'photo',
'url': 'https://t.co/O0UhJWoqGN'}],
'retweet_count': 252266,
'source': '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>',
'text': 'Happy Valentine’s Day, @michelleobama! Almost 28 years with you, but it always feels new. https://t.co/O0UhJWoqGN',
'urls': [],
'user': {'created_at': 'Mon Mar 05 22:08:25 +0000 2007',
'description': 'Dad, husband, President, citizen.',
'favourites_count': 10,
'followers_count': 84814791,
'following': True,
'friends_count': 631357,
'id': 813286,
'lang': 'en',
'listed_count': 221906,
'location': 'Washington, DC',
'name': 'Barack Obama',
'profile_background_color': '77B0DC',
'profile_background_image_url': 'http://pbs.twimg.com/profile_background_images/451819093436268544/kLbRvwBg.png',
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/813286/1484945688',
'profile_image_url': 'http://pbs.twimg.com/profile_images/822547732376207360/5g0FC8XX_normal.jpg',
'profile_link_color': '2574AD',
'profile_sidebar_fill_color': 'C2E0F6',
'profile_text_color': '333333',
'screen_name': 'BarackObama',
'statuses_count': 15436,
'time_zone': 'Eastern Time (US & Canada)',
'url': 'https://t.co/93Y27HEnnX',
'utc_offset': -18000,
'verified': True},
'user_mentions': [{'id': 409486555,
'name': 'Michelle Obama',
'screen_name': 'MichelleObama'}]}
###Output
_____no_output_____
###Markdown
1. What are the top-most keys in the `obama_tweet` object?2. When was this tweet sent?3. Does this tweet mention anyone?4. How many retweets did this tweet receive (at the time I collected it)?5. How many followers does the "user" who wrote this tweet have?6. What's the "media_url" for the image in this tweet?
###Code
obama_tweet.keys()
obama_tweet['created_at']
obama_tweet['user_mentions']
obama_tweet['retweet_count']
obama_tweet['user']['followers_count']
obama_tweet['media'][0]['media_url']
###Output
_____no_output_____
###Markdown
Web Data Scraping AcknowledgementsThese notebooks are adaptations from a 5 session mini course at the University of Colorado. The github repo can be found [here](https://github.com/CU-ITSS/Web-Data-Scraping-S2019) [Spring 2019 ITSS Mini-Course] The course is taught by [Brian C. Keegan, Ph.D.](http://brianckeegan.com/) [Assistant Professor, Department of Information Science](https://www.colorado.edu/cmci/people/information-science/brian-c-keegan). They have been adapted for relevant content and integration with Docker so that we all have the same environment. Professor Keegan suggests using a most recent version of Python (3.7) which is set in the `requirements.txt` file.The Spring ITSS Mini-Course was adapted from a number of sources including [Allison Morgan](https://allisonmorgan.github.io/) for the [2018 Summer Institute for Computational Social Science](https://github.com/allisonmorgan/sicss_boulder), which were in turn derived from [other resources](https://github.com/simonmunzert/web-scraping-with-r-extended-edition) developed by [Simon Munzert](http://simonmunzert.github.io/) and [Chris Bail](http://www.chrisbail.net/). This notebook is adapted from excellent notebooks in Dr. [Cody Buntain](http://cody.bunta.in/)'s seminar on [Social Media and Crisis Informatics](http://cody.bunta.in/teaching/2018_winter_umd_inst728e/) as well as the [PRAW documentation](https://praw.readthedocs.io/en/latest/). Introduction to Jupyter NotebooksThis is an example of a code cell below. You type the code into the cell and run the cell with the "Run" button in the toolbar or pressing Shift+Enter.
###Code
name = 'Nate Langholz'
print(name)
2+2
###Output
_____no_output_____
###Markdown
Forms of structured dataThere are three primary forms of structured data you will encounter on the web: HTML, XML, and JSON. HTMLWe can use Python's `requests` library to make a valid HTTP "get" request to the Oscars' web server for the 90th Academy Awards, which will return the raw HTML. There are more than 144,000 characters in the document!
###Code
import requests
# Pretend to be a web browser and make a get request of a webpage
oscars90_request = requests.get('https://www.oscars.org/oscars/ceremonies/2018')
# The .text returns the text from the request
oscars90_html = oscars90_request.text
# The oscars90_html is a string, we can use the common len function to ask how long the string is (in characters)
len(oscars90_html)
###Output
_____no_output_____
###Markdown
Let's look at the first thousand characters. Mostly declarations to handle Internet Explorer's notorious refusal to follow standards—stuff you don't need to worry about.
###Code
# The [0:1000] is a slicing notation
# It gets the first (position 0 in Python) character until the 1000th character
print(oscars90_html[0:1000])
###Output
<!DOCTYPE html>
<!--[if IEMobile 7]><html class="no-js ie iem7" lang="en" dir="ltr"><![endif]-->
<!--[if lte IE 6]><html class="no-js ie lt-ie9 lt-ie8 lt-ie7" lang="en" dir="ltr"><![endif]-->
<!--[if (IE 7)&(!IEMobile)]><html class="no-js ie lt-ie9 lt-ie8" lang="en" dir="ltr"><![endif]-->
<!--[if IE 8]><html class="no-js ie lt-ie9" lang="en" dir="ltr"><![endif]-->
<!--[if (gte IE 9)|(gt IEMobile 7)]><html class="no-js ie" lang="en" dir="ltr" prefix="og: http://ogp.me/ns# article: http://ogp.me/ns/article# book: http://ogp.me/ns/book# profile: http://ogp.me/ns/profile# video: http://ogp.me/ns/video# product: http://ogp.me/ns/product#"><![endif]-->
<!--[if !IE]><!--><html class="no-js" lang="en" dir="ltr" prefix="og: http://ogp.me/ns# article: http://ogp.me/ns/article# book: http://ogp.me/ns/book# profile: http://ogp.me/ns/profile# video: http://ogp.me/ns/video# product: http://ogp.me/ns/product#"><!--<![endif]-->
<head>
<meta charset="utf-8" /><script type="text/javascript
###Markdown
Looking at 1,000 characters about a third of the way through the document, we can see some of the structure we found with the "Inspect" tool above corresponding to the closing lines of the "Actor in a Supporting Role" grouping and the opening lines of the "Actress in a Leading Role" grouping.
###Code
# You can slice any ranges you'd like up, as long as it's not beyond the length of the string
# oscars90_html[144588:] would return an error
print(oscars90_html[50000:51000])
###Output
esc field--type-text-long field--label-hidden ellipsis"><div class="field-items"><div class="field-item even">Lee Unkrich and Darla K. Anderson</div></div></div> </div>
</div>
</div></li>
</ul></div> </div>
<div class="views-field views-field-field-more-highlights"> <div class="field-content"><a href="https://www.oscars.org/oscars/ceremonies/90th-oscar-highlights" class="btn-link">View More Highlights</a></div> </div>
<div class="views-field views-field-field-memorable-moments"> <span class="views-label views-label-field-memorable-moments">Memorable Moments</span> <div class="field-content"><ul><li><div class="field-collection-view clearfix view-mode-full"><div class="entity entity-field-collection-item field-collection-item-field-memorable-moments clearfix" class="entity entity-field-collection-item field-collection-item-field-memorable-moments">
<div class="content">
<div class="field field--name-field-ceremonies-media field--type-image field--label-hidd
###Markdown
We're not actually going to be slicing the text to get this structured data out; we'll use a wonderful tool called [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) to do the heavy lifting for us. XMLXML has taken on something of an afterlife as the official data standard for the U.S. Congress. The [House](http://clerk.house.gov/index.aspx) and [Senate](https://www.senate.gov/general/XML.htm) both release information about members, committees, schedules, legislation, and votes in XML. These are immaculately formatted and documented and remarkably up-to-date: the data for members of the 116th Congress are already posted. Use the `requests` library to make an HTTP GET request to the House's webserver and get the list of current member data.
###Code
house_raw = requests.get('http://clerk.house.gov/xml/lists/MemberData.xml').text
senate_raw = requests.get('https://www.senate.gov/legislative/LIS_MEMBER/cvc_member_data.xml').text
###Output
_____no_output_____
###Markdown
This data is still in a string format (`type(house_raw)`), so it's difficult to search and navigate. Let's make our first soup together using [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
###Code
# First import the library
from bs4 import BeautifulSoup
# Then make the soup, specifying the "lxml" parser
house_soup = BeautifulSoup(house_raw,'lxml')
###Output
_____no_output_____
###Markdown
What's so great about this soup-ified string? We now have a suite of new functions and methods that let us navigate the tree. First, let's inspect the different tags/elements in this tree of House member data. This is the full tree of data.
###Code
# Make an empty list to store data
children = []
# Start a loop to go through all the children tags in house_soup
for tag in house_soup.findChildren():
# If the name of the tag (tag.name) is not already in the children list
if tag.name not in children:
# Add the name of the tag to the children list
children.append(tag.name)
# Look at the list members
children
###Output
_____no_output_____
###Markdown
We can navigate through the tree. You won't do this in practice, but it's helpful for debugging. In this case, we navigated from the root node (`html`) into the `body` tag, then the `memberdata` tag, then the `members` tag. There are 441 descendants at this level, corresponding to the 435 voting seats and the 6 non-voting seats.
###Code
len(house_soup.html.body.memberdata.members)
###Output
_____no_output_____
###Markdown
You can also short-cut to the members tag directly rather than navigating down the parent elements.
###Code
len(house_soup.members)
###Output
_____no_output_____
###Markdown
The `.contents` method is great for getting a list of the children below the tag as a list. We can use the `[0]` slice to get the first member and their data in the list. Interestingly, the `` tags are currently empty since these have not yet been allocated, but will in the next few weeks.
###Code
house_soup.members.contents[0]
###Output
_____no_output_____
###Markdown
You could keep navigating down the tree from here.
###Code
house_soup.members.contents[0].bioguideid
###Output
_____no_output_____
###Markdown
Note that this navigation method breaks when the tag has a hyphen in it.
###Code
house_soup.members.contents[0].state-fullname
###Output
_____no_output_____
###Markdown
Instead you can use the `.find()` method to handle these hyphenated cases.
###Code
house_soup.members.contents[0].find('state-fullname')
###Output
_____no_output_____
###Markdown
We can access the text inside the tag with `.text`
###Code
house_soup.members.contents[0].find('state-fullname').text
###Output
_____no_output_____
###Markdown
The `.find_all()` method will be your primary tool when working with structured data. The `<party>` tag codes party membership (D=Democratic, R=Republican) for each representative.
###Code
house_soup.find_all('party')[:10]
###Output
_____no_output_____
###Markdown
There [should be](https://en.wikipedia.org/wiki/116th_United_States_Congress#Party_summary) 235 Democrats and 199 Republicans, plus the other non-voting members from the territories.
###Code
# Initialize a counter
democrats = 0
republicans = 0
other = 0
# Loop through each element of the caucus tags
for p in house_soup.find_all('party'):
# Check if it's D, R, or something else
if p.text == "D":
# Increment the appropriate counter
democrats += 1
elif p.text == "R":
republicans += 1
else:
other += 1
print("There are {0} Democrats, {1} Republicans, and {2} others in the 116th Congress.".format(democrats,republicans,other))
###Output
There are 239 Democrats, 199 Republicans, and 3 others in the 116th Congress.
###Markdown
JSONJSON is attractive for programmers using JavaScript and Python because it can represent a mix of different data types. We need to make a brief digression into Python's fundamental data structures in order to understand the contemporary attraction to JSON. Python has a few fundamental data types for representing collections of information:* **Lists**: This is a basic ordered data structure that can contain strings, ints, and floats.* **Dictionaries**: This is an unordered data structure containing key-value pairs, like a phonebook.Let's look at some examples of lists and dictionaries and then we can try the exercises below. ExercisesBelow is an example of a [tweet status](https://dev.twitter.com/overview/api/tweets) object that Twitter's [API returns](https://dev.twitter.com/rest/reference/get/statuses/show/id). This `obama_tweet` dictionary corresponds to [this tweet](https://twitter.com/BarackObama/status/831527113211645959). This is a classic example of a JSON object containing a mixture of dictionaries, lists, lists of dictionaries, dictionaries of lists, *etc*.
###Code
obama_tweet = {'created_at': 'Tue Feb 14 15:34:47 +0000 2017',
'favorite_count': 1023379,
'hashtags': [],
'id': 831527113211645959,
'id_str': '831527113211645959',
'lang': 'en',
'media': [{'display_url': 'pic.twitter.com/O0UhJWoqGN',
'expanded_url': 'https://twitter.com/BarackObama/status/831527113211645959/photo/1',
'id': 831526916398149634,
'media_url': 'http://pbs.twimg.com/media/C4otUykWcAIbSy1.jpg',
'media_url_https': 'https://pbs.twimg.com/media/C4otUykWcAIbSy1.jpg',
'sizes': {'large': {'h': 800, 'resize': 'fit', 'w': 1200},
'medium': {'h': 800, 'resize': 'fit', 'w': 1200},
'small': {'h': 453, 'resize': 'fit', 'w': 680},
'thumb': {'h': 150, 'resize': 'crop', 'w': 150}},
'type': 'photo',
'url': 'https://t.co/O0UhJWoqGN'}],
'retweet_count': 252266,
'source': '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>',
'text': 'Happy Valentine’s Day, @michelleobama! Almost 28 years with you, but it always feels new. https://t.co/O0UhJWoqGN',
'urls': [],
'user': {'created_at': 'Mon Mar 05 22:08:25 +0000 2007',
'description': 'Dad, husband, President, citizen.',
'favourites_count': 10,
'followers_count': 84814791,
'following': True,
'friends_count': 631357,
'id': 813286,
'lang': 'en',
'listed_count': 221906,
'location': 'Washington, DC',
'name': 'Barack Obama',
'profile_background_color': '77B0DC',
'profile_background_image_url': 'http://pbs.twimg.com/profile_background_images/451819093436268544/kLbRvwBg.png',
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/813286/1484945688',
'profile_image_url': 'http://pbs.twimg.com/profile_images/822547732376207360/5g0FC8XX_normal.jpg',
'profile_link_color': '2574AD',
'profile_sidebar_fill_color': 'C2E0F6',
'profile_text_color': '333333',
'screen_name': 'BarackObama',
'statuses_count': 15436,
'time_zone': 'Eastern Time (US & Canada)',
'url': 'https://t.co/93Y27HEnnX',
'utc_offset': -18000,
'verified': True},
'user_mentions': [{'id': 409486555,
'name': 'Michelle Obama',
'screen_name': 'MichelleObama'}]}
###Output
_____no_output_____
###Markdown
1. What are the top-most keys in the `obama_tweet` object?2. When was this tweet sent?3. Does this tweet mention anyone?4. How many retweets did this tweet receive (at the time I collected it)?5. How many followers does the "user" who wrote this tweet have?6. What's the "media_url" for the image in this tweet?
###Code
obama_tweet.keys()
obama_tweet['created_at']
obama_tweet['user_mentions'][0]['name']
obama_tweet['retweet_count']
obama_tweet['user']['followers_count']
obama_tweet['media'][0]['media_url']
###Output
_____no_output_____ |
rl_dynamic_programming/Dynamic_Programming-yw.ipynb | ###Markdown
Mini Project: Dynamic ProgrammingIn this notebook, you will write your own implementations of many classical dynamic programming algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore FrozenLakeEnvUse the code cell below to create an instance of the [FrozenLake](https://github.com/openai/gym/blob/master/gym/envs/toy_text/frozen_lake.py) environment.
###Code
!pip install -q matplotlib==2.2.2
from frozenlake import FrozenLakeEnv
# with is_slippery=False, it simulates a deterministic MDP environment.
# env = FrozenLakeEnv(is_slippery=False)
# with is_slippery=True, it simulates a stochastic MDP environment.
env = FrozenLakeEnv(is_slippery=True)
###Output
You are using pip version 10.0.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
###Markdown
The agent moves through a $4 \times 4$ gridworld, with states numbered as follows:```[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]]```and the agent has 4 potential actions:```LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3```Thus, $\mathcal{S}^+ = \{0, 1, \ldots, 15\}$, and $\mathcal{A} = \{0, 1, 2, 3\}$. Verify this by running the code cell below.
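As a quick orientation to this numbering (a small sketch, not part of the original starter code; `to_state` is a hypothetical helper introduced only for illustration), the state index of the cell at `row`, `col` is `4*row + col`:

```python
# Hypothetical helper: map (row, col) in the 4x4 grid to the FrozenLake state index
def to_state(row, col, ncols=4):
    return ncols * row + col

print(to_state(0, 0))   # 0  -> the start state in the top-left corner
print(to_state(3, 3))   # 15 -> the goal state in the bottom-right corner
```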
###Code
# print the state space and action space
print(env.observation_space)
print(env.action_space)
# print the total number of states and actions
print(env.nS)
print(env.nA)
###Output
Discrete(16)
Discrete(4)
16
4
###Markdown
Dynamic programming assumes that the agent has full knowledge of the MDP. We have already amended the `frozenlake.py` file to make the one-step dynamics accessible to the agent. Execute the code cell below to return the one-step dynamics corresponding to a particular state and action. In particular, `env.P[1][0]` returns the probability of each possible reward and next state if the agent is in state 1 of the gridworld and decides to go left.
###Code
env.P[1][0]
###Output
_____no_output_____
###Markdown
Each entry takes the form ```prob, next_state, reward, done```where: - `prob` details the conditional probability of the corresponding (`next_state`, `reward`) pair, and- `done` is `True` if the `next_state` is a terminal state, and otherwise `False`.Thus, we can interpret `env.P[1][0]` as follows:$$\mathbb{P}(S_{t+1}=s',R_{t+1}=r|S_t=1,A_t=0) = \begin{cases} \frac{1}{3} \text{ if } s'=1, r=0\\ \frac{1}{3} \text{ if } s'=0, r=0\\ \frac{1}{3} \text{ if } s'=5, r=0\\ 0 \text{ else} \end{cases}$$To understand the value of `env.P[1][0]`, note that when you create a FrozenLake environment, it takes as an (optional) argument `is_slippery`, which defaults to `True`. To see this, change the first line in the notebook from `env = FrozenLakeEnv()` to `env = FrozenLakeEnv(is_slippery=False)`. Then, when you check `env.P[1][0]`, it should look like what you expect (i.e., `env.P[1][0] = [(1.0, 0, 0.0, False)]`).The default value for the `is_slippery` argument is `True`, and so `env = FrozenLakeEnv()` is equivalent to `env = FrozenLakeEnv(is_slippery=True)`. In the event that `is_slippery=True`, you see that this can result in the agent moving in a direction that it did not intend (where the idea is that the ground is *slippery*, and so the agent can slide to a location other than the one it wanted).Feel free to change the code cell above to explore how the environment behaves in response to other (state, action) pairs. Before proceeding to the next part, make sure that you set `is_slippery=True`, so that your implementations below will work with the slippery environment! Part 1: Iterative Policy EvaluationIn this section, you will write your own implementation of iterative policy evaluation.Your algorithm should accept four arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).- `theta`: This is a very small positive number that is used to decide if the estimate has sufficiently converged to the true value function (default value: `1e-8`).The algorithm returns as **output**:- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s` under the input policy.Please complete the function in the code cell below.
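Concretely, each sweep of the evaluation loop applies the Bellman expectation backup to every state,

$$V(s) \leftarrow \sum_{a \in \mathcal{A}(s)} \pi(a|s) \sum_{s', r} p(s',r|s,a)\big(r + \gamma V(s')\big),$$

and the sweeps stop once the largest change in any state's value falls below `theta`.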
###Code
import numpy as np
def policy_evaluation(env, policy, gamma=1, theta=1e-8):
    # Initialize the estimated state-value function for all states
V = np.zeros(env.nS)
## TODO: complete the function
while True:
# delta = np.float64(0.0)
delta = 0
for s in range(env.nS):
old_estimate = V[s]
# V[s] = np.sum(
# np.array(
# [ policy[s][a]*prob*(r + gamma*V[s_]) for a in range(4) for prob, s_, r, _ in env.P[s][a] ]
# )
# )
V[s] = np.sum(
np.array(
[ a_prob * prob * (r + gamma*V[s_]) for a, a_prob in enumerate(policy[s]) for prob, s_, r, done in env.P[s][a] ]
)
)
# delta = np.maximum(delta, np.abs(old_estimate - V[s]))
delta = max(delta, np.abs(old_estimate - V[s]))
if (delta < theta):
            break
return V
###Output
_____no_output_____
###Markdown
We will evaluate the equiprobable random policy $\pi$, where $\pi(a|s) = \frac{1}{|\mathcal{A}(s)|}$ for all $s\in\mathcal{S}$ and $a\in\mathcal{A}(s)$. Use the code cell below to specify this policy in the variable `random_policy`.
###Code
random_policy = np.ones([env.nS, env.nA]) / env.nA
###Output
_____no_output_____
###Markdown
Run the next code cell to evaluate the equiprobable random policy and visualize the output. The state-value function has been reshaped to match the shape of the gridworld.
###Code
from plot_utils import plot_values
# evaluate the policy
V = policy_evaluation(env, random_policy)
plot_values(V)
###Output
_____no_output_____
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that your `policy_evaluation` function satisfies the requirements outlined above (with four inputs, a single output, and with the default values of the input arguments unchanged).
###Code
import check_test
check_test.run_check('policy_evaluation_check', policy_evaluation)
###Output
_____no_output_____
###Markdown
Part 2: Obtain $q_\pi$ from $v_\pi$In this section, you will write a function that takes the state-value function estimate as input, along with some state $s\in\mathcal{S}$. It returns the **row in the action-value function** corresponding to the input state $s\in\mathcal{S}$. That is, your function should accept as input both $v_\pi$ and $s$, and return $q_\pi(s,a)$ for all $a\in\mathcal{A}(s)$.Your algorithm should accept four arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.- `s`: This is an integer corresponding to a state in the environment. It should be a value between `0` and `(env.nS)-1`, inclusive.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as **output**:- `q`: This is a 1D numpy array with `q.shape[0]` equal to the number of actions (`env.nA`). `q[a]` contains the (estimated) value of state `s` and action `a`.Please complete the function in the code cell below.
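In other words, for the given state $s$ the function should return, for each action $a \in \mathcal{A}(s)$,

$$q_\pi(s,a) = \sum_{s', r} p(s',r|s,a)\big(r + \gamma v_\pi(s')\big),$$

where the `(prob, next_state, reward, done)` entries of `env.P[s][a]` supply the one-step dynamics $p(s',r|s,a)$.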
###Code
def q_from_v(env, V, s, gamma=1):
# Default initialization of the estimated action-value for state s
q = np.zeros(env.nA)
## TODO: complete the function
for a in range(env.nA):
# s_ next state
for prob, s_, r, done in env.P[s][a]:
q[a] += prob* (r + gamma * V[s_])
return q
###Output
_____no_output_____
###Markdown
Run the code cell below to print the action-value function corresponding to the above state-value function.
###Code
Q = np.zeros([env.nS, env.nA])
for s in range(env.nS):
Q[s] = q_from_v(env, V, s)
print("Action-Value Function:")
print(Q)
###Output
Action-Value Function:
[[0.0147094 0.01393978 0.01393978 0.01317015]
[0.00852356 0.01163091 0.0108613 0.01550788]
[0.02444514 0.02095298 0.02406033 0.01435346]
[0.01047649 0.01047649 0.00698432 0.01396865]
[0.02166487 0.01701828 0.01624865 0.01006281]
[0. 0. 0. 0. ]
[0.05433538 0.04735105 0.05433538 0.00698432]
[0. 0. 0. 0. ]
[0.01701828 0.04099204 0.03480619 0.04640826]
[0.07020885 0.11755991 0.10595784 0.05895312]
[0.18940421 0.17582037 0.16001424 0.04297382]
[0. 0. 0. 0. ]
[0. 0. 0. 0. ]
[0.08799677 0.20503718 0.23442716 0.17582037]
[0.25238823 0.53837051 0.52711478 0.43929118]
[0. 0. 0. 0. ]]
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that the `q_from_v` function satisfies the requirements outlined above (with four inputs, a single output, and with the default values of the input arguments unchanged).
###Code
check_test.run_check('q_from_v_check', q_from_v)
###Output
_____no_output_____
###Markdown
Part 3: Policy ImprovementIn this section, you will write your own implementation of policy improvement. Your algorithm should accept three arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as **output**:- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.Please complete the function in the code cell below. You are encouraged to use the `q_from_v` function you implemented above.
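The improvement step constructs a policy that is greedy with respect to $q_\pi$. When several actions tie for the maximum, the probability can either be placed entirely on one best action (a deterministic policy) or split evenly across the tying actions (a stochastic policy), as in

$$\pi'(a|s) = \begin{cases} \frac{1}{|\mathcal{A}^*(s)|} & \text{if } a \in \mathcal{A}^*(s) = \arg\max_{a'} q_\pi(s,a') \\ 0 & \text{otherwise.} \end{cases}$$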
###Code
def policy_improvement(env, V, gamma=1):
    policy = np.zeros([env.nS, env.nA]) / env.nA
    # note: an equiprobable random policy would use np.ones instead of np.zeros,
    # but every row of `policy` is overwritten below, so the initialization does not matter.
    # policy = np.ones([env.nS, env.nA]) / env.nA
for s in range(env.nS):
q = q_from_v(env, V, s, gamma=gamma)
        # option 1: a deterministic policy
        # policy[s][np.argmax(q)] = 1
        # (if several actions tie for the maximum value, np.argmax returns only the
        # index of the first occurrence)
        # option 2: a stochastic policy over all tying best actions
        # q == np.max(q) gives a boolean array marking the actions with maximal value,
        # np.argwhere returns the indices of those actions, and
        # flatten() turns the result into a 1-D array of action indices
        best_a = np.argwhere(q==np.max(q)).flatten()
        # equivalent version with explicit for loops:
        # policy[s] = np.zeros(env.nA)   # reset to zero
        # for a in best_a:
        #     policy[s][a] = 1/len(best_a)
        # np.eye(env.nA) is the identity matrix, so np.eye(env.nA)[i] is a one-hot vector for action i;
        # summing the one-hot vectors of the best actions (e.g. np.sum([[0,1,0],[1,0,0]], axis=0) -> [1,1,0])
        # and dividing by len(best_a) spreads the probability equally over the tying actions.
policy[s] = np.sum([np.eye(env.nA)[i] for i in best_a], axis=0)/len(best_a)
## TODO: complete the function
return policy
###Output
_____no_output_____
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that the `policy_improvement` function satisfies the requirements outlined above (with three inputs, a single output, and with the default values of the input arguments unchanged).Before moving on to the next part of the notebook, you are strongly encouraged to check out the solution in **Dynamic_Programming_Solution.ipynb**. There are many correct ways to approach this function!
###Code
check_test.run_check('policy_improvement_check', policy_improvement)
###Output
_____no_output_____
###Markdown
Part 4: Policy IterationIn this section, you will write your own implementation of policy iteration. The algorithm returns the optimal policy, along with its corresponding state-value function.Your algorithm should accept three arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).- `theta`: This is a very small positive number that is used to decide if the policy evaluation step has sufficiently converged to the true value function (default value: `1e-8`).The algorithm returns as **output**:- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.Please complete the function in the code cell below. You are strongly encouraged to use the `policy_evaluation` and `policy_improvement` functions you implemented above.
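Schematically, policy iteration alternates full policy evaluation (E) with greedy policy improvement (I) until the policy (equivalently, its value estimate) stops changing:

$$\pi_0 \xrightarrow{\text{E}} v_{\pi_0} \xrightarrow{\text{I}} \pi_1 \xrightarrow{\text{E}} v_{\pi_1} \xrightarrow{\text{I}} \pi_2 \xrightarrow{\text{E}} \cdots \xrightarrow{\text{I}} \pi_* \xrightarrow{\text{E}} v_{*}.$$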
###Code
import copy
def policy_iteration(env, gamma=1, theta=1e-8):
    policy = np.ones([env.nS, env.nA]) / env.nA
    while True:
        # evaluate the current policy, then improve it greedily
        V_old = policy_evaluation(env, policy, gamma=gamma, theta=theta)
        policy_old = copy.copy(policy)
        policy = policy_improvement(env, V_old, gamma=gamma)
        V = policy_evaluation(env, policy, gamma=gamma, theta=theta)
        # The improved policy is at least as good as the previous one,
        # so V - V_old is non-negative and no np.abs() is needed.
        # OPTION 1: stop once the value estimates of successive policies have converged
        delta = np.max(V - V_old)
        if delta <= theta*1e2:
            break
        # OPTION 2: stop if the policy is unchanged after an improvement step
        # (.all() tests whether every element of the comparison is True)
        # if (policy == policy_old).all():
        #     break
    return policy, V
###Output
_____no_output_____
###Markdown
Run the next code cell to solve the MDP and visualize the output. The optimal state-value function has been reshaped to match the shape of the gridworld.

**Compare the optimal state-value function to the state-value function from Part 1 of this notebook.** _Is the optimal state-value function consistently greater than or equal to the state-value function for the equiprobable random policy?_

Yes: every state's optimal value is at least as large as its value under the equiprobable random policy.
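One quick way to check this claim after running the next cell (a small sketch; it assumes `V` from Part 1 is still in memory) is to compare the two value functions elementwise:

```
# V: equiprobable random policy (Part 1); V_pi: optimal policy (next cell)
print(np.all(V_pi >= V - 1e-8))   # expected: True, up to numerical tolerance
print(np.round(V_pi - V, 3))      # per-state improvement
```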
###Code
# obtain the optimal policy and optimal state-value function
policy_pi, V_pi = policy_iteration(env)
# print the optimal policy
print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):")
print(policy_pi,"\n")
plot_values(V_pi)
###Output
Optimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):
[[1. 0. 0. 0. ]
[0. 0. 0. 1. ]
[0. 0. 0. 1. ]
[0. 0. 0. 1. ]
[1. 0. 0. 0. ]
[0.25 0.25 0.25 0.25]
[0.5 0. 0.5 0. ]
[0.25 0.25 0.25 0.25]
[0. 0. 0. 1. ]
[0. 1. 0. 0. ]
[1. 0. 0. 0. ]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0. 0. 1. 0. ]
[0. 1. 0. 0. ]
[0.25 0.25 0.25 0.25]]
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that the `policy_iteration` function satisfies the requirements outlined above (with three inputs, two outputs, and with the default values of the input arguments unchanged).
###Code
check_test.run_check('policy_iteration_check', policy_iteration)
###Output
_____no_output_____
###Markdown
Part 5: Truncated Policy IterationIn this section, you will write your own implementation of truncated policy iteration. You will begin by implementing truncated policy evaluation. Your algorithm should accept five arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.- `max_it`: This is a positive integer that corresponds to the number of sweeps through the state space (default value: `1`).- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as **output**:- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.Please complete the function in the code cell below.
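The update applied in each of the `max_it` sweeps is the same Bellman expectation backup as in Part 1, except that it starts from the supplied `V` and performs a fixed number of sweeps instead of iterating until convergence:

$$V(s) \leftarrow \sum_{a \in \mathcal{A}(s)} \pi(a|s) \sum_{s', r} p(s',r|s,a)\big(r + \gamma V(s')\big).$$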
###Code
def truncated_policy_evaluation(env, policy, V, max_it=1, gamma=1):
## TODO: complete the function
count = 0
while count < max_it:
for s in range(env.nS):
v = 0
q = q_from_v(env, V, s, gamma)
for a, a_prob in enumerate(policy[s]):
v += a_prob * q[a]
V[s] = v
# Option: list comprehension
# V[s] = np.sum(
# [ a_prob * prob * (r + gamma*V[s_]) for a, a_prob in enumerate(policy[s]) for prob, s_, r, done in env.P[s][a]]
# )
count += 1
return V
###Output
_____no_output_____
###Markdown
Next, you will implement truncated policy iteration. Your algorithm should accept five arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `max_it`: This is a positive integer that corresponds to the number of sweeps through the state space (default value: `1`).- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).- `theta`: This is a very small positive number that is used for the stopping criterion (default value: `1e-8`).The algorithm returns as **output**:- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.Please complete the function in the code cell below.
###Code
def truncated_policy_iteration(env, max_it=1, gamma=1, theta=1e-8):
V = np.zeros(env.nS)
policy = np.zeros([env.nS, env.nA]) / env.nA
## TODO: complete the function
while True:
policy = policy_improvement(env, V, gamma=gamma)
V_old = copy.copy(V)
V = truncated_policy_evaluation(env, policy, V, max_it=max_it, gamma=gamma)
if np.max(V-V_old) < theta:
break
return policy, V
###Output
_____no_output_____
###Markdown
Run the next code cell to solve the MDP and visualize the output. The state-value function has been reshaped to match the shape of the gridworld.

Play with the value of the `max_it` argument. Do you always end with the optimal state-value function?
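One way to explore this (a small sketch; it reuses `V_pi` from Part 4 as the reference, so run that section first) is to sweep a few values of `max_it` and measure how far each result is from the optimal values:

```
for k in [1, 2, 5, 10]:
    _, V_k = truncated_policy_iteration(env, max_it=k)
    print(f"max_it={k}: max |V_k - V_pi| = {np.max(np.abs(V_k - V_pi)):.6f}")
```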
###Code
policy_tpi, V_tpi = truncated_policy_iteration(env, max_it=2)
# print the optimal policy
print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):")
print(policy_tpi,"\n")
# plot the optimal state-value function
plot_values(V_tpi)
###Output
Optimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):
[[1. 0. 0. 0. ]
[0. 0. 0. 1. ]
[0. 0. 0. 1. ]
[0. 0. 0. 1. ]
[1. 0. 0. 0. ]
[0.25 0.25 0.25 0.25]
[0.5 0. 0.5 0. ]
[0.25 0.25 0.25 0.25]
[0. 0. 0. 1. ]
[0. 1. 0. 0. ]
[1. 0. 0. 0. ]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0. 0. 1. 0. ]
[0. 1. 0. 0. ]
[0.25 0.25 0.25 0.25]]
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that the `truncated_policy_iteration` function satisfies the requirements outlined above (with four inputs, two outputs, and with the default values of the input arguments unchanged).
###Code
check_test.run_check('truncated_policy_iteration_check', truncated_policy_iteration)
###Output
_____no_output_____
###Markdown
Part 6: Value IterationIn this section, you will write your own implementation of value iteration.Your algorithm should accept three arguments as input:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).- `theta`: This is a very small positive number that is used for the stopping criterion (default value: `1e-8`).The algorithm returns as **output**:- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.
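Each sweep of value iteration applies the Bellman optimality backup to every state,

$$V(s) \leftarrow \max_{a \in \mathcal{A}(s)} \sum_{s', r} p(s',r|s,a)\big(r + \gamma V(s')\big),$$

and once the values have converged a greedy policy is extracted with `policy_improvement`.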
###Code
def value_iteration(env, gamma=1, theta=1e-8):
V = np.zeros(env.nS)
## TODO: complete the function
while True:
delta = 0
for s in range(env.nS):
v_old = V[s]
V[s] = max(q_from_v(env, V, s, gamma=gamma))
delta = max(delta, abs(v_old - V[s]))
if delta < theta:
break
policy = policy_improvement(env, V, gamma=gamma)
return policy, V
###Output
_____no_output_____
###Markdown
Use the next code cell to solve the MDP and visualize the output. The state-value function has been reshaped to match the shape of the gridworld.
###Code
policy_vi, V_vi = value_iteration(env)
# print the optimal policy
print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):")
print(policy_vi,"\n")
# plot the optimal state-value function
plot_values(V_vi)
###Output
Optimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):
[[1. 0. 0. 0. ]
[0. 0. 0. 1. ]
[0. 0. 0. 1. ]
[0. 0. 0. 1. ]
[1. 0. 0. 0. ]
[0.25 0.25 0.25 0.25]
[0.5 0. 0.5 0. ]
[0.25 0.25 0.25 0.25]
[0. 0. 0. 1. ]
[0. 1. 0. 0. ]
[1. 0. 0. 0. ]
[0.25 0.25 0.25 0.25]
[0.25 0.25 0.25 0.25]
[0. 0. 1. 0. ]
[0. 1. 0. 0. ]
[0.25 0.25 0.25 0.25]]
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that the `value_iteration` function satisfies the requirements outlined above (with three inputs, two outputs, and with the default values of the input arguments unchanged).
###Code
check_test.run_check('value_iteration_check', value_iteration)
###Output
_____no_output_____ |
tutorials/tts/FastPitch_Finetuning.ipynb | ###Markdown
Finetuning FastPitch for a new speakerIn this tutorial, we will finetune a single speaker FastPitch (with alignment) model on 5 mins of a new speaker's data. We will finetune the model parameters only on the new speaker's text and speech pairs (though see the section at the end to learn more about mixing speaker data).We will download the training data, then generate and run a training command to finetune Fastpitch on 5 mins of data, and synthesize the audio from the trained checkpoint.A final section will describe approaches to improve audio quality past this notebook. License> Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.>> Licensed under the Apache License, Version 2.0 (the "License");> you may not use this file except in compliance with the License.> You may obtain a copy of the License at>> http://www.apache.org/licenses/LICENSE-2.0>> Unless required by applicable law or agreed to in writing, software> distributed under the License is distributed on an "AS IS" BASIS,> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.> See the License for the specific language governing permissions and> limitations under the License.
###Code
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'main'
# # If you're using Google Colab and not running locally, uncomment and run this cell.
# !apt-get install sox libsndfile1 ffmpeg
# !pip install wget unidecode pynini==2.1.4
# !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
###Output
_____no_output_____
###Markdown
Downloading data For our tutorial, we will use a small part of the Hi-Fi Multi-Speaker English TTS (Hi-Fi TTS) dataset. You can read more about dataset [here](https://arxiv.org/abs/2104.01497). We will use speaker 6097 as the target speaker, and only a 5-minute subset of audio will be used for this fine-tuning example. We additionally resample audio to 22050 kHz.
###Code
!wget https://nemo-public.s3.us-east-2.amazonaws.com/6097_5_mins.tar.gz # Contains 10MB of data
!tar -xzf 6097_5_mins.tar.gz
###Output
_____no_output_____
###Markdown
Looking at `manifest.json`, we see a standard NeMo json that contains the filepath, text, and duration. Please note that our `manifest.json` contains the relative path.Let's make sure that the entries look something like this:```{"audio_filepath": "audio/presentpictureofnsw_02_mann_0532.wav", "text": "not to stop more than ten minutes by the way", "duration": 2.6, "text_no_preprocessing": "not to stop more than ten minutes by the way,", "text_normalized": "not to stop more than ten minutes by the way,"}```
###Code
!head -n 1 ./6097_5_mins/manifest.json
###Output
_____no_output_____
###Markdown
Let's take 2 samples from the dataset and split it off into a validation set. Then, split all other samples into the training set.As mentioned, since the paths in the manifest are relative, we also create a symbolic link to the audio folder such that `audio/` goes to the correct directory.
###Code
!cat ./6097_5_mins/manifest.json | tail -n 2 > ./6097_manifest_dev_ns_all_local.json
!cat ./6097_5_mins/manifest.json | head -n -2 > ./6097_manifest_train_dur_5_mins_local.json
!ln -s ./6097_5_mins/audio audio
###Output
_____no_output_____
###Markdown
Let's also download the pretrained checkpoint that we want to finetune from. NeMo will save checkpoints to `~/.cache`, so let's move that to our current directory. *Note: please, check that `home_path` refers to your home folder. Otherwise, change it manually.*
###Code
home_path = !(echo $HOME)
home_path = home_path[0]
print(home_path)
import os
import json
import torch
import IPython.display as ipd
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
from nemo.collections.tts.models import FastPitchModel
FastPitchModel.from_pretrained("tts_en_fastpitch")
from pathlib import Path
nemo_files = [p for p in Path(f"{home_path}/.cache/torch/NeMo/").glob("**/tts_en_fastpitch_align.nemo")]
print(f"Copying {nemo_files[0]} to ./")
Path("./tts_en_fastpitch_align.nemo").write_bytes(nemo_files[0].read_bytes())
###Output
_____no_output_____
###Markdown
To finetune the FastPitch model on the above created filelists, we use the `examples/tts/fastpitch_finetune.py` script to train the models with the `fastpitch_align_v1.05.yaml` configuration.Let's grab those files.
###Code
!wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/fastpitch_finetune.py
!mkdir -p conf \
&& cd conf \
&& wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/conf/fastpitch_align_v1.05.yaml \
&& cd ..
###Output
_____no_output_____
###Markdown
We also need some additional files (see `FastPitch_MixerTTS_Training.ipynb` tutorial for more details) for training. Let's download these, too.
###Code
# additional files
!mkdir -p tts_dataset_files && cd tts_dataset_files \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/cmudict-0.7b_nv22.01 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/heteronyms-030921 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/nemo_text_processing/text_normalization/en/data/whitelist/lj_speech.tsv \
&& cd ..
###Output
_____no_output_____
###Markdown
Finetuning FastPitch We can now train our model with the following command:**NOTE: This will take about 50 minutes on colab's K80 GPUs.**
###Code
# TODO(oktai15): remove +model.text_tokenizer.add_blank_at=true when we update FastPitch checkpoint
!(python fastpitch_finetune.py --config-name=fastpitch_align_v1.05.yaml \
train_dataset=./6097_manifest_train_dur_5_mins_local.json \
validation_datasets=./6097_manifest_dev_ns_all_local.json \
sup_data_path=./fastpitch_sup_data \
phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 \
heteronyms_path=tts_dataset_files/heteronyms-030921 \
whitelist_path=tts_dataset_files/lj_speech.tsv \
exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins \
+init_from_nemo_model=./tts_en_fastpitch_align.nemo \
+trainer.max_steps=1000 ~trainer.max_epochs \
trainer.check_val_every_n_epoch=25 \
model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 \
model.n_speakers=1 model.pitch_mean=121.9 model.pitch_std=23.1 \
model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 \
~model.optim.sched model.optim.name=adam trainer.devices=1 trainer.strategy=null \
+model.text_tokenizer.add_blank_at=true \
)
###Output
_____no_output_____
###Markdown
Let's take a closer look at the training command:* `--config-name=fastpitch_align_v1.05.yaml` * We first tell the script what config file to use.* `train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json sup_data_path=./fastpitch_sup_data` * We tell the script what manifest files to train and eval on, as well as where supplementary data is located (or will be calculated and saved during training if not provided). * `phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 heteronyms_path=tts_dataset_files/heteronyms-030921whitelist_path=tts_dataset_files/lj_speech.tsv ` * We tell the script where `phoneme_dict_path`, `heteronyms-030921` and `whitelist_path` are located. These are the additional files we downloaded earlier, and are used in preprocessing the data. * `exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins` * Where we want to save our log files, tensorboard file, checkpoints, and more.* `+init_from_nemo_model=./tts_en_fastpitch_align.nemo` * We tell the script what checkpoint to finetune from.* `+trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25` * For this experiment, we tell the script to train for 1000 training steps/iterations rather than specifying a number of epochs to run. Since the config file specifies `max_epochs` instead, we need to remove that using `~trainer.max_epochs`.* `model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24` * Set batch sizes for the training and validation data loaders.* `model.n_speakers=1` * The number of speakers in the data. There is only 1 for now, but we will revisit this parameter later in the notebook.* `model.pitch_mean=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512` * For the new speaker, we need to define new pitch hyperparameters for better audio quality. * These parameters work for speaker 6097 from the Hi-Fi TTS dataset. * For speaker 92, we suggest `model.pitch_mean=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512`. * fmin and fmax are hyperparameters to librosa's pyin function. We recommend tweaking these per speaker. * After fmin and fmax are defined, pitch mean and std can be easily extracted.* `model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam` * For fine-tuning, we lower the learning rate. * We use a fixed learning rate of 2e-4. * We switch from the lamb optimizer to the adam optimizer.* `trainer.devices=1 trainer.strategy=null` * For this notebook, we default to 1 gpu which means that we do not need ddp. * If you have the compute resources, feel free to scale this up to the number of free gpus you have available. * Please remove the `trainer.strategy=null` section if you intend on multi-gpu training. Synthesize Samples from Finetuned Checkpoints Once we have finetuned our FastPitch model, we can synthesize the audio samples for given text using the following inference steps. We use a HiFi-GAN vocoder trained on LJSpeech.We define some helper functions as well.
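Before that, an aside: here is a minimal sketch of how the pitch statistics above (`model.pitch_mean`, `model.pitch_std`) could be estimated with librosa's `pyin` once `fmin`/`fmax` have been chosen. The manifest path, sampling rate, and fmin/fmax values below are assumptions matching this tutorial's 5-minute subset; adapt them for other speakers. The synthesis helper functions mentioned above are defined in the next cell.

```
# Hedged sketch: estimate pitch mean/std over a manifest with librosa's pyin.
import json
import librosa
import numpy as np

def estimate_pitch_stats(manifest_path, fmin=30, fmax=512, sr=22050):
    voiced_f0 = []
    with open(manifest_path, "r") as f:
        for line in f:
            record = json.loads(line)
            audio, _ = librosa.load(record["audio_filepath"], sr=sr)
            f0, voiced_flag, _ = librosa.pyin(audio, fmin=fmin, fmax=fmax, sr=sr)
            voiced_f0.append(f0[voiced_flag])  # keep voiced frames only
    voiced_f0 = np.concatenate(voiced_f0)
    return float(np.mean(voiced_f0)), float(np.std(voiced_f0))

# pitch_mean, pitch_std = estimate_pitch_stats("./6097_manifest_train_dur_5_mins_local.json")
```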
###Code
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import FastPitchModel
vocoder = HifiGanModel.from_pretrained("tts_hifigan")
vocoder = vocoder.eval().cuda()
def infer(spec_gen_model, vocoder_model, str_input, speaker=None):
"""
Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.
Args:
spec_gen_model: Spectrogram generator model (FastPitch in our case)
vocoder_model: Vocoder model (HiFiGAN in our case)
str_input: Text input for the synthesis
speaker: Speaker ID
Returns:
spectrogram and waveform of the synthesized audio.
"""
with torch.no_grad():
parsed = spec_gen_model.parse(str_input)
if speaker is not None:
speaker = torch.tensor([speaker]).long().to(device=spec_gen_model.device)
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker=speaker)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
def get_best_ckpt_from_last_run(
base_dir,
new_speaker_id,
duration_mins,
mixing_enabled,
original_speaker_id,
model_name="FastPitch"
):
mixing = "no_mixing" if not mixing_enabled else "mixing"
d = f"{original_speaker_id}_to_{new_speaker_id}_{mixing}_{duration_mins}_mins"
exp_dirs = list([i for i in (Path(base_dir) / d / model_name).iterdir() if i.is_dir()])
last_exp_dir = sorted(exp_dirs)[-1]
last_checkpoint_dir = last_exp_dir / "checkpoints"
last_ckpt = list(last_checkpoint_dir.glob('*-last.ckpt'))
if len(last_ckpt) == 0:
raise ValueError(f"There is no last checkpoint in {last_checkpoint_dir}.")
return str(last_ckpt[0])
###Output
_____no_output_____
###Markdown
Specify the speaker ID, duration of the dataset in minutes, and speaker mixing variables to find the relevant checkpoint (for example, we've saved our model in `ljspeech_to_6097_no_mixing_5_mins/`) and compare the synthesized audio with validation samples of the new speaker.The mixing variable is False for now, but we include some code to handle multiple speakers for reference.
###Code
new_speaker_id = 6097
duration_mins = 5
mixing = False
original_speaker_id = "ljspeech"
last_ckpt = get_best_ckpt_from_last_run("./", new_speaker_id, duration_mins, mixing, original_speaker_id)
print(last_ckpt)
spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)
spec_model.eval().cuda()
# Only need to set speaker_id if there is more than one speaker
speaker_id = None
if mixing:
speaker_id = 1
num_val = 2 # Number of validation samples
val_records = []
with open(f"{new_speaker_id}_manifest_dev_ns_all_local.json", "r") as f:
for i, line in enumerate(f):
val_records.append(json.loads(line))
if len(val_records) >= num_val:
break
for val_record in val_records:
print("Real validation audio")
ipd.display(ipd.Audio(val_record['audio_filepath'], rate=22050))
print(f"SYNTHESIZED FOR -- Speaker: {new_speaker_id} | Dataset size: {duration_mins} mins | Mixing:{mixing} | Text: {val_record['text']}")
spec, audio = infer(spec_model, vocoder, val_record['text'], speaker=speaker_id)
ipd.display(ipd.Audio(audio, rate=22050))
%matplotlib inline
imshow(spec, origin="lower", aspect="auto")
plt.show()
###Output
_____no_output_____
###Markdown
Improving Speech QualityWe see that from fine-tuning FastPitch, we were able to generate audio in a male voice but the audio quality is not as good as we expect. We recommend two steps to improve audio quality:* Finetuning HiFi-GAN* Adding more data**Note that both of these steps are outside the scope of the notebook due to the limited compute available on Colab, but the code is included below for you to use outside of this notebook.** Finetuning HiFi-GANFrom the synthesized samples, there might be audible audio crackling. To fix this, we need to finetune HiFi-GAN on the new speaker's data. HiFi-GAN shows improvement using **synthesized mel spectrograms**, so the first step is to generate mel spectrograms with our finetuned FastPitch model to use as input.The code below uses our finetuned model to generate synthesized mels for the training set we have been using. You will also need to do the same for the validation set (code should be very similar, just with paths changed).
###Code
import json
import numpy as np
import torch
import soundfile as sf
from pathlib import Path
from nemo.collections.tts.torch.helpers import BetaBinomialInterpolator
def load_wav(audio_file, target_sr=None):
with sf.SoundFile(audio_file, 'r') as f:
samples = f.read(dtype='float32')
sample_rate = f.samplerate
if target_sr is not None and target_sr != sample_rate:
samples = librosa.core.resample(samples, orig_sr=sample_rate, target_sr=target_sr)
return samples.transpose()
# Get records from the training manifest
manifest_path = "./6097_manifest_train_dur_5_mins_local.json"
records = []
with open(manifest_path, "r") as f:
for i, line in enumerate(f):
records.append(json.loads(line))
beta_binomial_interpolator = BetaBinomialInterpolator()
spec_model.eval()
device = spec_model.device
save_dir = Path("./6097_manifest_train_dur_5_mins_local_mels")
save_dir.mkdir(exist_ok=True, parents=True)
# Generate spectrograms (we need to use ground-truth alignment for correct matching between audio and mels)
for i, r in enumerate(records):
audio = load_wav(r["audio_filepath"])
audio = torch.from_numpy(audio).unsqueeze(0).to(device)
audio_len = torch.tensor(audio.shape[1], dtype=torch.long, device=device).unsqueeze(0)
# Again, our finetuned FastPitch model doesn't use multiple speakers,
# but we keep the code to support it here for reference
if spec_model.fastpitch.speaker_emb is not None and "speaker" in r:
speaker = torch.tensor([r['speaker']]).to(device)
else:
speaker = None
with torch.no_grad():
if "normalized_text" in r:
text = spec_model.parse(r["normalized_text"], normalize=False)
else:
text = spec_model.parse(r['text'])
text_len = torch.tensor(text.shape[-1], dtype=torch.long, device=device).unsqueeze(0)
spect, spect_len = spec_model.preprocessor(input_signal=audio, length=audio_len)
# Generate attention prior and spectrogram inputs for HiFi-GAN
attn_prior = torch.from_numpy(
beta_binomial_interpolator(spect_len.item(), text_len.item())
).unsqueeze(0).to(text.device)
spectrogram = spec_model.forward(
text=text,
input_lens=text_len,
spec=spect,
mel_lens=spect_len,
attn_prior=attn_prior,
speaker=speaker,
)[0]
save_path = save_dir / f"mel_{i}.npy"
np.save(save_path, spectrogram[0].to('cpu').numpy())
r["mel_filepath"] = str(save_path)
hifigan_manifest_path = "hifigan_train_ft.json"
with open(hifigan_manifest_path, "w") as f:
for r in records:
f.write(json.dumps(r) + '\n')
# Please do the same for the validation json. Code is omitted.
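# A hedged sketch of that omitted validation pass: the same loop with the paths
# switched to the dev manifest (the mel directory and output manifest names below
# are assumptions). It reuses load_wav, beta_binomial_interpolator, spec_model and
# device defined above.
val_manifest_path = "./6097_manifest_dev_ns_all_local.json"
val_records = []
with open(val_manifest_path, "r") as f:
    for line in f:
        val_records.append(json.loads(line))

val_save_dir = Path("./6097_manifest_dev_ns_all_local_mels")
val_save_dir.mkdir(exist_ok=True, parents=True)

for i, r in enumerate(val_records):
    audio = load_wav(r["audio_filepath"])
    audio = torch.from_numpy(audio).unsqueeze(0).to(device)
    audio_len = torch.tensor(audio.shape[1], dtype=torch.long, device=device).unsqueeze(0)
    # single-speaker finetuning, so no speaker embedding is passed
    speaker = None
    with torch.no_grad():
        if "normalized_text" in r:
            text = spec_model.parse(r["normalized_text"], normalize=False)
        else:
            text = spec_model.parse(r["text"])
        text_len = torch.tensor(text.shape[-1], dtype=torch.long, device=device).unsqueeze(0)
        spect, spect_len = spec_model.preprocessor(input_signal=audio, length=audio_len)
        attn_prior = torch.from_numpy(
            beta_binomial_interpolator(spect_len.item(), text_len.item())
        ).unsqueeze(0).to(text.device)
        spectrogram = spec_model.forward(
            text=text,
            input_lens=text_len,
            spec=spect,
            mel_lens=spect_len,
            attn_prior=attn_prior,
            speaker=speaker,
        )[0]
    save_path = val_save_dir / f"mel_{i}.npy"
    np.save(save_path, spectrogram[0].to('cpu').numpy())
    r["mel_filepath"] = str(save_path)

hifigan_val_manifest_path = "hifigan_val_ft.json"
with open(hifigan_val_manifest_path, "w") as f:
    for r in val_records:
        f.write(json.dumps(r) + '\n')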
###Output
_____no_output_____
###Markdown
Finetuning FastPitch for a new speakerIn this tutorial, we will finetune a single speaker FastPitch (with alignment) model on 5 mins of a new speaker's data. We will finetune the model parameters only on the new speaker's text and speech pairs (though see the section at the end to learn more about mixing speaker data).We will download the training data, then generate and run a training command to finetune Fastpitch on 5 mins of data, and synthesize the audio from the trained checkpoint.A final section will describe approaches to improve audio quality past this notebook. License> Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.>> Licensed under the Apache License, Version 2.0 (the "License");> you may not use this file except in compliance with the License.> You may obtain a copy of the License at>> http://www.apache.org/licenses/LICENSE-2.0>> Unless required by applicable law or agreed to in writing, software> distributed under the License is distributed on an "AS IS" BASIS,> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.> See the License for the specific language governing permissions and> limitations under the License.
###Code
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'main'
# # If you're using Google Colab and not running locally, uncomment and run this cell.
# !apt-get install sox libsndfile1 ffmpeg
# !pip install wget unidecode pynini==2.1.4
# !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
###Output
_____no_output_____
###Markdown
Downloading data For our tutorial, we will use a small part of the Hi-Fi Multi-Speaker English TTS (Hi-Fi TTS) dataset. You can read more about dataset [here](https://arxiv.org/abs/2104.01497). We will use speaker 6097 as the target speaker, and only a 5-minute subset of audio will be used for this fine-tuning example. We additionally resample audio to 22050 kHz.
###Code
!wget https://nemo-public.s3.us-east-2.amazonaws.com/6097_5_mins.tar.gz # Contains 10MB of data
!tar -xzf 6097_5_mins.tar.gz
###Output
_____no_output_____
###Markdown
Looking at `manifest.json`, we see a standard NeMo json that contains the filepath, text, and duration. Please note that our `manifest.json` contains the relative path.Let's make sure that the entries look something like this:```{"audio_filepath": "audio/presentpictureofnsw_02_mann_0532.wav", "text": "not to stop more than ten minutes by the way", "duration": 2.6, "text_no_preprocessing": "not to stop more than ten minutes by the way,", "text_normalized": "not to stop more than ten minutes by the way,"}```
###Code
!head -n 1 ./6097_5_mins/manifest.json
###Output
_____no_output_____
###Markdown
Let's take 2 samples from the dataset and split it off into a validation set. Then, split all other samples into the training set.As mentioned, since the paths in the manifest are relative, we also create a symbolic link to the audio folder such that `audio/` goes to the correct directory.
###Code
!cat ./6097_5_mins/manifest.json | tail -n 2 > ./6097_manifest_dev_ns_all_local.json
!cat ./6097_5_mins/manifest.json | head -n -2 > ./6097_manifest_train_dur_5_mins_local.json
!ln -s ./6097_5_mins/audio audio
###Output
_____no_output_____
###Markdown
Let's also download the pretrained checkpoint that we want to finetune from. NeMo will save checkpoints to `~/.cache`, so let's move that to our current directory. *Note: please, check that `home_path` refers to your home folder. Otherwise, change it manually.*
###Code
home_path = !(echo $HOME)
home_path = home_path[0]
print(home_path)
import os
import json
import torch
import IPython.display as ipd
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
from nemo.collections.tts.models import FastPitchModel
FastPitchModel.from_pretrained("tts_en_fastpitch")
from pathlib import Path
nemo_files = [p for p in Path(f"{home_path}/.cache/torch/NeMo/").glob("**/tts_en_fastpitch_align.nemo")]
print(f"Copying {nemo_files[0]} to ./")
Path("./tts_en_fastpitch_align.nemo").write_bytes(nemo_files[0].read_bytes())
###Output
_____no_output_____
###Markdown
To finetune the FastPitch model on the above created filelists, we use the `examples/tts/fastpitch_finetune.py` script to train the models with the `fastpitch_align_v1.05.yaml` configuration.Let's grab those files.
###Code
!wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/fastpitch_finetune.py
!mkdir -p conf \
&& cd conf \
&& wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/conf/fastpitch_align_v1.05.yaml \
&& cd ..
###Output
_____no_output_____
###Markdown
We also need some additional files (see `FastPitch_MixerTTS_Training.ipynb` tutorial for more details) for training. Let's download these, too.
###Code
# additional files
!mkdir -p tts_dataset_files && cd tts_dataset_files \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/cmudict-0.7b_nv22.01 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/heteronyms-030921 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/nemo_text_processing/text_normalization/en/data/whitelist_lj_speech.tsv \
&& cd ..
###Output
_____no_output_____
###Markdown
Finetuning FastPitch We can now train our model with the following command:**NOTE: This will take about 50 minutes on colab's K80 GPUs.**
###Code
# TODO(oktai15): remove +model.text_tokenizer.add_blank_at=true when we update FastPitch checkpoint
!(python fastpitch_finetune.py --config-name=fastpitch_align_v1.05.yaml \
train_dataset=./6097_manifest_train_dur_5_mins_local.json \
validation_datasets=./6097_manifest_dev_ns_all_local.json \
sup_data_path=./fastpitch_sup_data \
phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 \
heteronyms_path=tts_dataset_files/heteronyms-030921 \
whitelist_path=tts_dataset_files/whitelist_lj_speech.tsv \
exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins \
+init_from_nemo_model=./tts_en_fastpitch_align.nemo \
+trainer.max_steps=1000 ~trainer.max_epochs \
trainer.check_val_every_n_epoch=25 \
model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 \
model.n_speakers=1 model.pitch_mean=121.9 model.pitch_std=23.1 \
model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 \
~model.optim.sched model.optim.name=adam trainer.devices=1 trainer.strategy=null \
+model.text_tokenizer.add_blank_at=true \
)
###Output
_____no_output_____
###Markdown
Let's take a closer look at the training command:* `--config-name=fastpitch_align_v1.05.yaml` * We first tell the script what config file to use.* `train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json sup_data_path=./fastpitch_sup_data` * We tell the script what manifest files to train and eval on, as well as where supplementary data is located (or will be calculated and saved during training if not provided). * `phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 heteronyms_path=tts_dataset_files/heteronyms-030921whitelist_path=tts_dataset_files/whitelist_lj_speech.tsv ` * We tell the script where `phoneme_dict_path`, `heteronyms-030921` and `whitelist_path` are located. These are the additional files we downloaded earlier, and are used in preprocessing the data. * `exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins` * Where we want to save our log files, tensorboard file, checkpoints, and more.* `+init_from_nemo_model=./tts_en_fastpitch_align.nemo` * We tell the script what checkpoint to finetune from.* `+trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25` * For this experiment, we tell the script to train for 1000 training steps/iterations rather than specifying a number of epochs to run. Since the config file specifies `max_epochs` instead, we need to remove that using `~trainer.max_epochs`.* `model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24` * Set batch sizes for the training and validation data loaders.* `model.n_speakers=1` * The number of speakers in the data. There is only 1 for now, but we will revisit this parameter later in the notebook.* `model.pitch_mean=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512` * For the new speaker, we need to define new pitch hyperparameters for better audio quality. * These parameters work for speaker 6097 from the Hi-Fi TTS dataset. * For speaker 92, we suggest `model.pitch_mean=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512`. * fmin and fmax are hyperparameters to librosa's pyin function. We recommend tweaking these per speaker. * After fmin and fmax are defined, pitch mean and std can be easily extracted.* `model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam` * For fine-tuning, we lower the learning rate. * We use a fixed learning rate of 2e-4. * We switch from the lamb optimizer to the adam optimizer.* `trainer.devices=1 trainer.strategy=null` * For this notebook, we default to 1 gpu which means that we do not need ddp. * If you have the compute resources, feel free to scale this up to the number of free gpus you have available. * Please remove the `trainer.strategy=null` section if you intend on multi-gpu training. Synthesize Samples from Finetuned Checkpoints Once we have finetuned our FastPitch model, we can synthesize the audio samples for given text using the following inference steps. We use a HiFi-GAN vocoder trained on LJSpeech.We define some helper functions as well.
###Code
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import FastPitchModel
vocoder = HifiGanModel.from_pretrained("tts_hifigan")
vocoder = vocoder.eval().cuda()
def infer(spec_gen_model, vocoder_model, str_input, speaker=None):
"""
Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.
Args:
spec_gen_model: Spectrogram generator model (FastPitch in our case)
vocoder_model: Vocoder model (HiFiGAN in our case)
str_input: Text input for the synthesis
speaker: Speaker ID
Returns:
spectrogram and waveform of the synthesized audio.
"""
with torch.no_grad():
parsed = spec_gen_model.parse(str_input)
if speaker is not None:
speaker = torch.tensor([speaker]).long().to(device=spec_gen_model.device)
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker=speaker)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
def get_best_ckpt_from_last_run(
base_dir,
new_speaker_id,
duration_mins,
mixing_enabled,
original_speaker_id,
model_name="FastPitch"
):
mixing = "no_mixing" if not mixing_enabled else "mixing"
d = f"{original_speaker_id}_to_{new_speaker_id}_{mixing}_{duration_mins}_mins"
exp_dirs = list([i for i in (Path(base_dir) / d / model_name).iterdir() if i.is_dir()])
last_exp_dir = sorted(exp_dirs)[-1]
last_checkpoint_dir = last_exp_dir / "checkpoints"
last_ckpt = list(last_checkpoint_dir.glob('*-last.ckpt'))
if len(last_ckpt) == 0:
raise ValueError(f"There is no last checkpoint in {last_checkpoint_dir}.")
return str(last_ckpt[0])
###Output
_____no_output_____
###Markdown
Specify the speaker ID, duration of the dataset in minutes, and speaker mixing variables to find the relevant checkpoint (for example, we've saved our model in `ljspeech_to_6097_no_mixing_5_mins/`) and compare the synthesized audio with validation samples of the new speaker.The mixing variable is False for now, but we include some code to handle multiple speakers for reference.
###Code
new_speaker_id = 6097
duration_mins = 5
mixing = False
original_speaker_id = "ljspeech"
last_ckpt = get_best_ckpt_from_last_run("./", new_speaker_id, duration_mins, mixing, original_speaker_id)
print(last_ckpt)
spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)
spec_model.eval().cuda()
# Only need to set speaker_id if there is more than one speaker
speaker_id = None
if mixing:
speaker_id = 1
num_val = 2 # Number of validation samples
val_records = []
with open(f"{new_speaker_id}_manifest_dev_ns_all_local.json", "r") as f:
for i, line in enumerate(f):
val_records.append(json.loads(line))
if len(val_records) >= num_val:
break
for val_record in val_records:
print("Real validation audio")
ipd.display(ipd.Audio(val_record['audio_filepath'], rate=22050))
print(f"SYNTHESIZED FOR -- Speaker: {new_speaker_id} | Dataset size: {duration_mins} mins | Mixing:{mixing} | Text: {val_record['text']}")
spec, audio = infer(spec_model, vocoder, val_record['text'], speaker=speaker_id)
ipd.display(ipd.Audio(audio, rate=22050))
%matplotlib inline
imshow(spec, origin="lower", aspect="auto")
plt.show()
###Output
_____no_output_____
###Markdown
Improving Speech QualityWe see that from fine-tuning FastPitch, we were able to generate audio in a male voice but the audio quality is not as good as we expect. We recommend two steps to improve audio quality:* Finetuning HiFi-GAN* Adding more data**Note that both of these steps are outside the scope of the notebook due to the limited compute available on Colab, but the code is included below for you to use outside of this notebook.** Finetuning HiFi-GANFrom the synthesized samples, there might be audible audio crackling. To fix this, we need to finetune HiFi-GAN on the new speaker's data. HiFi-GAN shows improvement using **synthesized mel spectrograms**, so the first step is to generate mel spectrograms with our finetuned FastPitch model to use as input.The code below uses our finetuned model to generate synthesized mels for the training set we have been using. You will also need to do the same for the validation set (code should be very similar, just with paths changed).
###Code
import json
import numpy as np
import torch
import soundfile as sf
from pathlib import Path
from nemo.collections.tts.torch.helpers import BetaBinomialInterpolator
def load_wav(audio_file):
with sf.SoundFile(audio_file, 'r') as f:
samples = f.read(dtype='float32')
return samples.transpose()
# Get records from the training manifest
manifest_path = "./6097_manifest_train_dur_5_mins_local.json"
records = []
with open(manifest_path, "r") as f:
for i, line in enumerate(f):
records.append(json.loads(line))
beta_binomial_interpolator = BetaBinomialInterpolator()
spec_model.eval()
device = spec_model.device
save_dir = Path("./6097_manifest_train_dur_5_mins_local_mels")
save_dir.mkdir(exist_ok=True, parents=True)
# Generate a spectrograms (we need to use ground truth alignment for correct matching between audio and mels)
for i, r in enumerate(records):
audio = load_wav(r["audio_filepath"])
audio = torch.from_numpy(audio).unsqueeze(0).to(device)
audio_len = torch.tensor(audio.shape[1], dtype=torch.long, device=device).unsqueeze(0)
# Again, our finetuned FastPitch model doesn't use multiple speakers,
# but we keep the code to support it here for reference
if spec_model.fastpitch.speaker_emb is not None and "speaker" in r:
speaker = torch.tensor([r['speaker']]).to(device)
else:
speaker = None
with torch.no_grad():
if "normalized_text" in r:
text = spec_model.parse(r["normalized_text"], normalize=False)
else:
text = spec_model.parse(r['text'])
text_len = torch.tensor(text.shape[-1], dtype=torch.long, device=device).unsqueeze(0)
spect, spect_len = spec_model.preprocessor(input_signal=audio, length=audio_len)
# Generate attention prior and spectrogram inputs for HiFi-GAN
attn_prior = torch.from_numpy(
beta_binomial_interpolator(spect_len.item(), text_len.item())
).unsqueeze(0).to(text.device)
spectrogram = spec_model.forward(
text=text,
input_lens=text_len,
spec=spect,
mel_lens=spect_len,
attn_prior=attn_prior,
speaker=speaker,
)[0]
save_path = save_dir / f"mel_{i}.npy"
np.save(save_path, spectrogram[0].to('cpu').numpy())
r["mel_filepath"] = str(save_path)
hifigan_manifest_path = "hifigan_train_ft.json"
with open(hifigan_manifest_path, "w") as f:
for r in records:
f.write(json.dumps(r) + '\n')
# Please do the same for the validation json. Code is omitted.
###Output
_____no_output_____
###Markdown
Finetuning FastPitch for a new speakerIn this tutorial, we will finetune a single speaker FastPitch (with alignment) model on 5 mins of a new speaker's data. We will finetune the model parameters only on the new speaker's text and speech pairs (though see the section at the end to learn more about mixing speaker data).We will download the training data, then generate and run a training command to finetune Fastpitch on 5 mins of data, and synthesize the audio from the trained checkpoint.A final section will describe approaches to improve audio quality past this notebook. License> Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.>> Licensed under the Apache License, Version 2.0 (the "License");> you may not use this file except in compliance with the License.> You may obtain a copy of the License at>> http://www.apache.org/licenses/LICENSE-2.0>> Unless required by applicable law or agreed to in writing, software> distributed under the License is distributed on an "AS IS" BASIS,> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.> See the License for the specific language governing permissions and> limitations under the License.
###Code
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'main'
# # If you're using Google Colab and not running locally, uncomment and run this cell.
# !apt-get install sox libsndfile1 ffmpeg
# !pip install wget unidecode pynini==2.1.4
# !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
###Output
_____no_output_____
###Markdown
Downloading data For our tutorial, we will use a small part of the Hi-Fi Multi-Speaker English TTS (Hi-Fi TTS) dataset. You can read more about dataset [here](https://arxiv.org/abs/2104.01497). We will use speaker 6097 as the target speaker, and only a 5-minute subset of audio will be used for this fine-tuning example. We additionally resample audio to 22050 kHz.
###Code
!wget https://nemo-public.s3.us-east-2.amazonaws.com/6097_5_mins.tar.gz # Contains 10MB of data
!tar -xzf 6097_5_mins.tar.gz
###Output
_____no_output_____
###Markdown
Looking at `manifest.json`, we see a standard NeMo json that contains the filepath, text, and duration. Please note that our `manifest.json` contains the relative path.Let's make sure that the entries look something like this:```{"audio_filepath": "audio/presentpictureofnsw_02_mann_0532.wav", "text": "not to stop more than ten minutes by the way", "duration": 2.6, "text_no_preprocessing": "not to stop more than ten minutes by the way,", "text_normalized": "not to stop more than ten minutes by the way,"}```
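If you prefer to check this from Python, here is a minimal sketch (an illustration only; it assumes the files extracted above and that `soundfile` is installed, which later cells require anyway) that prints the first record, the total amount of audio, and the sample rate of the first clip:

```python
import json
import soundfile as sf

records = []
with open("./6097_5_mins/manifest.json") as f:
    for line in f:
        records.append(json.loads(line))

print(records[0])
print(f"Total audio: {sum(r['duration'] for r in records) / 60:.2f} minutes")
# Manifest paths are relative to the extracted folder
print(f"Sample rate: {sf.info('./6097_5_mins/' + records[0]['audio_filepath']).samplerate} Hz")
```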
###Code
!head -n 1 ./6097_5_mins/manifest.json
###Output
_____no_output_____
###Markdown
Let's take 2 samples from the dataset and split it off into a validation set. Then, split all other samples into the training set.As mentioned, since the paths in the manifest are relative, we also create a symbolic link to the audio folder such that `audio/` goes to the correct directory.
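One caveat: `head -n -2` (print everything except the last two lines) requires GNU coreutils and is not available in BSD/macOS `head`. If the shell one-liners below fail on your system, a hedged Python equivalent of the split (writing the same file names) is shown here; you still need the `ln -s` symlink from the cell below.

```python
with open("./6097_5_mins/manifest.json") as f:
    lines = f.readlines()

# Last 2 records become the validation set, the rest the training set
with open("./6097_manifest_dev_ns_all_local.json", "w") as f:
    f.writelines(lines[-2:])
with open("./6097_manifest_train_dur_5_mins_local.json", "w") as f:
    f.writelines(lines[:-2])
```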
###Code
!cat ./6097_5_mins/manifest.json | tail -n 2 > ./6097_manifest_dev_ns_all_local.json
!cat ./6097_5_mins/manifest.json | head -n -2 > ./6097_manifest_train_dur_5_mins_local.json
!ln -s ./6097_5_mins/audio audio
###Output
_____no_output_____
###Markdown
Let's also download the pretrained checkpoint that we want to finetune from. NeMo will save checkpoints to `~/.cache`, so let's move that to our current directory. *Note: please, check that `home_path` refers to your home folder. Otherwise, change it manually.*
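If globbing the cache directory feels brittle, an alternative sketch (not the tutorial's approach, but it relies only on the standard NeMo `from_pretrained`/`save_to` APIs) is to write the pretrained model to disk directly:

```python
from nemo.collections.tts.models import FastPitchModel

# Downloads the checkpoint (or reuses the cached copy) and writes a .nemo file next to the notebook
model = FastPitchModel.from_pretrained("tts_en_fastpitch")
model.save_to("./tts_en_fastpitch_align.nemo")
```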
###Code
home_path = !(echo $HOME)
home_path = home_path[0]
print(home_path)
import os
import json
import torch
import IPython.display as ipd
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
from nemo.collections.tts.models import FastPitchModel
FastPitchModel.from_pretrained("tts_en_fastpitch")
from pathlib import Path
nemo_files = [p for p in Path(f"{home_path}/.cache/torch/NeMo/").glob("**/tts_en_fastpitch_align.nemo")]
print(f"Copying {nemo_files[0]} to ./")
Path("./tts_en_fastpitch_align.nemo").write_bytes(nemo_files[0].read_bytes())
###Output
_____no_output_____
###Markdown
To finetune the FastPitch model on the above created filelists, we use the `examples/tts/fastpitch_finetune.py` script to train the models with the `fastpitch_align_v1.05.yaml` configuration.Let's grab those files.
###Code
!wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/fastpitch_finetune.py
!mkdir -p conf \
&& cd conf \
&& wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/conf/fastpitch_align_v1.05.yaml \
&& cd ..
###Output
_____no_output_____
###Markdown
We also need some additional files (see `FastPitch_MixerTTS_Training.ipynb` tutorial for more details) for training. Let's download these, too.
###Code
# additional files
!mkdir -p tts_dataset_files && cd tts_dataset_files \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/cmudict-0.7b_nv22.01 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/heteronyms-030921 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/nemo_text_processing/text_normalization/en/data/whitelist/lj_speech.tsv \
&& cd ..
###Output
_____no_output_____
###Markdown
Finetuning FastPitch We can now train our model with the following command:**NOTE: This will take about 50 minutes on colab's K80 GPUs.**
###Code
# TODO(oktai15): remove +model.text_tokenizer.add_blank_at=true when we update FastPitch checkpoint
!(python fastpitch_finetune.py --config-name=fastpitch_align_v1.05.yaml \
train_dataset=./6097_manifest_train_dur_5_mins_local.json \
validation_datasets=./6097_manifest_dev_ns_all_local.json \
sup_data_path=./fastpitch_sup_data \
phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 \
heteronyms_path=tts_dataset_files/heteronyms-030921 \
whitelist_path=tts_dataset_files/lj_speech.tsv \
exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins \
+init_from_nemo_model=./tts_en_fastpitch_align.nemo \
+trainer.max_steps=1000 ~trainer.max_epochs \
trainer.check_val_every_n_epoch=25 \
model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 \
model.n_speakers=1 model.pitch_mean=121.9 model.pitch_std=23.1 \
model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 \
~model.optim.sched model.optim.name=adam trainer.devices=1 trainer.strategy=null \
+model.text_tokenizer.add_blank_at=true \
)
###Output
_____no_output_____
###Markdown
Let's take a closer look at the training command:* `--config-name=fastpitch_align_v1.05.yaml` * We first tell the script what config file to use.* `train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json sup_data_path=./fastpitch_sup_data` * We tell the script what manifest files to train and eval on, as well as where supplementary data is located (or will be calculated and saved during training if not provided). * `phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 heteronyms_path=tts_dataset_files/heteronyms-030921whitelist_path=tts_dataset_files/lj_speech.tsv ` * We tell the script where `phoneme_dict_path`, `heteronyms-030921` and `whitelist_path` are located. These are the additional files we downloaded earlier, and are used in preprocessing the data. * `exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins` * Where we want to save our log files, tensorboard file, checkpoints, and more.* `+init_from_nemo_model=./tts_en_fastpitch_align.nemo` * We tell the script what checkpoint to finetune from.* `+trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25` * For this experiment, we tell the script to train for 1000 training steps/iterations rather than specifying a number of epochs to run. Since the config file specifies `max_epochs` instead, we need to remove that using `~trainer.max_epochs`.* `model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24` * Set batch sizes for the training and validation data loaders.* `model.n_speakers=1` * The number of speakers in the data. There is only 1 for now, but we will revisit this parameter later in the notebook.* `model.pitch_mean=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512` * For the new speaker, we need to define new pitch hyperparameters for better audio quality. * These parameters work for speaker 6097 from the Hi-Fi TTS dataset. * For speaker 92, we suggest `model.pitch_mean=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512`. * fmin and fmax are hyperparameters to librosa's pyin function. We recommend tweaking these per speaker. * After fmin and fmax are defined, pitch mean and std can be easily extracted.* `model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam` * For fine-tuning, we lower the learning rate. * We use a fixed learning rate of 2e-4. * We switch from the lamb optimizer to the adam optimizer.* `trainer.devices=1 trainer.strategy=null` * For this notebook, we default to 1 gpu which means that we do not need ddp. * If you have the compute resources, feel free to scale this up to the number of free gpus you have available. * Please remove the `trainer.strategy=null` section if you intend on multi-gpu training. Synthesize Samples from Finetuned Checkpoints Once we have finetuned our FastPitch model, we can synthesize the audio samples for given text using the following inference steps. We use a HiFi-GAN vocoder trained on LJSpeech.We define some helper functions as well.
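A side note on the pitch hyperparameters described above: if you adapt this recipe to a different speaker, the pitch statistics can be estimated with a sketch like the one below. This is an illustration only; it reuses the training manifest from earlier, the fmin/fmax values quoted above, and `librosa`, which NeMo already depends on.

```python
import json
import librosa
import numpy as np

f0_list = []
with open("./6097_manifest_train_dur_5_mins_local.json") as f:
    for line in f:
        record = json.loads(line)
        audio, sr = librosa.load(record["audio_filepath"], sr=22050)
        # pyin returns NaN for unvoiced frames; keep voiced frames only
        f0, _, _ = librosa.pyin(audio, fmin=30, fmax=512, sr=sr)
        f0_list.append(f0[~np.isnan(f0)])

f0_all = np.concatenate(f0_list)
print(f"pitch_mean={f0_all.mean():.1f}, pitch_std={f0_all.std():.1f}")
```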
###Code
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import FastPitchModel
vocoder = HifiGanModel.from_pretrained("tts_hifigan")
vocoder = vocoder.eval().cuda()
def infer(spec_gen_model, vocoder_model, str_input, speaker=None):
"""
Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.
Args:
spec_gen_model: Spectrogram generator model (FastPitch in our case)
vocoder_model: Vocoder model (HiFiGAN in our case)
str_input: Text input for the synthesis
speaker: Speaker ID
Returns:
spectrogram and waveform of the synthesized audio.
"""
with torch.no_grad():
parsed = spec_gen_model.parse(str_input)
if speaker is not None:
speaker = torch.tensor([speaker]).long().to(device=spec_gen_model.device)
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker=speaker)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
def get_best_ckpt_from_last_run(
base_dir,
new_speaker_id,
duration_mins,
mixing_enabled,
original_speaker_id,
model_name="FastPitch"
):
mixing = "no_mixing" if not mixing_enabled else "mixing"
d = f"{original_speaker_id}_to_{new_speaker_id}_{mixing}_{duration_mins}_mins"
exp_dirs = list([i for i in (Path(base_dir) / d / model_name).iterdir() if i.is_dir()])
last_exp_dir = sorted(exp_dirs)[-1]
last_checkpoint_dir = last_exp_dir / "checkpoints"
last_ckpt = list(last_checkpoint_dir.glob('*-last.ckpt'))
if len(last_ckpt) == 0:
raise ValueError(f"There is no last checkpoint in {last_checkpoint_dir}.")
return str(last_ckpt[0])
###Output
_____no_output_____
###Markdown
Specify the speaker ID, duration of the dataset in minutes, and speaker mixing variables to find the relevant checkpoint (for example, we've saved our model in `ljspeech_to_6097_no_mixing_5_mins/`) and compare the synthesized audio with validation samples of the new speaker.The mixing variable is False for now, but we include some code to handle multiple speakers for reference.
###Code
new_speaker_id = 6097
duration_mins = 5
mixing = False
original_speaker_id = "ljspeech"
last_ckpt = get_best_ckpt_from_last_run("./", new_speaker_id, duration_mins, mixing, original_speaker_id)
print(last_ckpt)
spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)
spec_model.eval().cuda()
# Only need to set speaker_id if there is more than one speaker
speaker_id = None
if mixing:
speaker_id = 1
num_val = 2 # Number of validation samples
val_records = []
with open(f"{new_speaker_id}_manifest_dev_ns_all_local.json", "r") as f:
for i, line in enumerate(f):
val_records.append(json.loads(line))
if len(val_records) >= num_val:
break
for val_record in val_records:
print("Real validation audio")
ipd.display(ipd.Audio(val_record['audio_filepath'], rate=22050))
print(f"SYNTHESIZED FOR -- Speaker: {new_speaker_id} | Dataset size: {duration_mins} mins | Mixing:{mixing} | Text: {val_record['text']}")
spec, audio = infer(spec_model, vocoder, val_record['text'], speaker=speaker_id)
ipd.display(ipd.Audio(audio, rate=22050))
%matplotlib inline
imshow(spec, origin="lower", aspect="auto")
plt.show()
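# Beyond the validation prompts above, you can also synthesize arbitrary text with the same helpers.
# (The sentence below is just an illustrative example, not part of the original tutorial.)
sample_text = "The quick brown fox jumps over the lazy dog."
sample_spec, sample_audio = infer(spec_model, vocoder, sample_text, speaker=speaker_id)
ipd.display(ipd.Audio(sample_audio, rate=22050))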
###Output
_____no_output_____
###Markdown
Improving Speech QualityWe see that from fine-tuning FastPitch, we were able to generate audio in a male voice but the audio quality is not as good as we expect. We recommend two steps to improve audio quality:* Finetuning HiFi-GAN* Adding more data**Note that both of these steps are outside the scope of the notebook due to the limited compute available on Colab, but the code is included below for you to use outside of this notebook.** Finetuning HiFi-GANFrom the synthesized samples, there might be audible audio crackling. To fix this, we need to finetune HiFi-GAN on the new speaker's data. HiFi-GAN shows improvement using **synthesized mel spectrograms**, so the first step is to generate mel spectrograms with our finetuned FastPitch model to use as input.The code below uses our finetuned model to generate synthesized mels for the training set we have been using. You will also need to do the same for the validation set (code should be very similar, just with paths changed).
###Code
import json
import numpy as np
import torch
import soundfile as sf
from pathlib import Path
from nemo.collections.tts.torch.helpers import BetaBinomialInterpolator
def load_wav(audio_file):
with sf.SoundFile(audio_file, 'r') as f:
samples = f.read(dtype='float32')
return samples.transpose()
# Get records from the training manifest
manifest_path = "./6097_manifest_train_dur_5_mins_local.json"
records = []
with open(manifest_path, "r") as f:
for i, line in enumerate(f):
records.append(json.loads(line))
beta_binomial_interpolator = BetaBinomialInterpolator()
spec_model.eval()
device = spec_model.device
save_dir = Path("./6097_manifest_train_dur_5_mins_local_mels")
save_dir.mkdir(exist_ok=True, parents=True)
# Generate spectrograms (we need to use the ground truth alignment for correct matching between audio and mels)
for i, r in enumerate(records):
audio = load_wav(r["audio_filepath"])
audio = torch.from_numpy(audio).unsqueeze(0).to(device)
audio_len = torch.tensor(audio.shape[1], dtype=torch.long, device=device).unsqueeze(0)
# Again, our finetuned FastPitch model doesn't use multiple speakers,
# but we keep the code to support it here for reference
if spec_model.fastpitch.speaker_emb is not None and "speaker" in r:
speaker = torch.tensor([r['speaker']]).to(device)
else:
speaker = None
with torch.no_grad():
if "normalized_text" in r:
text = spec_model.parse(r["normalized_text"], normalize=False)
else:
text = spec_model.parse(r['text'])
text_len = torch.tensor(text.shape[-1], dtype=torch.long, device=device).unsqueeze(0)
spect, spect_len = spec_model.preprocessor(input_signal=audio, length=audio_len)
# Generate attention prior and spectrogram inputs for HiFi-GAN
attn_prior = torch.from_numpy(
beta_binomial_interpolator(spect_len.item(), text_len.item())
).unsqueeze(0).to(text.device)
spectrogram = spec_model.forward(
text=text,
input_lens=text_len,
spec=spect,
mel_lens=spect_len,
attn_prior=attn_prior,
speaker=speaker,
)[0]
save_path = save_dir / f"mel_{i}.npy"
np.save(save_path, spectrogram[0].to('cpu').numpy())
r["mel_filepath"] = str(save_path)
hifigan_manifest_path = "hifigan_train_ft.json"
with open(hifigan_manifest_path, "w") as f:
for r in records:
f.write(json.dumps(r) + '\n')
# Please do the same for the validation json. Code is omitted.
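# A hedged sketch for the validation pass (paths below are assumptions): rerun the loop above with
#   manifest_path = "./6097_manifest_dev_ns_all_local.json"
#   save_dir = Path("./6097_manifest_dev_ns_all_local_mels")
# and write the resulting records to a separate manifest, e.g. "hifigan_val_ft.json".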
###Output
_____no_output_____
###Markdown
Finetuning FastPitch for a new speakerIn this tutorial, we will finetune a single speaker FastPitch (with alignment) model on 5 mins of a new speaker's data. We will finetune the model parameters only on new speaker's text and speech pairs.We will download the training data, then generate and run a training command to finetune Fastpitch on 5 mins of data, and synthesize the audio from the trained checkpoint.A final section will describe approaches to improve audio quality past this notebook. License> Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.>> Licensed under the Apache License, Version 2.0 (the "License");> you may not use this file except in compliance with the License.> You may obtain a copy of the License at>> http://www.apache.org/licenses/LICENSE-2.0>> Unless required by applicable law or agreed to in writing, software> distributed under the License is distributed on an "AS IS" BASIS,> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.> See the License for the specific language governing permissions and> limitations under the License.
###Code
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'main'
# # If you're using Google Colab and not running locally, uncomment and run this cell.
# !apt-get install sox libsndfile1 ffmpeg
# !pip install wget unidecode
# !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[tts]
###Output
_____no_output_____
###Markdown
Downloading Data___ Download and untar the data.The data contains a 5 minute subset of audio from speaker 6097 from the HiFiTTS dataset.
###Code
!wget https://nemo-public.s3.us-east-2.amazonaws.com/6097_5_mins.tar.gz # Contains 10MB of data
!tar -xzf 6097_5_mins.tar.gz
###Output
_____no_output_____
###Markdown
Looking at manifest.json, we see a standard NeMo json that contains the filepath, text, and duration. Please note that manifest.json only contains the relative path.```{"audio_filepath": "audio/presentpictureofnsw_02_mann_0532.wav", "text": "not to stop more than ten minutes by the way", "duration": 2.6, "text_no_preprocessing": "not to stop more than ten minutes by the way,", "text_normalized": "not to stop more than ten minutes by the way,"}```Let's take 2 samples from the dataset and split it off into a validation set. Then, split all other samples into the training set.
###Code
!cat ./6097_5_mins/manifest.json | tail -n 2 > ./6097_manifest_dev_ns_all_local.json
!cat ./6097_5_mins/manifest.json | head -n -2 > ./6097_manifest_train_dur_5_mins_local.json
!ln -s ./6097_5_mins/audio audio
###Output
_____no_output_____
###Markdown
Finetuning FastPitch___ Let's first download the pretrained checkpoint that we want to finetune from. NeMo will save checkpoints to ~/.cache, so let's move that to our current directory
###Code
import os
import json
import torch
import IPython.display as ipd
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
from nemo.collections.tts.models import FastPitchModel
FastPitchModel.from_pretrained("tts_en_fastpitch")
from pathlib import Path
nemo_files = [p for p in Path("/root/.cache/torch/NeMo/").glob("**/tts_en_fastpitch_align.nemo")]
print(f"Copying {nemo_files[0]} to ./")
Path("./tts_en_fastpitch_align.nemo").write_bytes(nemo_files[0].read_bytes())
###Output
_____no_output_____
###Markdown
To finetune the FastPitch model on the filelists created above, we use the `examples/tts/fastpitch2_finetune.py` script to train the models with the `fastpitch_align.yaml` configuration.Let's grab those files.
###Code
!wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/fastpitch2_finetune.py
!mkdir -p conf && cd conf && wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/conf/fastpitch_align.yaml && cd ..
###Output
_____no_output_____
###Markdown
We can now train our model with the following command:**NOTE: This will take about 50 minutes on colab's K80 GPUs.**`python fastpitch2_finetune.py --config-name=fastpitch_align.yaml train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json +init_from_nemo_model=./tts_en_fastpitch_align.nemo +trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25 prior_folder=./Priors6097 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins model.n_speakers=1 model.pitch_avg=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam trainer.devices=1 trainer.strategy=null`
###Code
!python fastpitch2_finetune.py --config-name=fastpitch_align.yaml train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json +init_from_nemo_model=./tts_en_fastpitch_align.nemo +trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25 prior_folder=./Priors6097 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins model.n_speakers=1 model.pitch_avg=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam trainer.devices=1 trainer.strategy=null
###Output
_____no_output_____
###Markdown
Let's take a closer look at the training command:* `python fastpitch2_finetune.py --config-name=fastpitch_align.yaml` * --config-name tells the script what config to use.* `train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json` * We tell the model what manifest files we can to train and eval on.* `+init_from_nemo_model=./tts_en_fastpitch_align.nemo` * We tell the script what checkpoint to finetune from.* `+trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25` * For this experiment, we need to tell the script to train for 1000 training steps/iterations. We need to remove max_epochs using `~trainer.max_epochs`.* `prior_folder=./Priors6097 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24` * Some dataset parameters. The dataset does some online processing and stores the processing steps to the `prior_folder`.* `exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins` * Where we want to save our log files, tensorboard file, checkpoints, and more* `model.n_speakers=1` * The number of speakers in the data. There is only 1 for now, but we will revisit this parameter later in the notebook* `model.pitch_avg=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512` * For the new speaker, we need to define new pitch hyperparameters for better audio quality. * These parameters work for speaker 6097 from the HiFiTTS dataset * For speaker 92, we suggest `model.pitch_avg=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512` * fmin and fmax are hyperparameters to librosa's pyin function. We recommend tweaking these per speaker. * After fmin and fmax are defined, pitch mean and std can be easily extracted* `model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam` * For fine-tuning, we lower the learning rate * We use a fixed learning rate of 2e-4 * We switch from the lamb optimizer to the adam optimizer* `trainer.devices=1 trainer.strategy=null` * For this notebook, we default to 1 gpu which means that we do not need ddp * If you have the compute resources, feel free to scale this up to the number of free gpus you have available * Please remove the `trainer.strategy=null` section if you intend on multi-gpu training Synthesize Samples from Finetuned Checkpoints--- Once we have finetuned our FastPitch model, we can synthesize the audio samples for given text using the following inference steps. We use a HiFiGAN vocoder trained on LJSpeech.We define some helper functions as well.
###Code
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import FastPitchModel
vocoder = HifiGanModel.from_pretrained("tts_hifigan")
vocoder = vocoder.eval().cuda()
def infer(spec_gen_model, vocoder_model, str_input, speaker = None):
"""
Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.
Arguments:
spec_gen_model -- Instance of FastPitch model
vocoder_model -- Instance of a vocoder model (HiFiGAN in our case)
str_input -- Text input for the synthesis
speaker -- Speaker number (in the case of a multi-speaker model -- in the mixing case)
Returns:
spectrogram, waveform of the synthesized audio.
"""
parser_model = spec_gen_model
with torch.no_grad():
parsed = parser_model.parse(str_input)
if speaker is not None:
speaker = torch.tensor([speaker]).long().cuda()
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker = speaker)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
def get_best_ckpt(experiment_base_dir, new_speaker_id, duration_mins, mixing_enabled, original_speaker_id):
"""
Gives the model checkpoint paths of an experiment we ran.
Arguments:
experiment_base_dir -- Base experiment directory (specified on top of this notebook as exp_base_dir)
new_speaker_id -- Speaker id of new HiFiTTS speaker we finetuned FastPitch on
duration_mins -- total minutes of the new speaker data
mixing_enabled -- True or False depending on whether we want to mix the original speaker data or not
original_speaker_id -- speaker id of the original HiFiTTS speaker
Returns:
List of all checkpoint paths sorted by validation error, Last checkpoint path
"""
if not mixing_enabled:
exp_dir = "{}/{}_to_{}_no_mixing_{}_mins".format(experiment_base_dir, original_speaker_id, new_speaker_id, duration_mins)
else:
exp_dir = "{}/{}_to_{}_mixing_{}_mins".format(experiment_base_dir, original_speaker_id, new_speaker_id, duration_mins)
ckpt_candidates = []
last_ckpt = None
for root, dirs, files in os.walk(exp_dir):
for file in files:
if file.endswith(".ckpt"):
val_error = float(file.split("v_loss=")[1].split("-epoch")[0])
if "last" in file:
last_ckpt = os.path.join(root, file)
ckpt_candidates.append( (val_error, os.path.join(root, file)))
ckpt_candidates.sort()
return ckpt_candidates, last_ckpt
###Output
_____no_output_____
###Markdown
Specify the speaker ID, the duration of the dataset in minutes, and the mixing variable to find the relevant checkpoint from the exp_base_dir, then compare the synthesized audio with validation samples of the new speaker.
###Code
new_speaker_id = 6097
duration_mins = 5
mixing = False
original_speaker_id = "ljspeech"
_ ,last_ckpt = get_best_ckpt("./", new_speaker_id, duration_mins, mixing, original_speaker_id)
print(last_ckpt)
spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)
spec_model.eval().cuda()
_speaker=None
if mixing:
_speaker = 1
num_val = 2
manifest_path = os.path.join("./", "{}_manifest_dev_ns_all_local.json".format(new_speaker_id))
val_records = []
with open(manifest_path, "r") as f:
for i, line in enumerate(f):
val_records.append( json.loads(line) )
if len(val_records) >= num_val:
break
for val_record in val_records:
print ("Real validation audio")
ipd.display(ipd.Audio(val_record['audio_filepath'], rate=22050))
print ("SYNTHESIZED FOR -- Speaker: {} | Dataset size: {} mins | Mixing:{} | Text: {}".format(new_speaker_id, duration_mins, mixing, val_record['text']))
spec, audio = infer(spec_model, vocoder, val_record['text'], speaker = _speaker)
ipd.display(ipd.Audio(audio, rate=22050))
%matplotlib inline
#if spec is not None:
imshow(spec, origin="lower", aspect = "auto")
plt.show()
###Output
_____no_output_____
###Markdown
Finetuning FastPitch for a new speakerIn this tutorial, we will finetune a single speaker FastPitch (with alignment) model on 5 mins of a new speaker's data. We will finetune the model parameters only on new speaker's text and speech pairs.We will download the training data, then generate and run a training command to finetune Fastpitch on 5 mins of data, and synthesize the audio from the trained checkpoint.A final section will describe approaches to improve audio quality past this notebook. License> Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.>> Licensed under the Apache License, Version 2.0 (the "License");> you may not use this file except in compliance with the License.> You may obtain a copy of the License at>> http://www.apache.org/licenses/LICENSE-2.0>> Unless required by applicable law or agreed to in writing, software> distributed under the License is distributed on an "AS IS" BASIS,> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.> See the License for the specific language governing permissions and> limitations under the License.
###Code
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'r1.7.0'
# # If you're using Google Colab and not running locally, uncomment and run this cell.
# !apt-get install sox libsndfile1 ffmpeg
# !pip install wget unidecode
# !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
###Output
_____no_output_____
###Markdown
Downloading data For our tutorial, we will use a small part of the Hi-Fi Multi-Speaker English TTS (Hi-Fi TTS) dataset. You can read more about the dataset [here](https://arxiv.org/abs/2104.01497). As the target speaker, we will use speaker 6097, and only a 5-minute subset of audio will be used. We additionally resampled the audio to 22,050 Hz.
###Code
!wget https://nemo-public.s3.us-east-2.amazonaws.com/6097_5_mins.tar.gz # Contains 10MB of data
!tar -xzf 6097_5_mins.tar.gz
###Output
_____no_output_____
###Markdown
Looking at `manifest.json`, we see a standard NeMo json that contains the filepath, text, and duration. Please note that our `manifest.json` contains the relative path.```{"audio_filepath": "audio/presentpictureofnsw_02_mann_0532.wav", "text": "not to stop more than ten minutes by the way", "duration": 2.6, "text_no_preprocessing": "not to stop more than ten minutes by the way,", "text_normalized": "not to stop more than ten minutes by the way,"}```Let's take 2 samples from the dataset and split it off into a validation set. Then, split all other samples into the training set.
###Code
!cat ./6097_5_mins/manifest.json | tail -n 2 > ./6097_manifest_dev_ns_all_local.json
!cat ./6097_5_mins/manifest.json | head -n -2 > ./6097_manifest_train_dur_5_mins_local.json
!ln -s ./6097_5_mins/audio audio
###Output
_____no_output_____
###Markdown
Let's also download the pretrained checkpoint that we want to finetune from. NeMo will save checkpoints to `~/.cache`, so let's move that to our current directory. *Note: please, check that `home_path` refers to your home folder. Otherwise, change it manually.*
###Code
home_path = !(echo $HOME)
home_path = home_path[0]
print(home_path)
import os
import json
import torch
import IPython.display as ipd
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
from nemo.collections.tts.models import FastPitchModel
FastPitchModel.from_pretrained("tts_en_fastpitch")
from pathlib import Path
nemo_files = [p for p in Path(f"{home_path}/.cache/torch/NeMo/").glob("**/tts_en_fastpitch_align.nemo")]
print(f"Copying {nemo_files[0]} to ./")
Path("./tts_en_fastpitch_align.nemo").write_bytes(nemo_files[0].read_bytes())
###Output
_____no_output_____
###Markdown
To finetune the FastPitch model on the filelists created above, we use the `examples/tts/fastpitch_finetune.py` script to train the models with the `fastpitch_align_v1.05.yaml` configuration.Let's grab those files.
###Code
!wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/fastpitch_finetune.py
!mkdir -p conf \
&& cd conf \
&& wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/conf/fastpitch_align_v1.05.yaml \
&& cd ..
###Output
_____no_output_____
###Markdown
We also need some additional files (see the `FastPitch_MixerTTS_Training.ipynb` tutorial for more details) for training. Let's download them, too.
###Code
# additional files
!mkdir -p tts_dataset_files && cd tts_dataset_files \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/cmudict-0.7b_nv22.01 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/heteronyms-030921 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/nemo_text_processing/text_normalization/en/data/whitelist_lj_speech.tsv \
&& cd ..
###Output
_____no_output_____
###Markdown
Finetuning FastPitch We can now train our model with the following command:**NOTE: This will take about 50 minutes on colab's K80 GPUs.**
###Code
# TODO(oktai15): remove +model.text_tokenizer.add_blank_at=true when we update FastPitch checkpoint
!(python fastpitch_finetune.py --config-name=fastpitch_align_v1.05.yaml \
train_dataset=./6097_manifest_train_dur_5_mins_local.json \
validation_datasets=./6097_manifest_dev_ns_all_local.json \
sup_data_path=./fastpitch_sup_data \
phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 \
heteronyms_path=tts_dataset_files/heteronyms-030921 \
whitelist_path=tts_dataset_files/whitelist_lj_speech.tsv \
exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins \
+init_from_nemo_model=./tts_en_fastpitch_align.nemo \
+trainer.max_steps=1000 ~trainer.max_epochs \
trainer.check_val_every_n_epoch=25 \
model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 \
model.n_speakers=1 model.pitch_mean=121.9 model.pitch_std=23.1 \
model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 \
~model.optim.sched model.optim.name=adam trainer.devices=1 trainer.strategy=null \
+model.text_tokenizer.add_blank_at=true \
)
###Output
_____no_output_____
###Markdown
Let's take a closer look at the training command:* `--config-name=fastpitch_align_v1.05.yaml` * --config-name tells the script what config to use.* `train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json sup_data_path=./fastpitch_sup_data` * We tell the script what manifest files we can to train and eval on and where supplementary data is located or will be calculated and saved during training. * `phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 heteronyms_path=tts_dataset_files/heteronyms-030921whitelist_path=tts_dataset_files/whitelist_lj_speech.tsv ` * We tell the script where `phoneme_dict_path`, `heteronyms-030921` and `whitelist_path` are located. * `exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins` * Where we want to save our log files, tensorboard file, checkpoints, and more.* `+init_from_nemo_model=./tts_en_fastpitch_align.nemo` * We tell the script what checkpoint to finetune from.* `+trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25` * For this experiment, we need to tell the script to train for 1000 training steps/iterations. We need to remove max_epochs using `~trainer.max_epochs`.* `model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24` * Set batch sizes. * `model.n_speakers=1` * The number of speakers in the data. There is only 1 for now, but we will revisit this parameter later in the notebook.* `model.pitch_mean=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512` * For the new speaker, we need to define new pitch hyperparameters for better audio quality. * These parameters work for speaker 6097 from the Hi-Fi TTS dataset. * For speaker 92, we suggest `model.pitch_mean=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512`. * fmin and fmax are hyperparameters to librosa's pyin function. We recommend tweaking these per speaker. * After fmin and fmax are defined, pitch mean and std can be easily extracted.* `model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam` * For fine-tuning, we lower the learning rate. * We use a fixed learning rate of 2e-4. * We switch from the lamb optimizer to the adam optimizer.* `trainer.devices=1 trainer.strategy=null` * For this notebook, we default to 1 gpu which means that we do not need ddp. * If you have the compute resources, feel free to scale this up to the number of free gpus you have available. * Please remove the `trainer.strategy=null` section if you intend on multi-gpu training. Synthesize Samples from Finetuned Checkpoints Once we have finetuned our FastPitch model, we can synthesize the audio samples for given text using the following inference steps. We use a HiFiGAN vocoder trained on LJSpeech.We define some helper functions as well.
###Code
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import FastPitchModel
vocoder = HifiGanModel.from_pretrained("tts_hifigan")
vocoder = vocoder.eval().cuda()
def infer(spec_gen_model, vocoder_model, str_input, speaker=None):
"""
Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.
Args:
spec_gen_model: Spectrogram generator model (FastPitch in our case)
vocoder_model: Vocoder model (HiFiGAN in our case)
str_input: Text input for the synthesis
speaker: Speaker ID
Returns:
spectrogram and waveform of the synthesized audio.
"""
with torch.no_grad():
parsed = spec_gen_model.parse(str_input)
if speaker is not None:
speaker = torch.tensor([speaker]).long().to(device=spec_gen_model.device)
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker=speaker)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
def get_best_ckpt_from_last_run(
base_dir,
new_speaker_id,
duration_mins,
mixing_enabled,
original_speaker_id,
model_name="FastPitch"
):
mixing = "no_mixing" if not mixing_enabled else "mixing"
d = f"{original_speaker_id}_to_{new_speaker_id}_{mixing}_{duration_mins}_mins"
exp_dirs = list([i for i in (Path(base_dir) / d / model_name).iterdir() if i.is_dir()])
last_exp_dir = sorted(exp_dirs)[-1]
last_checkpoint_dir = last_exp_dir / "checkpoints"
last_ckpt = list(last_checkpoint_dir.glob('*-last.ckpt'))
if len(last_ckpt) == 0:
raise ValueError(f"There is no last checkpoint in {last_checkpoint_dir}.")
return str(last_ckpt[0])
###Output
_____no_output_____
###Markdown
Specify the speaker ID, the duration of the dataset in minutes, and the mixing variable to find the relevant checkpoint, then compare the synthesized audio with validation samples of the new speaker.
###Code
new_speaker_id = 6097
duration_mins = 5
mixing = False
original_speaker_id = "ljspeech"
last_ckpt = get_best_ckpt_from_last_run("./", new_speaker_id, duration_mins, mixing, original_speaker_id)
print(last_ckpt)
spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)
spec_model.eval().cuda()
speaker_id = None
if mixing:
speaker_id = 1
num_val = 2
val_records = []
with open(f"{new_speaker_id}_manifest_dev_ns_all_local.json", "r") as f:
for i, line in enumerate(f):
val_records.append(json.loads(line))
if len(val_records) >= num_val:
break
for val_record in val_records:
print("Real validation audio")
ipd.display(ipd.Audio(val_record['audio_filepath'], rate=22050))
print(f"SYNTHESIZED FOR -- Speaker: {new_speaker_id} | Dataset size: {duration_mins} mins | Mixing:{mixing} | Text: {val_record['text']}")
spec, audio = infer(spec_model, vocoder, val_record['text'], speaker=speaker_id)
ipd.display(ipd.Audio(audio, rate=22050))
%matplotlib inline
imshow(spec, origin="lower", aspect="auto")
plt.show()
###Output
_____no_output_____
###Markdown
Finetuning FastPitch for a new speakerIn this tutorial, we will finetune a single speaker FastPitch (with alignment) model on 5 mins of a new speaker's data. We will finetune the model parameters only on new speaker's text and speech pairs.We will download the training data, then generate and run a training command to finetune Fastpitch on 5 mins of data, and synthesize the audio from the trained checkpoint.A final section will describe approaches to improve audio quality past this notebook. License> Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.>> Licensed under the Apache License, Version 2.0 (the "License");> you may not use this file except in compliance with the License.> You may obtain a copy of the License at>> http://www.apache.org/licenses/LICENSE-2.0>> Unless required by applicable law or agreed to in writing, software> distributed under the License is distributed on an "AS IS" BASIS,> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.> See the License for the specific language governing permissions and> limitations under the License.
###Code
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'main'
# # If you're using Google Colab and not running locally, uncomment and run this cell.
# !apt-get install sox libsndfile1 ffmpeg
# !pip install wget unidecode
# !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
###Output
_____no_output_____
###Markdown
Downloading data For our tutorial, we will use a small part of the Hi-Fi Multi-Speaker English TTS (Hi-Fi TTS) dataset. You can read more about the dataset [here](https://arxiv.org/abs/2104.01497). As the target speaker, we will use speaker 6097, and only a 5-minute subset of audio will be used. We additionally resampled the audio to 22,050 Hz.
###Code
!wget https://nemo-public.s3.us-east-2.amazonaws.com/6097_5_mins.tar.gz # Contains 10MB of data
!tar -xzf 6097_5_mins.tar.gz
###Output
_____no_output_____
###Markdown
Looking at `manifest.json`, we see a standard NeMo json that contains the filepath, text, and duration. Please note that our `manifest.json` contains the relative path.```{"audio_filepath": "audio/presentpictureofnsw_02_mann_0532.wav", "text": "not to stop more than ten minutes by the way", "duration": 2.6, "text_no_preprocessing": "not to stop more than ten minutes by the way,", "text_normalized": "not to stop more than ten minutes by the way,"}```Let's take 2 samples from the dataset and split it off into a validation set. Then, split all other samples into the training set.
###Code
!cat ./6097_5_mins/manifest.json | tail -n 2 > ./6097_manifest_dev_ns_all_local.json
!cat ./6097_5_mins/manifest.json | head -n -2 > ./6097_manifest_train_dur_5_mins_local.json
!ln -s ./6097_5_mins/audio audio
###Output
_____no_output_____
###Markdown
Let's also download the pretrained checkpoint that we want to finetune from. NeMo will save checkpoints to `~/.cache`, so let's move that to our current directory. *Note: please, check that `home_path` refers to your home folder. Otherwise, change it manually.*
###Code
home_path = !(echo $HOME)
home_path = home_path[0]
print(home_path)
import os
import json
import torch
import IPython.display as ipd
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
from nemo.collections.tts.models import FastPitchModel
FastPitchModel.from_pretrained("tts_en_fastpitch")
from pathlib import Path
nemo_files = [p for p in Path(f"{home_path}/.cache/torch/NeMo/").glob("**/tts_en_fastpitch_align.nemo")]
print(f"Copying {nemo_files[0]} to ./")
Path("./tts_en_fastpitch_align.nemo").write_bytes(nemo_files[0].read_bytes())
###Output
_____no_output_____
###Markdown
To finetune the FastPitch model on the filelists created above, we use the `examples/tts/fastpitch_finetune.py` script to train the models with the `fastpitch_align_v1.05.yaml` configuration.Let's grab those files.
###Code
!wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/fastpitch_finetune.py
!mkdir -p conf \
&& cd conf \
&& wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/conf/fastpitch_align_v1.05.yaml \
&& cd ..
###Output
_____no_output_____
###Markdown
We also need some additional files (see the `FastPitch_MixerTTS_Training.ipynb` tutorial for more details) for training. Let's download them, too.
###Code
# additional files
!mkdir -p tts_dataset_files && cd tts_dataset_files \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/cmudict-0.7b_nv22.01 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/heteronyms-030921 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/nemo_text_processing/text_normalization/en/data/whitelist_lj_speech.tsv \
&& cd ..
###Output
_____no_output_____
###Markdown
Finetuning FastPitch We can now train our model with the following command:**NOTE: This will take about 50 minutes on colab's K80 GPUs.**
###Code
# TODO(oktai15): remove +model.text_tokenizer.add_blank_at=true when we update FastPitch checkpoint
!(python fastpitch_finetune.py --config-name=fastpitch_align_v1.05.yaml \
train_dataset=./6097_manifest_train_dur_5_mins_local.json \
validation_datasets=./6097_manifest_dev_ns_all_local.json \
sup_data_path=./fastpitch_sup_data \
phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 \
heteronyms_path=tts_dataset_files/heteronyms-030921 \
whitelist_path=tts_dataset_files/whitelist_lj_speech.tsv \
exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins \
+init_from_nemo_model=./tts_en_fastpitch_align.nemo \
+trainer.max_steps=1000 ~trainer.max_epochs \
trainer.check_val_every_n_epoch=25 \
model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 \
model.n_speakers=1 model.pitch_mean=121.9 model.pitch_std=23.1 \
model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 \
~model.optim.sched model.optim.name=adam trainer.devices=1 trainer.strategy=null \
+model.text_tokenizer.add_blank_at=true \
)
###Output
_____no_output_____
###Markdown
Let's take a closer look at the training command:* `--config-name=fastpitch_align_v1.05.yaml` * --config-name tells the script what config to use.* `train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json sup_data_path=./fastpitch_sup_data` * We tell the script what manifest files we can to train and eval on and where supplementary data is located or will be calculated and saved during training. * `phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 heteronyms_path=tts_dataset_files/heteronyms-030921whitelist_path=tts_dataset_files/whitelist_lj_speech.tsv ` * We tell the script where `phoneme_dict_path`, `heteronyms-030921` and `whitelist_path` are located. * `exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins` * Where we want to save our log files, tensorboard file, checkpoints, and more.* `+init_from_nemo_model=./tts_en_fastpitch_align.nemo` * We tell the script what checkpoint to finetune from.* `+trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25` * For this experiment, we need to tell the script to train for 1000 training steps/iterations. We need to remove max_epochs using `~trainer.max_epochs`.* `model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24` * Set batch sizes. * `model.n_speakers=1` * The number of speakers in the data. There is only 1 for now, but we will revisit this parameter later in the notebook.* `model.pitch_mean=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512` * For the new speaker, we need to define new pitch hyperparameters for better audio quality. * These parameters work for speaker 6097 from the Hi-Fi TTS dataset. * For speaker 92, we suggest `model.pitch_mean=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512`. * fmin and fmax are hyperparameters to librosa's pyin function. We recommend tweaking these per speaker. * After fmin and fmax are defined, pitch mean and std can be easily extracted.* `model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam` * For fine-tuning, we lower the learning rate. * We use a fixed learning rate of 2e-4. * We switch from the lamb optimizer to the adam optimizer.* `trainer.devices=1 trainer.strategy=null` * For this notebook, we default to 1 gpu which means that we do not need ddp. * If you have the compute resources, feel free to scale this up to the number of free gpus you have available. * Please remove the `trainer.strategy=null` section if you intend on multi-gpu training. Synthesize Samples from Finetuned Checkpoints Once we have finetuned our FastPitch model, we can synthesize the audio samples for given text using the following inference steps. We use a HiFiGAN vocoder trained on LJSpeech.We define some helper functions as well.
###Code
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import FastPitchModel
vocoder = HifiGanModel.from_pretrained("tts_hifigan")
vocoder = vocoder.eval().cuda()
def infer(spec_gen_model, vocoder_model, str_input, speaker=None):
"""
Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.
Args:
spec_gen_model: Spectrogram generator model (FastPitch in our case)
vocoder_model: Vocoder model (HiFiGAN in our case)
str_input: Text input for the synthesis
speaker: Speaker ID
Returns:
spectrogram and waveform of the synthesized audio.
"""
with torch.no_grad():
parsed = spec_gen_model.parse(str_input)
if speaker is not None:
speaker = torch.tensor([speaker]).long().to(device=spec_gen_model.device)
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker=speaker)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
def get_best_ckpt_from_last_run(
base_dir,
new_speaker_id,
duration_mins,
mixing_enabled,
original_speaker_id,
model_name="FastPitch"
):
mixing = "no_mixing" if not mixing_enabled else "mixing"
d = f"{original_speaker_id}_to_{new_speaker_id}_{mixing}_{duration_mins}_mins"
exp_dirs = list([i for i in (Path(base_dir) / d / model_name).iterdir() if i.is_dir()])
last_exp_dir = sorted(exp_dirs)[-1]
last_checkpoint_dir = last_exp_dir / "checkpoints"
last_ckpt = list(last_checkpoint_dir.glob('*-last.ckpt'))
if len(last_ckpt) == 0:
raise ValueError(f"There is no last checkpoint in {last_checkpoint_dir}.")
return str(last_ckpt[0])
###Output
_____no_output_____
###Markdown
Specify the speaker ID, the duration of the dataset in minutes, and the mixing variable to find the relevant checkpoint, then compare the synthesized audio with validation samples of the new speaker.
###Code
new_speaker_id = 6097
duration_mins = 5
mixing = False
original_speaker_id = "ljspeech"
last_ckpt = get_best_ckpt_from_last_run("./", new_speaker_id, duration_mins, mixing, original_speaker_id)
print(last_ckpt)
spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)
spec_model.eval().cuda()
speaker_id = None
if mixing:
speaker_id = 1
num_val = 2
val_records = []
with open(f"{new_speaker_id}_manifest_dev_ns_all_local.json", "r") as f:
for i, line in enumerate(f):
val_records.append(json.loads(line))
if len(val_records) >= num_val:
break
for val_record in val_records:
print("Real validation audio")
ipd.display(ipd.Audio(val_record['audio_filepath'], rate=22050))
print(f"SYNTHESIZED FOR -- Speaker: {new_speaker_id} | Dataset size: {duration_mins} mins | Mixing:{mixing} | Text: {val_record['text']}")
spec, audio = infer(spec_model, vocoder, val_record['text'], speaker=speaker_id)
ipd.display(ipd.Audio(audio, rate=22050))
%matplotlib inline
imshow(spec, origin="lower", aspect="auto")
plt.show()
###Output
_____no_output_____
###Markdown
Finetuning FastPitch for a new speakerIn this tutorial, we will finetune a single speaker FastPitch (with alignment) model on 5 mins of a new speaker's data. We will finetune the model parameters only on new speaker's text and speech pairs.We will download the training data, then generate and run a training command to finetune Fastpitch on 5 mins of data, and synthesize the audio from the trained checkpoint.A final section will describe approaches to improve audio quality past this notebook. License> Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.>> Licensed under the Apache License, Version 2.0 (the "License");> you may not use this file except in compliance with the License.> You may obtain a copy of the License at>> http://www.apache.org/licenses/LICENSE-2.0>> Unless required by applicable law or agreed to in writing, software> distributed under the License is distributed on an "AS IS" BASIS,> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.> See the License for the specific language governing permissions and> limitations under the License.
###Code
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'main'
# # If you're using Google Colab and not running locally, uncomment and run this cell.
# !apt-get install sox libsndfile1 ffmpeg
# !pip install wget unidecode
# !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[tts]
###Output
_____no_output_____
###Markdown
Downloading Data___ Download and untar the data.The data contains a 5 minute subset of audio from speaker 6097 from the HiFiTTS dataset.
###Code
!wget https://nemo-public.s3.us-east-2.amazonaws.com/6097_5_mins.tar.gz # Contains 10MB of data
!tar -xzf 6097_5_mins.tar.gz
###Output
_____no_output_____
###Markdown
Looking at manifest.json, we see a standard NeMo json that contains the filepath, text, and duration. Please note that manifest.json only contains the relative path.```{"audio_filepath": "audio/presentpictureofnsw_02_mann_0532.wav", "text": "not to stop more than ten minutes by the way", "duration": 2.6, "text_no_preprocessing": "not to stop more than ten minutes by the way,", "text_normalized": "not to stop more than ten minutes by the way,"}```Let's take 2 samples from the dataset and split it off into a validation set. Then, split all other samples into the training set.
###Code
!cat ./6097_5_mins/manifest.json | tail -n 2 > ./6097_manifest_dev_ns_all_local.json
!cat ./6097_5_mins/manifest.json | head -n -2 > ./6097_manifest_train_dur_5_mins_local.json
!ln -s ./6097_5_mins/audio audio
###Output
_____no_output_____
###Markdown
Finetuning FastPitch___ Let's first download the pretrained checkpoint that we want to finetune from. NeMo will save checkpoints to ~/.cache, so let's move that to our current directory
###Code
import os
import torch
import IPython.display as ipd
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
from nemo.collections.tts.models import FastPitchModel
FastPitchModel.from_pretrained("tts_en_fastpitch")
from pathlib import Path
nemo_files = [p for p in Path("/root/.cache/torch/NeMo/").glob("**/tts_en_fastpitch_align.nemo")]
print(f"Copying {nemo_files[0]} to ./")
Path("./tts_en_fastpitch_align.nemo").write_bytes(nemo_files[0].read_bytes())
###Output
_____no_output_____
###Markdown
To finetune the FastPitch model on the filelists created above, we use the `examples/tts/fastpitch2_finetune.py` script to train the models with the `fastpitch_align.yaml` configuration.Let's grab those files.
###Code
!wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/fastpitch2_finetune.py
!mkdir -p conf && cd conf && wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/conf/fastpitch_align.yaml && cd ..
###Output
_____no_output_____
###Markdown
We can now train our model with the following command:NOTE: This will take about 50 minutes on colab's K80 GPUs.`python fastpitch2_finetune.py --config-name=fastpitch_align.yaml train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json +init_from_nemo_model=./tts_en_fastpitch_align.nemo +trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25 prior_folder=./Priors6097 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins model.n_speakers=1 model.pitch_avg=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam`
###Code
!python fastpitch2_finetune.py --config-name=fastpitch_align.yaml train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json +init_from_nemo_model=./tts_en_fastpitch_align.nemo +trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25 prior_folder=./Priors6097 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins model.n_speakers=1 model.pitch_avg=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam
###Output
_____no_output_____
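###Markdown
Once the finetuning cell above has started writing logs (`exp_manager` stores TensorBoard events and checkpoints under the `exp_dir` we passed), we can inspect the loss curves from the notebook; a minimal sketch, assuming the `tensorboard` package is installed.
###Code
# Sketch: point TensorBoard at the experiment directory used in the command above.
%load_ext tensorboard
%tensorboard --logdir ./ljspeech_to_6097_no_mixing_5_mins
###Output
_____no_output_____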
###Markdown
Let's take a closer look at the training command:* `python fastpitch2_finetune.py --config-name=fastpitch_align.yaml` * --config-name tells the script what config to use.* `train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json` * We tell the model which manifest files to train and evaluate on.* `+init_from_nemo_model=./tts_en_fastpitch_align.nemo` * We tell the script what checkpoint to finetune from.* `+trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25` * For this experiment, we need to tell the script to train for 1000 training steps/iterations. We need to remove max_epochs using `~trainer.max_epochs`.* `prior_folder=./Priors6097 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24` * Some dataset parameters. The dataset does some online processing and stores the processing steps to the `prior_folder`.* `exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins` * Where we want to save our log files, tensorboard file, checkpoints, and more* `model.n_speakers=1` * The number of speakers in the data. There is only 1 for now, but we will revisit this parameter later in the notebook* `model.pitch_avg=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512` * For the new speaker, we need to define new pitch hyperparameters for better audio quality. * These parameters work for speaker 6097 from the HiFiTTS dataset * For speaker 92, we suggest `model.pitch_avg=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512` * fmin and fmax are hyperparameters to librosa's pyin function. We recommend tweaking these per speaker. * After fmin and fmax are defined, pitch mean and std can be easily extracted* `model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam` * For fine-tuning, we lower the learning rate * We use a fixed learning rate of 2e-4 * We switch from the lamb optimizer to the adam optimizer Synthesize Samples from Finetuned Checkpoints--- Once we have finetuned our FastPitch model, we can synthesize the audio samples for given text using the following inference steps. We use a HiFiGAN vocoder trained on LJSpeech.We define some helper functions as well.
###Code
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import FastPitchModel
vocoder = HifiGanModel.from_pretrained("tts_hifigan")
vocoder.eval().cuda()
def infer(spec_gen_model, vocoder_model, str_input, speaker = None):
"""
Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.
Arguments:
spec_gen_model -- Instance of FastPitch model
vocoder_model -- Instance of a vocoder model (HiFiGAN in our case)
str_input -- Text input for the synthesis
speaker -- Speaker number (in the case of a multi-speaker model -- in the mixing case)
Returns:
spectrogram, waveform of the synthesized audio.
"""
parser_model = spec_gen_model
with torch.no_grad():
parsed = parser_model.parse(str_input)
if speaker is not None:
speaker = torch.tensor([speaker]).long().cuda()
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker = speaker)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
def get_best_ckpt(experiment_base_dir, new_speaker_id, duration_mins, mixing_enabled, original_speaker_id):
"""
Gives the model checkpoint paths of an experiment we ran.
Arguments:
experiment_base_dir -- Base experiment directory (specified on top of this notebook as exp_base_dir)
new_speaker_id -- Speaker id of new HiFiTTS speaker we finetuned FastPitch on
duration_mins -- total minutes of the new speaker data
mixing_enabled -- True or False depending on whether we want to mix the original speaker data or not
original_speaker_id -- speaker id of the original HiFiTTS speaker
Returns:
List of all checkpoint paths sorted by validation error, Last checkpoint path
"""
if not mixing_enabled:
exp_dir = "{}/{}_to_{}_no_mixing_{}_mins".format(experiment_base_dir, original_speaker_id, new_speaker_id, duration_mins)
else:
exp_dir = "{}/{}_to_{}_mixing_{}_mins".format(experiment_base_dir, original_speaker_id, new_speaker_id, duration_mins)
ckpt_candidates = []
last_ckpt = None
for root, dirs, files in os.walk(exp_dir):
for file in files:
if file.endswith(".ckpt"):
val_error = float(file.split("v_loss=")[1].split("-epoch")[0])
if "last" in file:
last_ckpt = os.path.join(root, file)
ckpt_candidates.append( (val_error, os.path.join(root, file)))
ckpt_candidates.sort()
return ckpt_candidates, last_ckpt
###Output
_____no_output_____
###Markdown
Specify the speaker id, duration mins and mixing variable to find the relevant checkpoint from the exp_base_dir and compare the synthesized audio with validation samples of the new speaker.
###Code
new_speaker_id = 6097
duration_mins = 5
mixing = False
original_speaker_id = "ljspeech"
_ ,last_ckpt = get_best_ckpt("./", new_speaker_id, duration_mins, mixing, original_speaker_id)
print(last_ckpt)
spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)
spec_model.eval().cuda()
_speaker=None
if mixing:
_speaker = 1
num_val = 2
manifest_path = os.path.join("./", "{}_manifest_dev_ns_all_local.json".format(new_speaker_id))
val_records = []
with open(manifest_path, "r") as f:
for i, line in enumerate(f):
val_records.append( json.loads(line) )
if len(val_records) >= num_val:
break
for val_record in val_records:
print ("Real validation audio")
ipd.display(ipd.Audio(val_record['audio_filepath'], rate=22050))
print ("SYNTHESIZED FOR -- Speaker: {} | Dataset size: {} mins | Mixing:{} | Text: {}".format(new_speaker_id, duration_mins, mixing, val_record['text']))
spec, audio = infer(spec_model, vocoder, val_record['text'], speaker = _speaker)
ipd.display(ipd.Audio(audio, rate=22050))
%matplotlib inline
#if spec is not None:
imshow(spec, origin="lower", aspect = "auto")
plt.show()
###Output
_____no_output_____
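###Markdown
If we want to keep the last synthesized sample, we can write it to disk; a minimal sketch (not part of the original tutorial), assuming `soundfile` is available and reusing the `audio` array left over from the loop above.
###Code
# Sketch: save the most recently synthesized waveform as a 22.05 kHz wav file.
import numpy as np
import soundfile as sf
sf.write("synthesized_6097_sample.wav", np.squeeze(audio), 22050)
print("Wrote synthesized_6097_sample.wav")
###Output
_____no_output_____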
###Markdown
Finetuning FastPitch for a new speakerIn this tutorial, we will finetune a single speaker FastPitch (with alignment) model on 5 mins of a new speaker's data. We will finetune the model parameters only on new speaker's text and speech pairs.We will download the training data, then generate and run a training command to finetune Fastpitch on 5 mins of data, and synthesize the audio from the trained checkpoint.A final section will describe approaches to improve audio quality past this notebook. License> Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.>> Licensed under the Apache License, Version 2.0 (the "License");> you may not use this file except in compliance with the License.> You may obtain a copy of the License at>> http://www.apache.org/licenses/LICENSE-2.0>> Unless required by applicable law or agreed to in writing, software> distributed under the License is distributed on an "AS IS" BASIS,> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.> See the License for the specific language governing permissions and> limitations under the License.
###Code
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'main'
# # If you're using Google Colab and not running locally, uncomment and run this cell.
# !apt-get install sox libsndfile1 ffmpeg
# !pip install wget unidecode
# !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[tts]
###Output
_____no_output_____
###Markdown
Downloading Data___ Download and untar the data.The data contains a 5 minute subset of audio from speaker 6097 from the HiFiTTS dataset.
###Code
!wget https://nemo-public.s3.us-east-2.amazonaws.com/6097_5_mins.tar.gz # Contains 10MB of data
!tar -xzf 6097_5_mins.tar.gz
###Output
_____no_output_____
###Markdown
Looking at manifest.json, we see a standard NeMo json that contains the filepath, text, and duration. Please note that manifest.json only contains the relative path.```{"audio_filepath": "audio/presentpictureofnsw_02_mann_0532.wav", "text": "not to stop more than ten minutes by the way", "duration": 2.6, "text_no_preprocessing": "not to stop more than ten minutes by the way,", "text_normalized": "not to stop more than ten minutes by the way,"}```Let's set aside 2 samples from the dataset as a validation set, and put all other samples into the training set.
###Code
!cat ./6097_5_mins/manifest.json | tail -n 2 > ./6097_manifest_dev_ns_all_local.json
!cat ./6097_5_mins/manifest.json | head -n -2 > ./6097_manifest_train_dur_5_mins_local.json
!ln -s ./6097_5_mins/audio audio
###Output
_____no_output_____
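###Markdown
Since the manifests store relative audio paths and rely on the `audio` symlink created above, a quick check that every referenced file resolves can save a confusing error during training; a minimal sketch, not part of the original tutorial.
###Code
# Sketch: verify that every audio_filepath in the training manifest exists on disk.
import json
import os
with open("./6097_manifest_train_dur_5_mins_local.json") as f:
    records = [json.loads(line) for line in f if line.strip()]
missing = [r["audio_filepath"] for r in records if not os.path.exists(r["audio_filepath"])]
print(f"{len(records)} records checked, {len(missing)} missing audio files")
###Output
_____no_output_____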
###Markdown
Finetuning FastPitch___ Let's first download the pretrained checkpoint that we want to finetune from. NeMo will save checkpoints to ~/.cache, so let's move that to our current directory. *Note: please check that `home_path` refers to your home folder; otherwise, change it manually.*
###Code
home_path = !(echo $HOME)
home_path = home_path[0]
print(home_path)
import os
import json
import torch
import IPython.display as ipd
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
from nemo.collections.tts.models import FastPitchModel
FastPitchModel.from_pretrained("tts_en_fastpitch")
from pathlib import Path
nemo_files = [p for p in Path(f"{home_path}/.cache/torch/NeMo/").glob("**/tts_en_fastpitch_align.nemo")]
print(f"Copying {nemo_files[0]} to ./")
Path("./tts_en_fastpitch_align.nemo").write_bytes(nemo_files[0].read_bytes())
###Output
_____no_output_____
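###Markdown
For reference, NeMo can list the pretrained checkpoints that `from_pretrained` knows how to download, which is how names such as `tts_en_fastpitch` are discovered; a minimal sketch, assuming `list_available_models` behaves as in recent NeMo releases.
###Code
# Sketch: show which pretrained FastPitch checkpoints can be pulled with from_pretrained().
for model_info in FastPitchModel.list_available_models():
    print(model_info.pretrained_model_name)
###Output
_____no_output_____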
###Markdown
To finetune the FastPitch model on the filelists created above, we use the `examples/tts/fastpitch_finetune.py` script to train the model with the `fastpitch_align.yaml` configuration.Let's grab those files.
###Code
!wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/fastpitch_finetune.py
!mkdir -p conf && cd conf && wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/conf/fastpitch_align.yaml && cd ..
###Output
_____no_output_____
###Markdown
We can now train our model with the following command:**NOTE: This will take about 50 minutes on colab's K80 GPUs.**`python fastpitch_finetune.py --config-name=fastpitch_align.yaml train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json +init_from_nemo_model=./tts_en_fastpitch_align.nemo +trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25 prior_folder=./Priors6097 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins model.n_speakers=1 model.pitch_avg=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam trainer.devices=1 trainer.strategy=null`
###Code
!(python fastpitch_finetune.py --config-name=fastpitch_align.yaml \
train_dataset=./6097_manifest_train_dur_5_mins_local.json \
validation_datasets=./6097_manifest_dev_ns_all_local.json \
+init_from_nemo_model=./tts_en_fastpitch_align.nemo \
+trainer.max_steps=1000 ~trainer.max_epochs \
trainer.check_val_every_n_epoch=25 \
prior_folder=./Priors6097 \
model.train_ds.dataloader_params.batch_size=24 \
model.validation_ds.dataloader_params.batch_size=24 \
exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins \
model.n_speakers=1 model.pitch_avg=121.9 model.pitch_std=23.1 \
model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 \
~model.optim.sched model.optim.name=adam trainer.devices=1 trainer.strategy=null \
)
###Output
_____no_output_____
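###Markdown
The parameter walkthrough in the next cell notes that, once `fmin` and `fmax` are chosen, the pitch mean and std can be extracted from the speaker's data. Here is a minimal sketch of how that could be done with librosa's `pyin` (librosa is assumed to be available; the values 121.9/23.1 used above were computed offline, so small numerical differences are expected).
###Code
# Sketch: estimate per-speaker pitch statistics (mean/std) with librosa's pyin.
import json
import librosa
import numpy as np
voiced_f0 = []
with open("./6097_manifest_train_dur_5_mins_local.json") as f:
    for line in f:
        if not line.strip():
            continue
        record = json.loads(line)
        wav, sr = librosa.load(record["audio_filepath"], sr=None)
        f0, _, _ = librosa.pyin(wav, fmin=30, fmax=512, sr=sr)
        voiced_f0.append(f0[~np.isnan(f0)])  # keep voiced frames only
voiced_f0 = np.concatenate(voiced_f0)
print(f"pitch_avg={voiced_f0.mean():.1f}, pitch_std={voiced_f0.std():.1f}")
###Output
_____no_output_____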
###Markdown
Let's take a closer look at the training command:* `python fastpitch_finetune.py --config-name=fastpitch_align.yaml` * --config-name tells the script what config to use.* `train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json` * We tell the model which manifest files to train and evaluate on.* `+init_from_nemo_model=./tts_en_fastpitch_align.nemo` * We tell the script what checkpoint to finetune from.* `+trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25` * For this experiment, we need to tell the script to train for 1000 training steps/iterations. We need to remove max_epochs using `~trainer.max_epochs`.* `prior_folder=./Priors6097 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24` * Some dataset parameters. The dataset does some online processing and stores the processing steps to the `prior_folder`.* `exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins` * Where we want to save our log files, tensorboard file, checkpoints, and more* `model.n_speakers=1` * The number of speakers in the data. There is only 1 for now, but we will revisit this parameter later in the notebook* `model.pitch_avg=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512` * For the new speaker, we need to define new pitch hyperparameters for better audio quality. * These parameters work for speaker 6097 from the HiFiTTS dataset * For speaker 92, we suggest `model.pitch_avg=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512` * fmin and fmax are hyperparameters to librosa's pyin function. We recommend tweaking these per speaker. * After fmin and fmax are defined, pitch mean and std can be easily extracted* `model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam` * For fine-tuning, we lower the learning rate * We use a fixed learning rate of 2e-4 * We switch from the lamb optimizer to the adam optimizer* `trainer.devices=1 trainer.strategy=null` * For this notebook, we default to 1 gpu which means that we do not need ddp * If you have the compute resources, feel free to scale this up to the number of free gpus you have available * Please remove the `trainer.strategy=null` section if you intend on multi-gpu training Synthesize Samples from Finetuned Checkpoints--- Once we have finetuned our FastPitch model, we can synthesize the audio samples for given text using the following inference steps. We use a HiFiGAN vocoder trained on LJSpeech.We define some helper functions as well.
###Code
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import FastPitchModel
vocoder = HifiGanModel.from_pretrained("tts_hifigan")
vocoder = vocoder.eval().cuda()
def infer(spec_gen_model, vocoder_model, str_input, speaker = None):
"""
Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.
Arguments:
spec_gen_model -- Instance of FastPitch model
vocoder_model -- Instance of a vocoder model (HiFiGAN in our case)
str_input -- Text input for the synthesis
speaker -- Speaker number (in the case of a multi-speaker model -- in the mixing case)
Returns:
spectrogram, waveform of the synthesized audio.
"""
parser_model = spec_gen_model
with torch.no_grad():
parsed = parser_model.parse(str_input)
if speaker is not None:
speaker = torch.tensor([speaker]).long().cuda()
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker = speaker)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
def get_best_ckpt(experiment_base_dir, new_speaker_id, duration_mins, mixing_enabled, original_speaker_id):
"""
Gives the model checkpoint paths of an experiment we ran.
Arguments:
experiment_base_dir -- Base experiment directory (specified on top of this notebook as exp_base_dir)
new_speaker_id -- Speaker id of new HiFiTTS speaker we finetuned FastPitch on
duration_mins -- total minutes of the new speaker data
mixing_enabled -- True or False depending on whether we want to mix the original speaker data or not
original_speaker_id -- speaker id of the original HiFiTTS speaker
Returns:
List of all checkpoint paths sorted by validation error, Last checkpoint path
"""
if not mixing_enabled:
exp_dir = "{}/{}_to_{}_no_mixing_{}_mins".format(experiment_base_dir, original_speaker_id, new_speaker_id, duration_mins)
else:
exp_dir = "{}/{}_to_{}_mixing_{}_mins".format(experiment_base_dir, original_speaker_id, new_speaker_id, duration_mins)
ckpt_candidates = []
last_ckpt = None
for root, dirs, files in os.walk(exp_dir):
for file in files:
if file.endswith(".ckpt"):
val_error = float(file.split("v_loss=")[1].split("-epoch")[0])
if "last" in file:
last_ckpt = os.path.join(root, file)
ckpt_candidates.append( (val_error, os.path.join(root, file)))
ckpt_candidates.sort()
return ckpt_candidates, last_ckpt
###Output
_____no_output_____
###Markdown
Specify the speaker id, duration mins and mixing variable to find the relevant checkpoint from the exp_base_dir and compare the synthesized audio with validation samples of the new speaker.
###Code
new_speaker_id = 6097
duration_mins = 5
mixing = False
original_speaker_id = "ljspeech"
_ ,last_ckpt = get_best_ckpt("./", new_speaker_id, duration_mins, mixing, original_speaker_id)
print(last_ckpt)
spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)
spec_model.eval().cuda()
_speaker=None
if mixing:
_speaker = 1
num_val = 2
manifest_path = os.path.join("./", "{}_manifest_dev_ns_all_local.json".format(new_speaker_id))
val_records = []
with open(manifest_path, "r") as f:
for i, line in enumerate(f):
val_records.append( json.loads(line) )
if len(val_records) >= num_val:
break
for val_record in val_records:
print ("Real validation audio")
ipd.display(ipd.Audio(val_record['audio_filepath'], rate=22050))
print ("SYNTHESIZED FOR -- Speaker: {} | Dataset size: {} mins | Mixing:{} | Text: {}".format(new_speaker_id, duration_mins, mixing, val_record['text']))
spec, audio = infer(spec_model, vocoder, val_record['text'], speaker = _speaker)
ipd.display(ipd.Audio(audio, rate=22050))
%matplotlib inline
#if spec is not None:
imshow(spec, origin="lower", aspect = "auto")
plt.show()
###Output
_____no_output_____
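###Markdown
The same models can synthesize arbitrary text, not just the validation transcripts; a short usage example reusing the `spec_model` and `vocoder` loaded above (the sentence itself is just an illustrative example).
###Code
# Usage example: synthesize a custom sentence with the finetuned FastPitch + HiFiGAN.
custom_text = "This voice was finetuned on only five minutes of data."
spec, audio = infer(spec_model, vocoder, custom_text, speaker=_speaker)
ipd.display(ipd.Audio(audio, rate=22050))
###Output
_____no_output_____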
###Markdown
Finetuning FastPitch for a new speaker In this tutorial, we will finetune a single speaker FastPitch (with alignment) model on a limited amount of a new speaker's data. We cover two finetuning techniques: 1. We finetune the model parameters only on new speaker's text and speech pairs; 2. We add a learnable speaker embedding layer to the model, and finetune on a mixture of original speaker's and new speaker's data.We will first prepare filelists containing the audiopaths and text of the samples on which we wish to finetune the model, then generate and run a training command to finetune FastPitch on 5 mins of data, and finally synthesize the audio from the trained checkpoint. Creating filelists for training We will first create filelists of audio on which we wish to finetune the FastPitch model. We will create two kinds of filelists, one which contains only the audio files of the new speaker and one which contains the mixed audio files of the new speaker and the speaker used for training the pre-trained FastPitch checkpoint. WARNING: This notebook requires downloading the HiFiTTS dataset, which is 41GB. We plan on reducing the required download size.
###Code
import random
import os
import json
import torch
import IPython.display as ipd
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
data_dir = <ADD_PATH_TO_DIRECTORY_CONTAINING_HIFIGAN_DATASET> # Download dataset from https://www.openslr.org/109/. Specify path to Hi_Fi_TTS_v_0
filelist_dir = <ADD_PATH_TO_DIRECTORY_IN_WHICH_WE_WISH_TO_SAVE_FILELISTS> # will be created if it does not exist
exp_base_dir = <ADD_PATH_TO_BASE_EXPERIMENT_DIRECTORY_FOR_CHECKPOINTS_AND_LOGS> # will be created if it does not exist
def make_sub_file_list(speaker_id, clean_other, split, num_samples, total_duration_mins, seed=42):
"""
Creates a subset of training data for a HiFiTTS speaker. Specify either the num_samples or total_duration_mins
Saves the filelist in the filelist_dir. split is either "train" or "dev"
Arguments:
speaker_id -- speaker id of the new HiFiTTS speaker
clean_other -- "clean" or "other" depending on type of data of new HiFiTTS speaker
split -- "train" or "dev"
num_samples -- Number samples of new speaker (set None if specifying total_duration_mins)
total_duration_mins -- Total duration of new speaker's data (set None if specifying num_samples)
"""
file_list_name = "{}_manifest_{}_{}.json".format(speaker_id, clean_other, split)
with open(os.path.join(data_dir, file_list_name), 'r') as f:
all_records = [json.loads(l) for l in f.read().split("\n") if len(l) > 0]
for r in all_records:
r['audio_filepath'] = r['audio_filepath'][r['audio_filepath'].find("wav/"):]
random.seed(seed)
random.shuffle(all_records)
if num_samples is not None and total_duration_mins is None:
sub_records = all_records[:num_samples]
fname_extension = "ns_{}".format(num_samples)
elif num_samples is None and total_duration_mins is not None:
sub_record_duration = 0.0
sub_records = []
for r in all_records:
sub_record_duration += r['duration']
if sub_record_duration > total_duration_mins*60.0:
print ("Duration reached {} mins using {} records".format(total_duration_mins, len(sub_records)))
break
sub_records.append(r)
fname_extension = "dur_{}_mins".format( int(round(total_duration_mins )))
elif num_samples is None and total_duration_mins is None:
sub_records = all_records
fname_extension = "ns_all"
else:
raise NotImplementedError()
print ("num sub records", len(sub_records))
if not os.path.exists(filelist_dir):
os.makedirs(filelist_dir)
target_fp = os.path.join(filelist_dir, "{}_mainifest_{}_{}_local.json".format(speaker_id, split, fname_extension))
with open(target_fp, 'w') as f:
for record in json.loads(json.dumps(sub_records)):
record['audio_filepath'] = record['audio_filepath'][record['audio_filepath'].find("wav/"):]
record['audio_filepath'] = os.path.join(data_dir, record['audio_filepath'])
f.write(json.dumps(record) + "\n")
def mix_file_list(speaker_id, clean_other, split, num_samples, total_duration_mins, original_speaker_id, original_clean_other, n_orig=None, seed=42):
"""
Creates a mixed dataset of new and original speaker. num_samples or total_duration_mins specifies the amount
of new speaker data to be used and n_orig specifies the number of original speaker samples. Creates a balanced
dataset with alternating new and old speaker samples. Saves the filelist in the filelist_dir.
Arguments:
speaker_id -- speaker id of the new HiFiTTS speaker
clean_other -- "clean" or "other" depending on type of data of new HiFiTTS speaker
split -- "train" or "dev"
num_samples -- Number samples of new speaker (set None if specifying total_duration_mins)
total_duration_mins -- Total duration of new speaker's data (set None if specifying num_samples)
original_speaker_id -- speaker id of the original HiFiTTS speaker (on which FastPitch was trained)
original_clean_other -- "clean" or "other" depending on type of data of new HiFiTTS speaker
n_orig -- Number of samples of old speaker to be mixed with new speaker
"""
file_list_name = "{}_manifest_{}_{}.json".format(speaker_id, clean_other, split)
with open(os.path.join(data_dir, file_list_name), 'r') as f:
all_records = [json.loads(l) for l in f.read().split("\n") if len(l) > 0]
for r in all_records:
r['audio_filepath'] = r['audio_filepath'][r['audio_filepath'].find("wav/"):]
original_file_list_name = "{}_manifest_{}_{}.json".format(original_speaker_id, original_clean_other, "train")
with open(os.path.join(data_dir, original_file_list_name), 'r') as f:
original_all_records = [json.loads(l) for l in f.read().split("\n") if len(l) > 0]
for r in original_all_records:
r['audio_filepath'] = r['audio_filepath'][r['audio_filepath'].find("wav/"):]
random.seed(seed)
if n_orig is not None:
random.shuffle(original_all_records)
original_all_records = original_all_records[:n_orig]
random.seed(seed)
random.shuffle(all_records)
if num_samples is not None and total_duration_mins is None:
sub_records = all_records[:num_samples]
fname_extension = "ns_{}".format(num_samples)
elif num_samples is None and total_duration_mins is not None:
sub_record_duration = 0.0
sub_records = []
for r in all_records:
sub_record_duration += r['duration']
if sub_record_duration > total_duration_mins * 60.0:
print ("Duration reached {} mins using {} records".format(total_duration_mins, len(sub_records)))
break
sub_records.append(r)
fname_extension = "dur_{}_mins".format( int(round(total_duration_mins)))
elif num_samples is None and total_duration_mins is None:
sub_records = all_records
fname_extension = "ns_all"
else:
raise NotImplementedError()
print(len(original_all_records))
if not os.path.exists(filelist_dir):
os.makedirs(filelist_dir)
target_fp = os.path.join(filelist_dir, "{}_mainifest_{}_{}_local_mix_{}.json".format(speaker_id, split, fname_extension, original_speaker_id))
with open(target_fp, 'w') as f:
for ridx, original_record in enumerate(original_all_records):
original_record['audio_filepath'] = original_record['audio_filepath'][original_record['audio_filepath'].find("wav/"):]
original_record['audio_filepath'] = os.path.join(data_dir, original_record['audio_filepath'])
new_speaker_record = sub_records[ridx % len(sub_records)]
new_speaker_record['audio_filepath'] = new_speaker_record['audio_filepath'][new_speaker_record['audio_filepath'].find("wav/"):]
new_speaker_record['audio_filepath'] = os.path.join(data_dir, new_speaker_record['audio_filepath'])
new_speaker_record['speaker'] = 1
original_record['speaker'] = 0
f.write(json.dumps(original_record) + "\n")
f.write(json.dumps(new_speaker_record) + "\n")
make_sub_file_list(92, "clean", "train", None, 5)
mix_file_list(92, "clean", "train", None, 5, 8051, "other", n_orig=5000)
make_sub_file_list(92, "clean", "dev", None, None)
###Output
_____no_output_____
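###Markdown
To double-check the mixed filelist, we can count how many records were written for each speaker label (0 for the original speaker, 1 for the new one); a minimal sketch, assuming the `mix_file_list` call above completed and using the filename pattern that function writes.
###Code
# Sketch: count records per speaker label in the mixed training filelist.
from collections import Counter
mixed_path = os.path.join(filelist_dir, "92_mainifest_train_dur_5_mins_local_mix_8051.json")
with open(mixed_path) as f:
    speaker_counts = Counter(json.loads(line)["speaker"] for line in f if line.strip())
print(speaker_counts)
###Output
_____no_output_____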
###Markdown
Finetuning the model on filelists To finetune the FastPitch model on the filelists created above, we use the `examples/tts/fastpitch2_finetune.py` script to train the model with the `fastpitch_align_44100.yaml` configuration. This configuration file is defined for the 44100 Hz audio of the HiFiTTS dataset. The function `generate_training_command` in this notebook can be used to generate a training command for a given speaker and finetuning technique.
###Code
# pitch statistics of the new speakers
# These can be computed from the pitch contours extracted using librosa yin
# Finetuning can still work without these, but we get better results using speaker specific pitch stats
pitch_stats = {
92 : {
'mean' : 214.5, # female speaker
'std' : 30.9,
'fmin' : 80,
'fmax' : 512
},
6097 : {
'mean' : 121.9, # male speaker
'std' : 23.1,
'fmin' : 30,
'fmax' : 512
}
}
def generate_training_command(new_speaker_id, duration_mins, mixing_enabled, original_speaker_id, ckpt, use_new_pitch_stats=False):
"""
Generates the training command string to be run from the NeMo/ directory. Assumes we have created the finetuning filelists
using the instructions given above.
Arguments:
new_speaker_id -- speaker id of the new HiFiTTS speaker
duration_mins -- total minutes of the new speaker data (same as that used for creating the filelists)
mixing_enabled -- True or False depending on whether we want to mix the original speaker data or not
original_speaker_id -- speaker id of the original HiFiTTS speaker
use_new_pitch_stats -- whether to use pitch_stats dictionary given above or not
ckpt: Path to pretrained FastPitch checkpoint
Returns:
Training command string
"""
def _find_epochs(duration_mins, mixing_enabled, n_orig=None):
# estimated num of epochs
if duration_mins == 5:
epochs = 1000
elif duration_mins == 30:
epochs = 300
elif duration_mins == 60:
epochs = 150
if mixing_enabled:
if duration_mins == 5:
epochs = epochs/50 + 1
elif duration_mins == 30:
epochs = epochs/10 + 1
elif duration_mins == 60:
epochs = epochs/5 + 1
return int(epochs)
if ckpt.endswith(".nemo"):
ckpt_arg_name = "init_from_nemo_model"
else:
ckpt_arg_name = "init_from_ptl_ckpt"
if not mixing_enabled:
train_dataset = "{}_mainifest_train_dur_{}_mins_local.json".format(new_speaker_id, duration_mins)
val_dataset = "{}_mainifest_dev_ns_all_local.json".format(new_speaker_id)
prior_folder = os.path.join(data_dir, "Priors{}".format(new_speaker_id))
exp_dir = "{}_to_{}_no_mixing_{}_mins".format(original_speaker_id, new_speaker_id, duration_mins)
n_speakers = 1
else:
train_dataset = "{}_mainifest_train_dur_{}_mins_local_mix_{}.json".format(new_speaker_id, duration_mins, original_speaker_id)
val_dataset = "{}_mainifest_dev_ns_all_local.json".format(new_speaker_id)
prior_folder = os.path.join(data_dir, "Priors_{}_mix_{}".format(new_speaker_id, original_speaker_id))
exp_dir = "{}_to_{}_mixing_{}_mins".format(original_speaker_id, new_speaker_id, duration_mins)
n_speakers = 2
train_dataset = os.path.join(filelist_dir, train_dataset)
val_dataset = os.path.join(filelist_dir, val_dataset)
exp_dir = os.path.join(exp_base_dir, exp_dir)
max_epochs = _find_epochs(duration_mins, mixing_enabled, n_orig=None)
config_name = "fastpitch_align_44100.yaml"
training_command = "python examples/tts/fastpitch2_finetune.py --config-name={} train_dataset={} validation_datasets={} +{}={} trainer.max_epochs={} trainer.check_val_every_n_epoch=1 prior_folder={} model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 exp_manager.exp_dir={} model.n_speakers={}".format(
config_name, train_dataset, val_dataset, ckpt_arg_name, ckpt, max_epochs, prior_folder, exp_dir, n_speakers)
if use_new_pitch_stats:
training_command += " model.pitch_avg={} model.pitch_std={} model.pitch_fmin={} model.pitch_fmax={}".format(
pitch_stats[new_speaker_id]['mean'],
pitch_stats[new_speaker_id]['std'],
pitch_stats[new_speaker_id]['fmin'],
pitch_stats[new_speaker_id]['fmax']
)
training_command += " model.optim.lr=2e-4 ~model.optim.sched"
return training_command
new_speaker_id = 92
duration_mins = 5
mixing = False
original_speaker_id = 8051
ckpt_path = <PATH_TO_PRETRAINED_FASTPITCH_CHECKPOINT>
print(generate_training_command(new_speaker_id, duration_mins, mixing, original_speaker_id, ckpt_path, True))
###Output
_____no_output_____
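###Markdown
For the second technique (mixing in original-speaker data), the same helper can generate the corresponding command; a short usage example that simply flips `mixing` to True and reuses the variables defined above.
###Code
# Usage example: generate the command for the mixed (2-speaker) finetuning run.
print(generate_training_command(new_speaker_id, duration_mins, True, original_speaker_id, ckpt_path, True))
###Output
_____no_output_____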
###Markdown
The generated command should look something like this. We can of course tweak things like epochs/learning rate if we like`python examples/tts/fastpitch2_finetune.py --config-name=fastpitch_align_44100 train_dataset=filelists/92_mainifest_train_dur_5_mins_local.json validation_datasets=filelists/92_mainifest_dev_ns_all_local.json +init_from_nemo_model=PreTrainedModels/FastPitch.nemo trainer.max_epochs=1000 trainer.check_val_every_n_epoch=1 prior_folder=Hi_Fi_TTS_v_0/Priors92 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 exp_manager.exp_dir=inetuningDemo/8051_to_92_no_mixing_5_mins model.n_speakers=1 model.pitch_avg=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512 model.optim.lr=2e-4 ~model.optim.sched` ^ Run the above command from the terminal in the `NeMo/` directory to start finetuning a model. Synthesize samples from finetuned checkpoints Once we have finetuned our FastPitch model, we can synthesize audio samples for a given text using the following inference steps. We use a HiFiGAN vocoder trained on multiple speakers, get the trained checkpoint path for our trained model and synthesize audio for a given text as follows.
###Code
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import FastPitchModel
hifigan_ckpt_path = <PATH_TO_PRETRAINED_HIFIGAN_CHECKPOINT>
vocoder = HifiGanModel.load_from_checkpoint(hifigan_ckpt_path)
vocoder.eval().cuda()
def infer(spec_gen_model, vocoder_model, str_input, speaker = None):
"""
Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.
Arguments:
spec_gen_model -- Instance of FastPitch model
vocoder_model -- Instance of a vocoder model (HiFiGAN in our case)
str_input -- Text input for the synthesis
speaker -- Speaker number (in the case of a multi-speaker model -- in the mixing case)
Returns:
spectrogram, waveform of the synthesized audio.
"""
parser_model = spec_gen_model
with torch.no_grad():
parsed = parser_model.parse(str_input)
if speaker is not None:
speaker = torch.tensor([speaker]).long().cuda()
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker = speaker)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
def get_best_ckpt(experiment_base_dir, new_speaker_id, duration_mins, mixing_enabled, original_speaker_id):
"""
Gives the model checkpoint paths of an experiment we ran.
Arguments:
experiment_base_dir -- Base experiment directory (specified on top of this notebook as exp_base_dir)
new_speaker_id -- Speaker id of new HiFiTTS speaker we finetuned FastPitch on
duration_mins -- total minutes of the new speaker data
mixing_enabled -- True or False depending on whether we want to mix the original speaker data or not
original_speaker_id -- speaker id of the original HiFiTTS speaker
Returns:
List of all checkpoint paths sorted by validation error, Last checkpoint path
"""
if not mixing_enabled:
exp_dir = "{}/{}_to_{}_no_mixing_{}_mins".format(experiment_base_dir, original_speaker_id, new_speaker_id, duration_mins)
else:
exp_dir = "{}/{}_to_{}_mixing_{}_mins".format(experiment_base_dir, original_speaker_id, new_speaker_id, duration_mins)
ckpt_candidates = []
last_ckpt = None
for root, dirs, files in os.walk(exp_dir):
for file in files:
if file.endswith(".ckpt"):
val_error = float(file.split("v_loss=")[1].split("-epoch")[0])
if "last" in file:
last_ckpt = os.path.join(root, file)
ckpt_candidates.append( (val_error, os.path.join(root, file)))
ckpt_candidates.sort()
return ckpt_candidates, last_ckpt
###Output
_____no_output_____
###Markdown
Specify the speaker id, duration mins and mixing variable to find the relevant checkpoint from the exp_base_dir and compare the synthesized audio with validation samples of the new speaker.
###Code
new_speaker_id = 92
duration_mins = 5
mixing = False
original_speaker_id = 8051
_ ,last_ckpt = get_best_ckpt(exp_base_dir, new_speaker_id, duration_mins, mixing, original_speaker_id)
print(last_ckpt)
spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)
spec_model.eval().cuda()
_speaker=None
if mixing:
_speaker = 1
num_val = 2
manifest_path = os.path.join(filelist_dir, "{}_mainifest_dev_ns_all_local.json".format(new_speaker_id))
val_records = []
with open(manifest_path, "r") as f:
for i, line in enumerate(f):
val_records.append( json.loads(line) )
if len(val_records) >= num_val:
break
for val_record in val_records:
print ("Real validation audio")
ipd.display(ipd.Audio(val_record['audio_filepath'], rate=44100))
print ("SYNTHESIZED FOR -- Speaker: {} | Dataset size: {} mins | Mixing:{} | Text: {}".format(new_speaker_id, duration_mins, mixing, val_record['text']))
spec, audio = infer(spec_model, vocoder, val_record['text'], speaker = _speaker)
ipd.display(ipd.Audio(audio, rate=44100))
%matplotlib inline
#if spec is not None:
imshow(spec, origin="lower", aspect = "auto")
plt.show()
###Output
_____no_output_____
###Markdown
Finetuning FastPitch for a new speaker In this tutorial, we will finetune a single-speaker FastPitch (with alignment) model on 5 mins of a new speaker's data. We will finetune the model parameters only on the new speaker's text and speech pairs. We will download the training data, then generate and run a training command to finetune FastPitch on 5 mins of data, and synthesize the audio from the trained checkpoint. A final section will describe approaches to improve audio quality past this notebook. License> Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.>> Licensed under the Apache License, Version 2.0 (the "License");> you may not use this file except in compliance with the License.> You may obtain a copy of the License at>> http://www.apache.org/licenses/LICENSE-2.0>> Unless required by applicable law or agreed to in writing, software> distributed under the License is distributed on an "AS IS" BASIS,> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.> See the License for the specific language governing permissions and> limitations under the License.
###Code
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'main'
# # If you're using Google Colab and not running locally, uncomment and run this cell.
# !apt-get install sox libsndfile1 ffmpeg
# !pip install wget unidecode
# !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
###Output
_____no_output_____
###Markdown
Downloading data For our tutorial, we will use a small part of the Hi-Fi Multi-Speaker English TTS (Hi-Fi TTS) dataset. You can read more about the dataset [here](https://arxiv.org/abs/2104.01497). As the target speaker, we will use the speaker whose id is 6097, and only a 5-minute subset of audio will be used. We additionally resampled the audio to 22050 Hz.
###Code
!wget https://nemo-public.s3.us-east-2.amazonaws.com/6097_5_mins.tar.gz # Contains 10MB of data
!tar -xzf 6097_5_mins.tar.gz
###Output
_____no_output_____
###Markdown
Looking at `manifest.json`, we see a standard NeMo json that contains the filepath, text, and duration. Please note that our `manifest.json` contains the relative path.```{"audio_filepath": "audio/presentpictureofnsw_02_mann_0532.wav", "text": "not to stop more than ten minutes by the way", "duration": 2.6, "text_no_preprocessing": "not to stop more than ten minutes by the way,", "text_normalized": "not to stop more than ten minutes by the way,"}```Let's take 2 samples from the dataset and split it off into a validation set. Then, split all other samples into the training set.
###Code
!cat ./6097_5_mins/manifest.json | tail -n 2 > ./6097_manifest_dev_ns_all_local.json
!cat ./6097_5_mins/manifest.json | head -n -2 > ./6097_manifest_train_dur_5_mins_local.json
!ln -s ./6097_5_mins/audio audio
###Output
_____no_output_____
###Markdown
Let's also download the pretrained checkpoint that we want to finetune from. NeMo will save checkpoints to `~/.cache`, so let's move that to our current directory. *Note: please check that `home_path` refers to your home folder. Otherwise, change it manually.*
###Code
home_path = !(echo $HOME)
home_path = home_path[0]
print(home_path)
import os
import json
import torch
import IPython.display as ipd
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
from nemo.collections.tts.models import FastPitchModel
FastPitchModel.from_pretrained("tts_en_fastpitch")
from pathlib import Path
nemo_files = [p for p in Path(f"{home_path}/.cache/torch/NeMo/").glob("**/tts_en_fastpitch_align.nemo")]
print(f"Copying {nemo_files[0]} to ./")
Path("./tts_en_fastpitch_align.nemo").write_bytes(nemo_files[0].read_bytes())
###Output
_____no_output_____
###Markdown
To finetune the FastPitch model on the filelists created above, we use the `examples/tts/fastpitch_finetune.py` script to train the model with the `fastpitch_align_v1.05.yaml` configuration. Let's grab those files.
###Code
!wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/fastpitch_finetune.py
!mkdir -p conf \
&& cd conf \
&& wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/conf/fastpitch_align_v1.05.yaml \
&& cd ..
###Output
_____no_output_____
###Markdown
We also need some additional files for training (see the `FastPitch_MixerTTS_Training.ipynb` tutorial for more details). Let's download them too.
###Code
# additional files
!mkdir -p tts_dataset_files && cd tts_dataset_files \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/cmudict-0.7b_nv22.01 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/heteronyms-030921 \
&& wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/nemo_text_processing/text_normalization/en/data/whitelist_lj_speech.tsv \
&& cd ..
###Output
_____no_output_____
###Markdown
Finetuning FastPitch We can now train our model with the following command:**NOTE: This will take about 50 minutes on colab's K80 GPUs.**
###Code
# TODO(oktai15): remove +model.text_tokenizer.add_blank_at=true when we update FastPitch checkpoint
!(python fastpitch_finetune.py --config-name=fastpitch_align_v1.05.yaml \
train_dataset=./6097_manifest_train_dur_5_mins_local.json \
validation_datasets=./6097_manifest_dev_ns_all_local.json \
sup_data_path=./fastpitch_sup_data \
phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 \
heteronyms_path=tts_dataset_files/heteronyms-030921 \
whitelist_path=tts_dataset_files/whitelist_lj_speech.tsv \
exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins \
+init_from_nemo_model=./tts_en_fastpitch_align.nemo \
+trainer.max_steps=1000 ~trainer.max_epochs \
trainer.check_val_every_n_epoch=25 \
model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 \
model.n_speakers=1 model.pitch_mean=121.9 model.pitch_std=23.1 \
model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 \
~model.optim.sched model.optim.name=adam trainer.devices=1 trainer.strategy=null \
+model.text_tokenizer.add_blank_at=true \
)
###Output
_____no_output_____
###Markdown
Let's take a closer look at the training command:

* `--config-name=fastpitch_align_v1.05.yaml`
  * --config-name tells the script what config to use.
* `train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json sup_data_path=./fastpitch_sup_data`
  * We tell the script what manifest files we want to train and eval on, and where supplementary data is located or will be calculated and saved during training.
* `phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 heteronyms_path=tts_dataset_files/heteronyms-030921 whitelist_path=tts_dataset_files/whitelist_lj_speech.tsv`
  * We tell the script where the `phoneme_dict_path`, `heteronyms_path` and `whitelist_path` files are located.
* `exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins`
  * Where we want to save our log files, tensorboard file, checkpoints, and more.
* `+init_from_nemo_model=./tts_en_fastpitch_align.nemo`
  * We tell the script what checkpoint to finetune from.
* `+trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25`
  * For this experiment, we need to tell the script to train for 1000 training steps/iterations. We need to remove max_epochs using `~trainer.max_epochs`.
* `model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24`
  * Set batch sizes.
* `model.n_speakers=1`
  * The number of speakers in the data. There is only 1 for now, but we will revisit this parameter later in the notebook.
* `model.pitch_mean=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512`
  * For the new speaker, we need to define new pitch hyperparameters for better audio quality.
  * These parameters work for speaker 6097 from the Hi-Fi TTS dataset.
  * For speaker 92, we suggest `model.pitch_mean=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512`.
  * fmin and fmax are hyperparameters to librosa's pyin function. We recommend tweaking these per speaker.
  * After fmin and fmax are defined, pitch mean and std can be easily extracted (see the sketch below).
* `model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam`
  * For fine-tuning, we lower the learning rate.
  * We use a fixed learning rate of 2e-4.
  * We switch from the lamb optimizer to the adam optimizer.
* `trainer.devices=1 trainer.strategy=null`
  * For this notebook, we default to 1 GPU, which means that we do not need DDP.
  * If you have the compute resources, feel free to scale this up to the number of free GPUs you have available.
  * Please remove the `trainer.strategy=null` section if you intend on multi-gpu training.
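To make that last point concrete, here is a minimal, non-authoritative sketch of how per-speaker pitch statistics could be extracted with librosa once fmin/fmax are chosen. The file list, the fmin/fmax values, and the use of the native sampling rate below are illustrative assumptions, not values prescribed by this tutorial.

```python
import librosa
import numpy as np

wav_paths = ["audio/sample_0.wav", "audio/sample_1.wav"]  # placeholder: the speaker's training wavs

f0_all = []
for wav_path in wav_paths:
    y, sr = librosa.load(wav_path, sr=None)  # keep the file's native sampling rate
    # pyin marks unvoiced frames as NaN; fmin/fmax are the per-speaker hyperparameters
    f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=80, fmax=512, sr=sr)
    f0_all.append(f0[~np.isnan(f0)])  # keep voiced frames only

f0_all = np.concatenate(f0_all)
print("pitch_mean:", f0_all.mean(), "pitch_std:", f0_all.std())
```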
###Code
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import FastPitchModel
vocoder = HifiGanModel.from_pretrained("tts_hifigan")
vocoder = vocoder.eval().cuda()
def infer(spec_gen_model, vocoder_model, str_input, speaker=None):
"""
Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.
Args:
spec_gen_model: Spectrogram generator model (FastPitch in our case)
vocoder_model: Vocoder model (HiFiGAN in our case)
str_input: Text input for the synthesis
speaker: Speaker ID
Returns:
spectrogram and waveform of the synthesized audio.
"""
with torch.no_grad():
parsed = spec_gen_model.parse(str_input)
if speaker is not None:
speaker = torch.tensor([speaker]).long().to(device=spec_gen_model.device)
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker=speaker)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
def get_best_ckpt_from_last_run(
base_dir,
new_speaker_id,
duration_mins,
mixing_enabled,
original_speaker_id,
model_name="FastPitch"
):
mixing = "no_mixing" if not mixing_enabled else "mixing"
d = f"{original_speaker_id}_to_{new_speaker_id}_{mixing}_{duration_mins}_mins"
exp_dirs = list([i for i in (Path(base_dir) / d / model_name).iterdir() if i.is_dir()])
last_exp_dir = sorted(exp_dirs)[-1]
last_checkpoint_dir = last_exp_dir / "checkpoints"
last_ckpt = list(last_checkpoint_dir.glob('*-last.ckpt'))
if len(last_ckpt) == 0:
raise ValueError(f"There is no last checkpoint in {last_checkpoint_dir}.")
return str(last_ckpt[0])
###Output
_____no_output_____
###Markdown
Specify the speaker id, duration mins and mixing variable to find the relevant checkpoint and compare the synthesized audio with validation samples of the new speaker.
###Code
new_speaker_id = 6097
duration_mins = 5
mixing = False
original_speaker_id = "ljspeech"
last_ckpt = get_best_ckpt_from_last_run("./", new_speaker_id, duration_mins, mixing, original_speaker_id)
print(last_ckpt)
spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)
spec_model.eval().cuda()
speaker_id = None
if mixing:
speaker_id = 1
num_val = 2
val_records = []
with open(f"{new_speaker_id}_manifest_dev_ns_all_local.json", "r") as f:
for i, line in enumerate(f):
val_records.append(json.loads(line))
if len(val_records) >= num_val:
break
for val_record in val_records:
print("Real validation audio")
ipd.display(ipd.Audio(val_record['audio_filepath'], rate=22050))
print(f"SYNTHESIZED FOR -- Speaker: {new_speaker_id} | Dataset size: {duration_mins} mins | Mixing:{mixing} | Text: {val_record['text']}")
spec, audio = infer(spec_model, vocoder, val_record['text'], speaker=speaker_id)
ipd.display(ipd.Audio(audio, rate=22050))
%matplotlib inline
imshow(spec, origin="lower", aspect="auto")
plt.show()
###Output
_____no_output_____ |
Kaggle-Competitions/PAKDD/PAKDD_EDA.ipynb | ###Markdown
Two kinds of historical information are given: __sale log__ and __repair log__. The time period of the __sale log__ is from _January/2005_ to _February/2008_, while the time period of the __repair log__ is from _February/2005_ to _December/2009_. Details of these two files are described in the File description section. Participants should exploit the sale and repair logs to predict the __monthly repair amount__ for each __module-component__ from _January/2010 to July/2011_. In other words, the model should output a series (nineteen elements, one element for one month) of predicted __real values__ (amount of repair) for each module-component.
###Code
# load files
repair_train = pd.read_csv(os.path.join(DATA_DIR, 'RepairTrain.csv'), parse_dates=[2, 3])
sale_train = pd.read_csv(os.path.join(DATA_DIR, 'SaleTrain.csv'), parse_dates=[2])
output_mapping = pd.read_csv(os.path.join(DATA_DIR, 'Output_TargetID_Mapping.csv'))
sample_sub = pd.read_csv(os.path.join(DATA_DIR, 'SampleSubmission.csv'))
repair_train.head()
sale_train.head()
output_mapping.head()
###Output
_____no_output_____
###Markdown
**How many of the module and component category pairs are in the training set as well?**
###Code
def count_module_components_in_train(df, output_mapping):
num_mod_comp = 0
checked = {}
output_mapping_without_duplicates = output_mapping[['module_category', 'component_category']].drop_duplicates()
for mod, comp in zip(output_mapping_without_duplicates['module_category'], output_mapping_without_duplicates['component_category']):
mask = (df.module_category == mod) & (df.component_category == comp)
if (mod,comp) not in checked and df.loc[mask].shape[0] > 0:
num_mod_comp += 1
checked[(mod, comp)] = True
return num_mod_comp
print('Number of module and component in the repair_train ', count_module_components_in_train(repair_train, output_mapping))
print('Number of module and component in the sale_train ', count_module_components_in_train(sale_train, output_mapping))
###Output
Number of module and component in the repair_train 224
Number of module and component in the sale_train 224
###Markdown
**So all of the module and component pairs are present in the sales and repair datasets.**
###Code
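# Build a (module_category, component_category) x month matrix of total monthly repair counts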
repair_per_month = repair_train.pivot_table(values='number_repair', index=['module_category', 'component_category'],\
columns=['year/month(repair)'], fill_value=0, aggfunc='sum')
def decrease_in_last_6_months(pair):
module, component = pair
num_decrease = 0
    last_6_month = repair_per_month.loc[(module, component)][-6:]
for i in range(5):
if last_6_month.iloc[i] > last_6_month.iloc[i+1]:
num_decrease += 1
return num_decrease
repair_per_month['num_decrease'] = list(map(decrease_in_last_6_months, repair_per_month.index.values))
def log_value(module, component, key):
key = pd.to_datetime(key)
    return np.log(1 + repair_per_month.loc[(module, component)][key])
years = ['2009/05', '2009/06', '2009/07', '2009/08', '2009/09', '2009/10', '2009/11', '2009/12']
repair_per_month['log_1_month'] = [log_value(mod, comp, years[0]) for mod, comp in repair_per_month.index.values]
repair_per_month['log_2_month'] = [log_value(mod, comp, years[1]) for mod, comp in repair_per_month.index.values]
repair_per_month['log_3_month'] = [log_value(mod, comp, years[2]) for mod, comp in repair_per_month.index.values]
repair_per_month['log_4_month'] = [log_value(mod, comp, years[3]) for mod, comp in repair_per_month.index.values]
repair_per_month['log_5_month'] = [log_value(mod, comp, years[4]) for mod, comp in repair_per_month.index.values]
repair_per_month['log_6_month'] = [log_value(mod, comp, years[5]) for mod, comp in repair_per_month.index.values]
repair_per_month['log_7_month'] = [log_value(mod, comp, years[6]) for mod, comp in repair_per_month.index.values]
repair_per_month['log_1_month'].head(1)
def linear_coefficient(row):
y = np.hstack((row['log_1_month'], row['log_2_month'], row['log_3_month'],
row['log_4_month'], row['log_5_month'], row['log_6_month'],
row['log_7_month']
)
)
x = np.arange(0, 7)
    z = np.polyfit(x, y, 1)  # linear fit: z[0] is the slope, z[1] the intercept
    slope = z[0]
    if slope >= 0:
        return np.log(0.91)
    else:
        return slope
repair_per_month['linear_estimation'] = repair_per_month[['log_1_month', 'log_2_month', 'log_3_month', \
'log_4_month', 'log_5_month','log_6_month', 'log_7_month'\
]].apply(linear_coefficient, axis=1)
repair_per_month['decay_coefficient'] = np.exp(repair_per_month.linear_estimation)
repair_per_month['decay_coefficient_processed'] = repair_per_month.decay_coefficient + \
(1 - repair_per_month.decay_coefficient) / 2
repair_per_month['decay_coefficient_processed'].head(2)
###Output
_____no_output_____
###Markdown
Extrapolate to the future 19 months by using the decay parameter per row, initializing the first element based on the number of decreases. If the number of decreases is greater than 3, initialize with the number of repairs in the last month (2009/12) multiplied by the decay rate; otherwise, take the average of the last 3 months' repair values, multiply it by the decay rate, and use that as the initial value.
###Code
repair_per_month.loc[('M1', 'P02')].num_decrease
def create_predictions(index):
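    # Extrapolate 19 monthly predictions per (module, component) pair: seed the first month
    # from the last observed month (or the average of the last three months) times the decay
    # coefficient, then multiply by the decay coefficient for each subsequent month.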
prediction_dict = defaultdict(list)
for i in range(len(index)):
mod, comp = index[i]
        row = repair_per_month.loc[index[i]]
decay_coefficient = row['decay_coefficient_processed']
if row.num_decrease > 3:
prediction_dict[(mod, comp)].append(row[pd.to_datetime('2009/12')] * decay_coefficient)
else:
average_ = (row[pd.to_datetime('2009/10')] + row[pd.to_datetime('2009/11')] \
+ row[pd.to_datetime('2009/12')]) / 3.
prediction_dict[(mod, comp)].append(average_ * decay_coefficient)
for j in range(1, 19):
prediction_dict[(mod, comp)].append(prediction_dict[(mod, comp)][j-1] * decay_coefficient)
return prediction_dict
prediction_dict = create_predictions(repair_per_month.index.values)
###Output
_____no_output_____
###Markdown
Submissions
###Code
output_mapping['predictions'] = np.ones(len(output_mapping))
def prepare_submission(modules, components, output_mapping):
for mod, comp in zip(modules, components):
mask = (output_mapping.module_category == mod) & (output_mapping.component_category == comp)
output_mapping.loc[mask, 'predictions'] = prediction_dict[(mod, comp)]
return output_mapping
# unique (module, component) pairs we need predictions for
module_component_unique_pairs = output_mapping[['module_category', 'component_category']].drop_duplicates()
output_mapping = prepare_submission(module_component_unique_pairs['module_category'],
                                    module_component_unique_pairs['component_category'],
                                    output_mapping)
sample_sub['target'] = output_mapping.predictions
sample_sub.to_csv('./submissions/ryan_locar_solution.csv', index=False)
###Output
_____no_output_____ |
Resources/Financial-Analysis/Financial-Analysis/Time-Series-Analysis/1-Statsmodels.ipynb | ###Markdown
___ ___*Copyright Pierian Data 2017**For more information, visit us at www.pieriandata.com* Introduction to StatsmodelsStatsmodels is a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests, and statistical data exploration. An extensive list of result statistics are available for each estimator. The results are tested against existing statistical packages to ensure that they are correct. The package is released under the open source Modified BSD (3-clause) license. The online documentation is hosted at statsmodels.org.The reason we will cover it for use in this course, is that you may find it very useful later on when discussing time series data (typical of quantitative financial analysis).Let's walk through a very simple example of using statsmodels!
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# You can safely ignore the warning:
# Please use the pandas.tseries module instead. from pandas.core import datetools
import statsmodels.api as sm
df = sm.datasets.macrodata.load_pandas().data
print(sm.datasets.macrodata.NOTE)
df.head()
index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))
df.index = index
df.head()
df['realgdp'].plot()
plt.ylabel("REAL GDP")
###Output
_____no_output_____
###Markdown
Using Statsmodels to get the trend The Hodrick-Prescott filter separates a time series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$: $y_t = \tau_t + \zeta_t$. The components are determined by minimizing the following quadratic loss function: $\min_{\\{ \tau_{t}\\} }\sum_{t}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$
###Code
# Tuple unpacking
gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(df.realgdp)
gdp_cycle
type(gdp_cycle)
df["trend"] = gdp_trend
df[['trend','realgdp']].plot()
df[['trend','realgdp']]["2000-03-31":].plot(figsize=(12,8))
###Output
_____no_output_____ |
examples/kissgp_additive_regression_cuda.ipynb | ###Markdown
This example shows how to use an `AdditiveGridInducingVariationalGP` module. This module is designed for when the function you're modeling has an additive decomposition over dimensions (if it doesn't, you should use `GridInterpolationKernel`). The use of inducing points allows for scaling up the training data by making computational complexity linear instead of cubic. In this example, we're modeling $y = 2\pi(\sin(x_0) + \cos(x_1))$. Since the function here decomposes additively over dimensions 1 and 2, we can use the AdditiveGridInducingVariationalGP.
###Code
# Imports
import math
import torch
import gpytorch
import numpy
from matplotlib import pyplot as plt
from torch import nn, optim
from torch.autograd import Variable
from gpytorch.kernels import RBFKernel, AdditiveGridInterpolationKernel
from gpytorch.means import ConstantMean
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.random_variables import GaussianRandomVariable
# Inline plotting
%matplotlib inline
# We store the data as a 10k 1D vector
# It actually represents [0,1]x[0,1] in cartesian coordinates
n = 100
train_x = torch.zeros(pow(n, 2), 2)
for i in range(n):
for j in range(n):
# Each coordinate varies from 0 to 1 in n=100 steps
train_x[i * n + j][0] = float(i) / (n-1)
train_x[i * n + j][1] = float(j) / (n-1)
# Cuda variable the x_data
train_x = Variable(train_x).cuda()
# function is y=sin(x0) + 2*pi*cos(x1)
train_y = Variable((torch.sin(train_x.data[:, 0]) + torch.cos(train_x.data[:, 1])) * (2 * math.pi)).cuda()
# Use the exact GP model for regression and interpolate between grid points
class GPRegressionModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
# Constant mean and RBF kernel
self.mean_module = ConstantMean()
self.base_covar_module = RBFKernel()
# Put the AdditiveGridInterpolationKernel over the RBF kernel
# There are two dimensions (n_components=2)
self.covar_module = AdditiveGridInterpolationKernel(self.base_covar_module,
grid_size=400,
grid_bounds=[(0, 1)],
n_components=2)
# Register the lengthscale of the RBF kernel as a parameter to be optimized
self.register_parameter('log_outputscale', nn.Parameter(torch.Tensor([0])))
def forward(self,x):
mean_x = self.mean_module(x)
# Put the input through the AdditiveGridInterpolationKernel and scale
# the covariance matrix
covar_x = self.covar_module(x)
covar_x = covar_x.mul(self.log_outputscale.exp())
return GaussianRandomVariable(mean_x, covar_x)
# initialize the likelihood and model
likelihood = GaussianLikelihood().cuda()
model = GPRegressionModel(train_x.data, train_y.data, likelihood).cuda()
# Optimize the model
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam([
{'params': model.parameters()}, # Includes GaussianLikelihood parameters
], lr=0.2)
num_iter = 20
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
# See dkl_mnist for toeplitz explanation
with gpytorch.settings.use_toeplitz(False):
for i in range(num_iter):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, num_iter, loss.data[0]))
optimizer.step()
# Set into eval mode
model.eval()
likelihood.eval()
# Create 100 test data points
# Over the square [0,1]x[0,1]
n = 10
test_x = Variable(torch.zeros(int(pow(n, 2)), 2)).cuda()
for i in range(n):
for j in range(n):
test_x.data[i * n + j][0] = float(i) / (n-1)
test_x.data[i * n + j][1] = float(j) / (n-1)
# Put the test data through the model then likelihood
observed_pred = likelihood(model(test_x))
# the mean of the Gaussians are our predicted labels
pred_labels = observed_pred.mean().view(n, n).data.cpu().numpy()
# Calculate the true test values
test_y_actual = ((torch.sin(test_x.data[:, 0]) + torch.cos(test_x.data[:, 1])) * (2 * math.pi))
test_y_actual = test_y_actual.cpu().numpy().reshape(n, n)
# Compute absolute error
delta_y = numpy.absolute(pred_labels - test_y_actual)
# Define a plotting function
def ax_plot(f, ax, y_labels, title):
im = ax.imshow(y_labels)
ax.set_title(title)
f.colorbar(im)
# Make a plot of the predicted values
f, observed_ax = plt.subplots(1, 1, figsize=(4, 3))
ax_plot(f, observed_ax, pred_labels, 'Predicted Values (Likelihood)')
# Make a plot of the actual values
f, observed_ax2 = plt.subplots(1, 1, figsize=(4, 3))
ax_plot(f, observed_ax2, test_y_actual, 'Actual Values (Likelihood)')
# Make a plot of the errors
f, observed_ax3 = plt.subplots(1, 1, figsize=(4, 3))
ax_plot(f, observed_ax3, delta_y, 'Absolute Error Surface')
###Output
_____no_output_____
###Markdown
This example shows how to use an `AdditiveGridInducingVariationalGP` module. This module is designed for when the function you're modeling has an additive decomposition over dimensions (if it doesn't, you should use `GridInterpolationKernel`). The use of inducing points allows for scaling up the training data by making computational complexity linear instead of cubic. In this example, we're modeling $y = 2\pi(\sin(x_0) + \cos(x_1))$. Since the function here decomposes additively over dimensions 1 and 2, we can use the AdditiveGridInducingVariationalGP.
###Code
# Imports
import math
import torch
import gpytorch
import numpy
from matplotlib import pyplot as plt
from torch import nn, optim
from torch.autograd import Variable
from gpytorch.kernels import RBFKernel, AdditiveGridInterpolationKernel
from gpytorch.means import ConstantMean
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.random_variables import GaussianRandomVariable
# Inline plotting
%matplotlib inline
# We store the data as a 10k 1D vector
# It actually represents [0,1]x[0,1] in cartesian coordinates
n = 100
train_x = torch.zeros(pow(n, 2), 2)
for i in range(n):
for j in range(n):
# Each coordinate varies from 0 to 1 in n=100 steps
train_x[i * n + j][0] = float(i) / (n-1)
train_x[i * n + j][1] = float(j) / (n-1)
# Cuda variable the x_data
train_x = Variable(train_x).cuda()
# function is y=sin(x0) + 2*pi*cos(x1)
train_y = Variable((torch.sin(train_x.data[:, 0]) + torch.cos(train_x.data[:, 1])) * (2 * math.pi)).cuda()
# Use the exact GP model for regression and interpolate between grid points
class GPRegressionModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
# Constant mean and RBF kernel
self.mean_module = ConstantMean(constant_bounds=[-1e-5,1e-5])
self.base_covar_module = RBFKernel(log_lengthscale_bounds=(-5, 6))
# Put the AdditiveGridInterpolationKernel over the RBF kernel
# There are two dimensions (n_components=2)
self.covar_module = AdditiveGridInterpolationKernel(self.base_covar_module,
grid_size=400,
grid_bounds=[(0, 1)],
n_components=2)
# Register the lengthscale of the RBF kernel as a parameter to be optimized
self.register_parameter('log_outputscale', nn.Parameter(torch.Tensor([0])),
bounds=(-5,6))
def forward(self,x):
mean_x = self.mean_module(x)
# Put the input through the AdditiveGridInterpolationKernel and scale
# the covariance matrix
covar_x = self.covar_module(x)
covar_x = covar_x.mul(self.log_outputscale.exp())
return GaussianRandomVariable(mean_x, covar_x)
# initialize the likelihood and model
likelihood = GaussianLikelihood().cuda()
model = GPRegressionModel(train_x.data, train_y.data, likelihood).cuda()
# Optimize the model
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam([
{'params': model.parameters()}, # Includes GaussianLikelihood parameters
], lr=0.2)
num_iter = 20
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
# See dkl_mnist for toeplitz explanation
with gpytorch.settings.use_toeplitz(False):
for i in range(num_iter):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, num_iter, loss.data[0]))
optimizer.step()
# Set into eval mode
model.eval()
likelihood.eval()
# Create 100 test data points
# Over the square [0,1]x[0,1]
n = 10
test_x = Variable(torch.zeros(int(pow(n, 2)), 2)).cuda()
for i in range(n):
for j in range(n):
test_x.data[i * n + j][0] = float(i) / (n-1)
test_x.data[i * n + j][1] = float(j) / (n-1)
# Put the test data through the model then likelihood
observed_pred = likelihood(model(test_x))
# the mean of the Gaussians are our predicted labels
pred_labels = observed_pred.mean().view(n, n).data.cpu().numpy()
# Calculate the true test values
test_y_actual = ((torch.sin(test_x.data[:, 0]) + torch.cos(test_x.data[:, 1])) * (2 * math.pi))
test_y_actual = test_y_actual.cpu().numpy().reshape(n, n)
# Compute absolute error
delta_y = numpy.absolute(pred_labels - test_y_actual)
# Define a plotting function
def ax_plot(f, ax, y_labels, title):
im = ax.imshow(y_labels)
ax.set_title(title)
f.colorbar(im)
# Make a plot of the predicted values
f, observed_ax = plt.subplots(1, 1, figsize=(4, 3))
ax_plot(f, observed_ax, pred_labels, 'Predicted Values (Likelihood)')
# Make a plot of the actual values
f, observed_ax2 = plt.subplots(1, 1, figsize=(4, 3))
ax_plot(f, observed_ax2, test_y_actual, 'Actual Values (Likelihood)')
# Make a plot of the errors
f, observed_ax3 = plt.subplots(1, 1, figsize=(4, 3))
ax_plot(f, observed_ax3, delta_y, 'Absolute Error Surface')
###Output
_____no_output_____ |
ps2_programming/PS2_Programming.ipynb | ###Markdown
CS 542 Machine Learning, Summer 2020, PS2 Programming Programming (a) Linear Regression We are given data used in a study of the homicide rate (HOM) in Detroit, over the years 1961-1973. The following data were collected by J.C. Fisher, and used in his paper "Homicide in Detroit: The Role of Firearms," Criminology, vol. 14, pp. 387-400, 1976. Each row is for a year, and each column contains the values of a variable. It turns out that three of the variables together are good predictors of the homicide rate: `FTP`, `WE`, and one more variable. Use methods described in Chapter 3 of the textbook to devise a mathematical formulation to determine the third variable. Implement your formulation and then conduct experiments to determine the third variable. In your report, be sure to provide the step-by-step mathematical formulation (citing Chapter 3 as needed) that corresponds to the implementation you turn in. Also give plots and a rigorous argument to justify the scheme you use and your conclusions. **Note**: the file `detroit.npy` containing the data is given in the resources section of our course Piazza. To load the data into Python, use the `X=numpy.load('detroit.npy')` command. Least-squares linear regression in Python can be done with the help of `numpy.linalg.lstsq()`. **Your answer:** Type your step-by-step mathematical formulation (citing chapter 3 as needed)
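(Not an answer, just an illustrative, non-authoritative sketch of the `numpy.linalg.lstsq` call itself; the column indices below are placeholders, not the actual layout of `detroit.npy`.)

```python
import numpy as np

X = np.load('detroit.npy')
hom_col = 0                    # placeholder: index of the HOM column
candidate_cols = [1, 2, 3]     # placeholder: indices of a candidate predictor set

A = X[:, candidate_cols]
A = np.hstack([A, np.ones((A.shape[0], 1))])   # add an intercept column
y = X[:, hom_col]

coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(coef, residuals)   # the residual sum of squares can be used to compare candidate sets
```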
###Code
# Your code. Feel free to create more cells if needed.
###Output
_____no_output_____
###Markdown
(b) k-Nearest Neighbors For this problem, you will be implementing the k-Nearest Neighbor (k-NN) classifier and evaluating on the `Credit Approval` (CA) dataset. It describes credit worthiness data (in this case, binary classification). (see http://archive.ics.uci.edu/ml/datasets/Credit+Approval) We have split the available data into a training set `crx.data.training` and a testing set `crx.data.testing`. These are both comma-separated text files (CSVs). The first step to working with the CA dataset is to process the data. In looking at the data description `crx.names`, note that there are some missing values, there exist both numerical and categorical features, and that it is a relatively balanced dataset (meaning a roughly equal number of positive and negative examples - not that you should particularly care in this case, but something you should look for in general). A great Python library for handling data like this is Pandas (https://pandas.pydata.org/pandas-docs/stable/). You can read in the data with `X = pandas.read_csv('crx.data.training', header=None, na_values='?')`. The last option tells Pandas to treat the character `?` as a missing value. Pandas holds data in a "dataframe". We'll deal with individual rows and columns, which Pandas calls "series". Pandas contains many convenient tools, but the most basic you'll use is `X.iloc[i,j]`, accessing the element in the i-th row and j-th column. You can use this for both getting and setting values. You can also slice like normal Python, grabbing the i-th row with `[i,:]`. You can view the first 20 rows with `X.head(20)`. The last column, number 15, contains the labels. You'll see some elements are missing, marked with `NaN`. While there are more sophisticated (and better) methods for imputing missing values, for this assignment, we will just use mean/mode imputation. This means that for feature 0, you should replace all of the question marks with a `b` as this is the mode, the most common value (regardless of whether you condition on the label or not). For real-valued features, just replace missing values with the label-conditioned mean (e.g. $\mu(x_1|+)$ for instances labeled as positive). The second aspect one should consider is normalizing features. Nominal features can be left in their given form, where we define the distance to be a constant value (e.g. 1) if they are different values, and 0 if they are the same. However, it is often wise to normalize real-valued features. For the purpose of this assignment, we will use $z$-scaling, where$$z_{i}^{(m)} \leftarrow \frac{x_{i}^{(m)}-\mu_{i}}{\sigma_{i}}$$such that $z_i^{(m)}$ indicates feature $i$ for instance $m$ (similarly $x_i^{(m)}$ is the raw input), $\mu_i$ is the average value of feature $i$ over all instances, and $\sigma_i$ is the corresponding standard deviation over all instances (an illustrative sketch of this scaling appears right after this cell). In this notebook, include the following functions:

i. A function `impute_missing_data()` that accepts two Pandas dataframes, one training and one testing, and returns two dataframes with missing values filled in. In your report include your exact methods for each type of feature. Note that you are free to impute the values using statistics over the entire dataset (training and testing combined) or just training, but please state your method.

ii. A function `normalize_features()` that accepts a training and testing dataframe and returns two dataframes with real-valued features normalized.

iii. A function `distance()` that accepts two rows of a dataframe and returns a float, the L2 distance: $D_{L2}(\mathbf{a},\mathbf{b}) = \sqrt{\sum_i (a_i - b_i)^2}$. Note that we define $D_{L2}$ to have a component-wise value of 1 for categorical attribute-values that disagree and 0 if they do agree (as previously implied). Remember not to use the label column in your distance calculation!

iv. A function `predict()` that accepts three arguments: a training dataframe, a testing dataframe, and an integer $k$ - the number of nearest neighbors to use in predicting. This function should return a column of $+/-$ labels, one for every row in the testing data.

v. A function `accuracy()` that accepts two columns, one true labels and one predicted by your algorithm, and returns a float between 0 and 1, the fraction of labels you guessed correctly. In your report, include accuracy results on `crx.data.testing` for at least three different values of `k`.

vi. Try your algorithm on some other data! We've included the "lenses" dataset (https://archive.ics.uci.edu/ml/datasets/Lenses). It has no missing values and only categorical attributes, so no need for imputation or normalization. Include accuracy results from `lenses.testing` in your report as well.

The code you submit must be your own. If you find/use information about specific algorithms from the Web, etc., be sure to cite the source(s) clearly in your source code. You are not allowed to submit code downloaded from the internet (obviously). **Your answer:**
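Not a submission - just a minimal, non-authoritative sketch of the $z$-scaling step above, assuming `train` and `test` are Pandas dataframes and `real_cols` is a placeholder list of the real-valued column indices (here the statistics are computed on the training split):

```python
import pandas as pd

def normalize_features_sketch(train: pd.DataFrame, test: pd.DataFrame, real_cols):
    # real_cols: placeholder list of real-valued column indices
    train, test = train.copy(), test.copy()
    for col in real_cols:
        mu = train[col].mean()      # mean of the feature over training instances
        sigma = train[col].std()    # standard deviation over training instances
        train[col] = (train[col] - mu) / sigma
        test[col] = (test[col] - mu) / sigma
    return train, test
```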
###Code
### Your code for question (b), create more cells as needed.
###Output
_____no_output_____ |
hddm/examples/TEST_RLHDDM_NN.ipynb | ###Markdown
The columns in the datafile represent: __subj_idx__ (subject id), __response__ (1=best option, 0=worst option), __cond__ (identifies condition, but not used in model), __rt__ (in seconds), __trial__ (the trial-iteration for a subject within each condition), __split_by__ (identifying condition, used for running the model), __feedback__ (whether the response given was rewarded or not), __q_init__ (the initial q-value used for the model, explained above).
###Code
#run the model by calling hddm.HDDMrl (instead of hddm.HDDM for normal HDDM)
m = hddm.HDDMrl(data)
#set sample and burn-in
m.sample(100,burn=10,dbname='traces.db',db='pickle')
#print stats to get an overview of posterior distribution of estimated parameters
m.print_stats()
###Output
{}
()
No model attribute --> setting up standard HDDM
Includes supplied: ()
printing self.nn
False
Set model to ddm
[--- 10% ] 10 of 100 complete in 58.9 secHalting at iteration 9 of 100
Could not generate output statistics for t
Could not generate output statistics for a_subj.4
Could not generate output statistics for a_subj.6
Could not generate output statistics for a_subj.3
Could not generate output statistics for a_subj.17
Could not generate output statistics for t_subj.22
Could not generate output statistics for a_subj.12
Could not generate output statistics for a_subj.71
Could not generate output statistics for alpha_subj.5
Could not generate output statistics for t_subj.5
Could not generate output statistics for t_subj.23
Could not generate output statistics for t_subj.24
Could not generate output statistics for a_subj.8
Could not generate output statistics for a_subj.18
Could not generate output statistics for t_subj.26
Could not generate output statistics for t_subj.33
Could not generate output statistics for a_subj.19
Could not generate output statistics for a_subj.20
Could not generate output statistics for a
Could not generate output statistics for t_subj.34
Could not generate output statistics for a_subj.22
Could not generate output statistics for t_subj.35
Could not generate output statistics for a_subj.23
Could not generate output statistics for t_std
Could not generate output statistics for t_subj.36
Could not generate output statistics for a_subj.24
Could not generate output statistics for v_subj.4
Could not generate output statistics for v_subj.3
Could not generate output statistics for t_subj.39
Could not generate output statistics for a_subj.26
Could not generate output statistics for v_subj.17
Could not generate output statistics for v_subj.6
Could not generate output statistics for v_subj.12
Could not generate output statistics for t_subj.42
Could not generate output statistics for a_subj.33
Could not generate output statistics for v_subj.42
Could not generate output statistics for v_subj.35
Could not generate output statistics for t_subj.50
Could not generate output statistics for a_subj.34
Could not generate output statistics for t_subj.52
Could not generate output statistics for a_subj.35
Could not generate output statistics for v_subj.18
Could not generate output statistics for t_subj.56
Could not generate output statistics for a_subj.36
Could not generate output statistics for v_subj.19
Could not generate output statistics for alpha
Could not generate output statistics for a_std
Could not generate output statistics for a_subj.63
Could not generate output statistics for v_subj.20
Could not generate output statistics for t_subj.59
Could not generate output statistics for a_subj.39
Could not generate output statistics for v_subj.22
Could not generate output statistics for v_subj.36
Could not generate output statistics for t_subj.63
Could not generate output statistics for a_subj.42
Could not generate output statistics for v_subj.23
Could not generate output statistics for v_subj.39
Could not generate output statistics for t_subj.71
Could not generate output statistics for a_subj.50
Could not generate output statistics for a_subj.80
Could not generate output statistics for v
Could not generate output statistics for v_subj.24
Could not generate output statistics for t_subj.75
Could not generate output statistics for a_subj.52
Could not generate output statistics for alpha_std
Could not generate output statistics for a_subj.75
Could not generate output statistics for v_subj.26
Could not generate output statistics for v_std
Could not generate output statistics for t_subj.80
Could not generate output statistics for a_subj.56
Could not generate output statistics for v_subj.5
Could not generate output statistics for v_subj.34
Could not generate output statistics for alpha_subj.42
Could not generate output statistics for v_subj.33
Could not generate output statistics for a_subj.59
Could not generate output statistics for v_subj.50
Could not generate output statistics for alpha_subj.71
Could not generate output statistics for alpha_subj.3
Could not generate output statistics for v_subj.52
Could not generate output statistics for alpha_subj.75
Could not generate output statistics for v_subj.56
Could not generate output statistics for alpha_subj.17
Could not generate output statistics for alpha_subj.80
Could not generate output statistics for v_subj.59
Could not generate output statistics for alpha_subj.4
Could not generate output statistics for alpha_subj.6
Could not generate output statistics for alpha_subj.12
Could not generate output statistics for v_subj.63
Could not generate output statistics for alpha_subj.52
Could not generate output statistics for a_subj.5
Could not generate output statistics for alpha_subj.63
Could not generate output statistics for v_subj.71
Could not generate output statistics for v_subj.75
Could not generate output statistics for v_subj.80
Could not generate output statistics for alpha_subj.18
Could not generate output statistics for t_subj.4
Could not generate output statistics for alpha_subj.19
Could not generate output statistics for t_subj.20
Could not generate output statistics for alpha_subj.20
Could not generate output statistics for alpha_subj.22
Could not generate output statistics for alpha_subj.23
Could not generate output statistics for alpha_subj.24
Could not generate output statistics for t_subj.17
Could not generate output statistics for alpha_subj.26
Could not generate output statistics for alpha_subj.50
Could not generate output statistics for t_subj.12
Could not generate output statistics for v_subj.8
Could not generate output statistics for alpha_subj.8
Could not generate output statistics for alpha_subj.33
Could not generate output statistics for alpha_subj.34
Could not generate output statistics for t_subj.18
Could not generate output statistics for alpha_subj.35
Could not generate output statistics for alpha_subj.36
Could not generate output statistics for t_subj.19
Could not generate output statistics for alpha_subj.56
Could not generate output statistics for alpha_subj.39
Could not generate output statistics for t_subj.3
Could not generate output statistics for alpha_subj.59
Could not generate output statistics for t_subj.8
Could not generate output statistics for t_subj.6
###Markdown
__Interpreting output from print_stats:__ The model estimates group mean and standard deviation parameters and subject parameters for the following latent variables:

* a = decision threshold
* v = scaling parameter
* t = non-decision time
* alpha = learning rate; note that it is not bounded between 0 and 1. To transform it, take the inverse logit: np.exp(alpha)/(1+np.exp(alpha)) (see the sketch below)

The columns represent the mean, standard deviation and quantiles of the approximated posterior distribution of each parameter. HDDMrl vs. HDDM __There are a few things to note that are different from the normal HDDM model.__ First of all, the estimated learning rate does not necessarily fall between 0 and 1. This is because it is estimated as a normal distribution for purposes of sampling hierarchically and then transformed by an inverse logit function to lie between 0 and 1. Second, the v-parameter in the output is the scaling factor that is multiplied by the difference in q-values, so it is not the actual drift rate (or rather, it is the equivalent drift rate when the difference in Q values is exactly 1). 6. Checking results
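As a minimal sketch (assuming the fitted model object `m` from above, and that `get_traces()` exposes the group-level learning rate under the column name `alpha`), the transformation back to the 0-1 scale could look like this:

```python
import numpy as np

traces = m.get_traces()                    # posterior samples for all estimated parameters
alpha_raw = traces['alpha']                # unbounded group-level learning-rate samples
alpha01 = np.exp(alpha_raw) / (1 + np.exp(alpha_raw))   # inverse logit -> 0-1 scale
print(alpha01.mean(), alpha01.quantile([0.025, 0.975]))
```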
###Code
# plot the posteriors of parameters
m.plot_posteriors()
###Output
_____no_output_____
###Markdown
__Fig__. The mixing of the posterior distribution and autocorrelation looks ok. Convergence of chains The Gelman-Rubin statistic is a test of whether the chains in the model converge. The Gelman-Rubin statistic measures the degree of variation between and within chains. Values close to 1 indicate convergence and that there is small variation between chains, i.e. that they end up as the same distribution across chains. A common heuristic is to assume convergence if all values are below 1.1. To run this you need to run multiple models, combine them and compute the Gelman-Rubin statistic:
###Code
# estimate convergence
from kabuki.analyze import gelman_rubin
models = []
for i in range(3):
m = hddm.HDDMrl(data=data)
m.sample(1500, burn=500,dbname='traces.db',db='pickle')
models.append(m)
gelman_rubin(models)
np.max(list(gelman_rubin(models).values()))
###Output
_____no_output_____
###Markdown
The model seems to have converged, i.e. the Gelman-Rubin statistic is below 1.1 for all parameters. It is important to always run this test, especially for more complex models ([as with separate learning rates for positive and negative prediction errors](9.-Separate-learning-rates-for-positive-and-negative-prediction-errors)). So now we can combine these three models to get a better approximation of the posterior distribution.
###Code
# Combine the models we ran to test for convergence.
m = kabuki.utils.concat_models(models)
###Output
_____no_output_____
###Markdown
Joint posterior distribution Another test of the model is to look at collinearity. If the estimation of parameters is very codependent (the correlation is strong), it can indicate that their variance trades off, in particular if there is a negative correlation. The following plot shows there is generally low correlation across all combinations of parameters. A negative correlation between the learning rate and the scaling factor is common for RLDDM, similar to what's usually observed between learning rate and inverse temperature for RL models that use softmax as the choice rule (e.g. [Daw, 2011](https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199600434.001.0001/acprof-9780199600434-chapter-001)), but it does not seem to be the case for this dataset.
###Code
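# Pull the group-level pymc nodes, collect their posterior traces into a dataframe,
# and visualize the pairwise joint posteriors below.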
alpha, t, a, v = m.nodes_db.node[['alpha', 't', 'a','v']]
samples = {'alpha':alpha.trace(),'t':t.trace(),'a':a.trace(),'v':v.trace()}
samp = pd.DataFrame(data=samples)
def corrfunc(x, y, **kws):
r, _ = stats.pearsonr(x, y)
ax = plt.gca()
ax.annotate("r = {:.2f}".format(r),
xy=(.1, .9), xycoords=ax.transAxes)
g = sns.PairGrid(samp, palette=["red"])
g.map_upper(plt.scatter, s=10)
g.map_diag(sns.distplot, kde=False)
g.map_lower(sns.kdeplot, cmap="Blues_d")
g.map_lower(corrfunc)
g.savefig('matrix_plot.png')
###Output
_____no_output_____
###Markdown
7. Posterior predictive checks An important test of the model is its ability to recreate the observed data. This can be tested with posterior predictive checks, which involve simulating data using estimated parameters and comparing observed and simulated results. extract traces The first step then is to extract the traces from the estimated model. The function get_traces() gives you the samples (rows) from the approximated posterior distribution for all of the estimated group and subject parameters (columns).
###Code
traces = m.get_traces()
traces.head()
###Output
_____no_output_____
###Markdown
simulating data __Now that we have the traces, the next step is to simulate data using the estimated parameters. The RLDDM includes a function to simulate data. Here's an example of how to use the simulation function for RLDDM. This example explains how to generate data with binary outcomes. See [here](11.-Probabilistic-binary-outcomes-vs.-normally-distributed-outcomes) for an example of simulating data with normally distributed outcomes. Inputs to the function:__

* __a__ = decision threshold
* __t__ = non-decision time
* __alpha__ = learning rate
* __pos_alpha__ = defaults to 0. If given, it defines the learning rate for positive prediction errors; alpha then becomes the learning rate for negative prediction errors.
* __scaler__ = the scaling factor that is multiplied with the difference in q-values to calculate the trial-by-trial drift rate
* __p_upper__ = the probability of reward for the option represented by the upper boundary. The current version thus only works for outcomes that are either 1 or 0.
* __p_lower__ = the probability of reward for the option represented by the lower boundary.
* __subjs__ = number of subjects to simulate data for.
* __split_by__ = defines the condition, which makes it easier to append data from different conditions.
* __size__ = number of trials per subject.
###Code
hddm.generate.gen_rand_rlddm_data(a=1,t=0.3,alpha=0.2,scaler=2,p_upper=0.8,p_lower=0.2,subjs=1,split_by=0,size=10)
###Output
_____no_output_____
###Markdown
__How to interpret columns in the resulting dataframe__

* __q_up__ = expected reward for the option represented by the upper boundary
* __q_low__ = expected reward for the option represented by the lower boundary
* __sim_drift__ = the drift rate for each trial, calculated as (q_up-q_low)*scaler
* __response__ = simulated choice
* __rt__ = simulated response time
* __feedback__ = observed feedback for the chosen option
* __subj_idx__ = subject id (starts at 0)
* __split_by__ = condition as integer
* __trial__ = current trial (starts at 1)

Simulate data with estimated parameter values and compare to observed data Now that we know how to extract traces and simulate data, we can combine these to create a dataset similar to our observed data. This process is currently not automated, but the following is example code using the dataset we analyzed above.
###Code
from tqdm import tqdm #progress tracker
#create empty dataframe to store simulated data
sim_data = pd.DataFrame()
#create a column samp to be used to identify the simulated data sets
data['samp'] = 0
#load traces
traces = m.get_traces()
#decide how many times to repeat simulation process. repeating this multiple times is generally recommended,
#as it better captures the uncertainty in the posterior distribution, but will also take some time
for i in tqdm(range(1,51)):
#randomly select a row in the traces to use for extracting parameter values
sample = np.random.randint(0,traces.shape[0]-1)
#loop through all subjects in observed data
for s in data.subj_idx.unique():
#get number of trials for each condition.
size0 = len(data[(data['subj_idx']==s) & (data['split_by']==0)].trial.unique())
size1 = len(data[(data['subj_idx']==s) & (data['split_by']==1)].trial.unique())
size2 = len(data[(data['subj_idx']==s) & (data['split_by']==2)].trial.unique())
#set parameter values for simulation
a = traces.loc[sample,'a_subj.'+str(s)]
t = traces.loc[sample,'t_subj.'+str(s)]
scaler = traces.loc[sample,'v_subj.'+str(s)]
alphaInv = traces.loc[sample,'alpha_subj.'+str(s)]
#take inverse logit of estimated alpha
alpha = np.exp(alphaInv)/(1+np.exp(alphaInv))
#simulate data for each condition changing only values of size, p_upper, p_lower and split_by between conditions.
sim_data0 = hddm.generate.gen_rand_rlddm_data(a=a,t=t,scaler=scaler,alpha=alpha,size=size0,p_upper=0.8,p_lower=0.2,split_by=0)
sim_data1 = hddm.generate.gen_rand_rlddm_data(a=a,t=t,scaler=scaler,alpha=alpha,size=size1,p_upper=0.7,p_lower=0.3,split_by=1)
sim_data2 = hddm.generate.gen_rand_rlddm_data(a=a,t=t,scaler=scaler,alpha=alpha,size=size2,p_upper=0.6,p_lower=0.4,split_by=2)
#append the conditions
sim_data0 = sim_data0.append([sim_data1,sim_data2],ignore_index=True)
#assign subj_idx
sim_data0['subj_idx'] = s
#identify that these are simulated data
sim_data0['type'] = 'simulated'
#identify the simulated data
sim_data0['samp'] = i
#append data from each subject
sim_data = sim_data.append(sim_data0,ignore_index=True)
#combine observed and simulated data
ppc_data = data[['subj_idx','response','split_by','rt','trial','feedback','samp']].copy()
ppc_data['type'] = 'observed'
ppc_sdata = sim_data[['subj_idx','response','split_by','rt','trial','feedback','type','samp']].copy()
ppc_data = ppc_data.append(ppc_sdata)
ppc_data.to_csv('ppc_data_tutorial.csv')
###Output
_____no_output_____
###Markdown
__Plotting__ Now that we have a dataframe with both observed and simulated data we can plot to see whether the simulated data are able to capture observed choice and reaction times. To capture the uncertainty in the simulated data we want to identify how much choice and reaction time differ across the simulated data sets. A good measure of this is to calculate the highest posterior density/highest density interval for summary scores of the generated data. Below we calculate the highest posterior density with alpha set to 0.1, which means that we are describing the range of the 90% most likely values.
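As a minimal illustration of the interval function used below (assuming the same pymc 2.x import used elsewhere in this tutorial), `pymc.utils.hpd` takes an array of samples and an alpha and returns the lower and upper bound of the highest density interval; for a standard normal sample the 90% interval should come out close to (-1.64, 1.64):
###Code
#toy example of the hpd/hdi helper that is used in the plotting code below
toy_samples = np.random.normal(0, 1, size=10000)
print(pymc.utils.hpd(toy_samples, alpha=0.1))
###Output
_____no_output_____
###Markdown
With that in mind, we first restrict the observed and simulated data to the first 40 trials per subject and condition: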
###Code
#for practical reasons we only look at the first 40 trials for each subject in a given condition
plot_ppc_data = ppc_data[ppc_data.trial<41].copy()
###Output
_____no_output_____
###Markdown
Choice
###Code
#bin trials to for smoother estimate of response proportion across learning
plot_ppc_data['bin_trial'] = pd.cut(plot_ppc_data.trial,11,labels=np.linspace(0, 10,11)).astype('int64')
#calculate means for each sample
sums = plot_ppc_data.groupby(['bin_trial','split_by','samp','type']).mean().reset_index()
#calculate the overall mean response across samples
ppc_sim = sums.groupby(['bin_trial','split_by','type']).mean().reset_index()
#initiate columns that will have the upper and lower bound of the hpd
ppc_sim['upper_hpd'] = 0
ppc_sim['lower_hpd'] = 0
for i in range(0,ppc_sim.shape[0]):
#calculate the hpd/hdi of the predicted mean responses across bin_trials
hdi = pymc.utils.hpd(sums.response[(sums['bin_trial']==ppc_sim.bin_trial[i]) & (sums['split_by']==ppc_sim.split_by[i]) & (sums['type']==ppc_sim.type[i])],alpha=0.1)
ppc_sim.loc[i,'upper_hpd'] = hdi[1]
ppc_sim.loc[i,'lower_hpd'] = hdi[0]
#calculate error term as the distance from upper bound to mean
ppc_sim['up_err'] = ppc_sim['upper_hpd']-ppc_sim['response']
ppc_sim['low_err'] = ppc_sim['response']-ppc_sim['lower_hpd']
ppc_sim['model'] = 'RLDDM_single_learning'
ppc_sim.to_csv('ppc_choicedata_tutorial.csv')
#plotting evolution of choice proportion for best option across learning for observed and simulated data.
fig, axs = plt.subplots(figsize=(15, 5),nrows=1, ncols=3, sharex=True,sharey=True)
for i in range(0,3):
ax = axs[i]
d = ppc_sim[(ppc_sim.split_by==i) & (ppc_sim.type=='simulated')]
ax.errorbar(d.bin_trial, d.response, yerr=[d.low_err,d.up_err], label='simulated',color='orange')
d = ppc_sim[(ppc_sim.split_by==i) & (ppc_sim.type=='observed')]
ax.plot(d.bin_trial, d.response,linewidth=3,label='observed')
ax.set_title('split_by = %i' %i,fontsize=20)
ax.set_ylabel('mean response')
ax.set_xlabel('trial')
plt.legend()
fig.savefig('PPCchoice.pdf')
###Output
_____no_output_____
###Markdown
__Fig.__ The plots display the rate of choosing the best option (response = 1) across learning and condition. The model generates data (orange) that closely follows the observed behavior (blue), with the exception of overpredicting performance early in the most difficult condition (split_by=2). Uncertainty in the generated data is captured by the 90% highest density interval of the means across simulated datasets. RT
###Code
#set reaction time to be negative for lower bound responses (response=0)
plot_ppc_data['reaction time'] = np.where(plot_ppc_data['response']==1,plot_ppc_data.rt,0-plot_ppc_data.rt)
#plotting evolution of choice proportion for best option across learning for observed and simulated data. We use bins of trials because plotting individual trials would be very noisy.
g = sns.FacetGrid(plot_ppc_data,col='split_by',hue='type')
g.map(sns.kdeplot, 'reaction time',bw=0.05).set_ylabels("Density")
g.add_legend()
g.savefig('PPCrt_dist.pdf')
###Output
_____no_output_____
###Markdown
__Fig.__ Density plots of observed and predicted reaction time across conditions. RTs for lower boundary choices (i.e. worst option choices) are set to be negative (0-RT) to be able to separate upper and lower bound responses. 8. Parameter recoveryTo validate the RLDDM we ran a parameter recovery study to test to which degree the model can recover the parameter values used to simulate data. To do this we generated 81 synthetic datasets with 50 subjects performing 70 trials each. The 81 datasets were simulated using all combinations of three plausible parameter values for decision threshold, non-decision time, learning rate and the scaling parameter onto drift rate. Estimated values split by simulated vales We can plot simulated together with the estimated values to test the models ability to recover parameters, and to see if there are any values that are more difficult to recover than others.
###Code
param_recovery = hddm.load_csv('recovery_sim_est_rlddm.csv')
g = sns.catplot(x='a',y='e_a',data=param_recovery,palette='Set1')
g.set_axis_labels("Simulated threshold", "Estimated threshold")
plt.title("Decision threshold")
g.savefig('Threshold_recovery.pdf')
g = sns.catplot(x='alpha',y='e_alphaT',data=param_recovery,palette='Set1')
g.set_axis_labels("Simulated alpha", "Estimated alpha")
plt.title("Learning rate")
g.savefig('Alpha_recovery.pdf')
g = sns.catplot(x='scaler',y='e_v',data=param_recovery,palette='Set1')
g.set_axis_labels("Simulated scaling", "Estimated scaling")
plt.title("Scaling drift rate")
g.savefig('Scaler_recovery.pdf')
g = sns.catplot(x='t',y='e_t',data=param_recovery,palette='Set1')
g.set_axis_labels("Simulated NDT", "Estimated NDT")
plt.title("Non-decision time")
g.savefig('NDT_recovery.pdf')
###Output
_____no_output_____
###Markdown
__Fig.__ The correlation between simulated and estimated parameter values are high, which means recovery is good. There is somewhat worse recovery for the learning rate and scaling parameter, which makes sense given that they to a degree can explain the same variance (see below). 9. Separate learning rates for positive and negative prediction errorsSeveral studies have reported differences in updating of expected rewards following positive and negative prediction errors (e.g. to capture differences between D1 and D2 receptor function). To model asymmetric updating rates for positive and negative prediction errors you can set dual=True in the model. This will produce two estimated learning rates; alpha and pos_alpha, of which alpha then becomes the estimated learning rate for negative prediction errors.
###Code
#set dual=True to model separate learning rates for positive and negative prediction errors.
m_dual = hddm.HDDMrl(data,dual=True)
#set sample and burn-in
m_dual.sample(1500,burn=500,dbname='traces.db',db='pickle')
#print stats to get an overview of posterior distribution of estimated parameters
m_dual.print_stats()
m_dual.plot_posteriors()
###Output
_____no_output_____
###Markdown
__Fig.__ There's more autocorrelation in this model compared to the one with a single learning rate. First, let's test whether it converges.
###Code
# estimate convergence
models = []
for i in range(3):
m = hddm.HDDMrl(data=data,dual=True)
m.sample(1500, burn=500,dbname='traces.db',db='pickle')
models.append(m)
#get max gelman-statistic value. shouldn't be higher than 1.1
np.max(list(gelman_rubin(models).values()))
gelman_rubin(models)
###Output
_____no_output_____
###Markdown
Convergence looks good, i.e. no parameters with gelman-rubin statistic > 1.1.
###Code
        #Extrapolate the right lane from the bottom of the image (y = img.shape[0]) up to y = 330
        final_right_line = [[int((img.shape[0] -right_line.intercept)//right_line.slope),img.shape[0]],
                           [int((330-right_line.intercept)//right_line.slope),330]]
###Output
_____no_output_____
###Markdown
And then we can have a look at the joint posterior distribution:
###Code
alpha, t, a, v, pos_alpha = m_dual.nodes_db.node[['alpha', 't', 'a','v','pos_alpha']]
samples = {'alpha':alpha.trace(),'pos_alpha':pos_alpha.trace(),'t':t.trace(),'a':a.trace(),'v':v.trace()}
samp = pd.DataFrame(data=samples)
def corrfunc(x, y, **kws):
r, _ = stats.pearsonr(x, y)
ax = plt.gca()
ax.annotate("r = {:.2f}".format(r),
xy=(.1, .9), xycoords=ax.transAxes)
g = sns.PairGrid(samp, palette=["red"])
g.map_upper(plt.scatter, s=10)
g.map_diag(sns.distplot, kde=False)
g.map_lower(sns.kdeplot, cmap="Blues_d")
g.map_lower(corrfunc)
g.savefig('matrix_plot.png')
###Output
_____no_output_____
###Markdown
__Fig.__ The correlation between parameters is generally low. __Posterior predictive check__ The DIC for this dual learning rate model is better than for the single learning rate model. We can therefore check whether we can detect this improvement in the ability to recreate choice and RT patterns.
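The DIC values themselves can be read directly off the fitted models: HDDM exposes the DIC computed by PyMC through the .dic attribute. As a sketch (here m stands for the originally fitted single learning rate model; since the variable m was reused in the convergence checks above you may need to refit it or keep a separate reference):
###Code
#lower DIC indicates a better fit after penalizing for model complexity
print("DIC, single learning rate model: %f" % m.dic)
print("DIC, dual learning rate model: %f" % m_dual.dic)
###Output
_____no_output_____
###Markdown
To compare the models on their ability to reproduce the data we repeat the simulation procedure from above, now using the traces from m_dual: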
###Code
#create empty dataframe to store simulated data
sim_data = pd.DataFrame()
#create a column samp to be used to identify the simulated data sets
data['samp'] = 0
#get traces, note here we extract traces from m_dual
traces = m_dual.get_traces()
#decide how many times to repeat simulation process. repeating this multiple times is generally recommended as it better captures the uncertainty in the posterior distribution, but will also take some time
for i in tqdm(range(1,51)):
#randomly select a row in the traces to use for extracting parameter values
sample = np.random.randint(0,traces.shape[0]-1)
#loop through all subjects in observed data
for s in data.subj_idx.unique():
#get number of trials for each condition.
size0 = len(data[(data['subj_idx']==s) & (data['split_by']==0)].trial.unique())
size1 = len(data[(data['subj_idx']==s) & (data['split_by']==1)].trial.unique())
size2 = len(data[(data['subj_idx']==s) & (data['split_by']==2)].trial.unique())
#set parameter values for simulation
a = traces.loc[sample,'a_subj.'+str(s)]
t = traces.loc[sample,'t_subj.'+str(s)]
scaler = traces.loc[sample,'v_subj.'+str(s)]
#when generating data with two learning rates pos_alpha represents learning rate for positive prediction errors and alpha for negative prediction errors
alphaInv = traces.loc[sample,'alpha_subj.'+str(s)]
pos_alphaInv = traces.loc[sample,'pos_alpha_subj.'+str(s)]
#take inverse logit of estimated alpha and pos_alpha
alpha = np.exp(alphaInv)/(1+np.exp(alphaInv))
pos_alpha = np.exp(pos_alphaInv)/(1+np.exp(pos_alphaInv))
#simulate data for each condition changing only values of size, p_upper, p_lower and split_by between conditions.
sim_data0 = hddm.generate.gen_rand_rlddm_data(a=a,t=t,scaler=scaler,alpha=alpha,pos_alpha=pos_alpha,size=size0,p_upper=0.8,p_lower=0.2,split_by=0)
sim_data1 = hddm.generate.gen_rand_rlddm_data(a=a,t=t,scaler=scaler,alpha=alpha,pos_alpha=pos_alpha,size=size1,p_upper=0.7,p_lower=0.3,split_by=1)
sim_data2 = hddm.generate.gen_rand_rlddm_data(a=a,t=t,scaler=scaler,alpha=alpha,pos_alpha=pos_alpha,size=size2,p_upper=0.6,p_lower=0.4,split_by=2)
#append the conditions
sim_data0 = sim_data0.append([sim_data1,sim_data2],ignore_index=True)
#assign subj_idx
sim_data0['subj_idx'] = s
#identify that these are simulated data
sim_data0['type'] = 'simulated'
#identify the simulated data
sim_data0['samp'] = i
#append data from each subject
sim_data = sim_data.append(sim_data0,ignore_index=True)
#combine observed and simulated data
ppc_dual_data = data[['subj_idx','response','split_by','rt','trial','feedback','samp']].copy()
ppc_dual_data['type'] = 'observed'
ppc_dual_sdata = sim_data[['subj_idx','response','split_by','rt','trial','feedback','type','samp']].copy()
ppc_dual_data = ppc_dual_data.append(ppc_dual_sdata)
#for practical reasons we only look at the first 40 trials for each subject in a given condition
plot_ppc_dual_data = ppc_dual_data[ppc_dual_data.trial<41].copy()
###Output
_____no_output_____
###Markdown
Choice
###Code
#bin trials to for smoother estimate of response proportion across learning
plot_ppc_dual_data['bin_trial'] = pd.cut(plot_ppc_dual_data.trial,11,labels=np.linspace(0, 10,11)).astype('int64')
#calculate means for each sample
sums = plot_ppc_dual_data.groupby(['bin_trial','split_by','samp','type']).mean().reset_index()
#calculate the overall mean response across samples
ppc_dual_sim = sums.groupby(['bin_trial','split_by','type']).mean().reset_index()
#initiate columns that will have the upper and lower bound of the hpd
ppc_dual_sim['upper_hpd'] = 0
ppc_dual_sim['lower_hpd'] = 0
for i in range(0,ppc_dual_sim.shape[0]):
#calculate the hpd/hdi of the predicted mean responses across bin_trials
hdi = pymc.utils.hpd(sums.response[(sums['bin_trial']==ppc_dual_sim.bin_trial[i]) & (sums['split_by']==ppc_dual_sim.split_by[i]) & (sums['type']==ppc_dual_sim.type[i])],alpha=0.1)
ppc_dual_sim.loc[i,'upper_hpd'] = hdi[1]
ppc_dual_sim.loc[i,'lower_hpd'] = hdi[0]
#calculate error term as the distance from upper bound to mean
ppc_dual_sim['up_err'] = ppc_dual_sim['upper_hpd']-ppc_dual_sim['response']
ppc_dual_sim['low_err'] = ppc_dual_sim['response']-ppc_dual_sim['lower_hpd']
ppc_dual_sim['model'] = 'RLDDM_dual_learning'
#plotting evolution of choice proportion for best option across learning for observed and simulated data.
fig, axs = plt.subplots(figsize=(15, 5),nrows=1, ncols=3, sharex=True,sharey=True)
for i in range(0,3):
ax = axs[i]
d = ppc_dual_sim[(ppc_dual_sim.split_by==i) & (ppc_dual_sim.type=='simulated')]
ax.errorbar(d.bin_trial, d.response, yerr=[d.low_err,d.up_err], label='simulated',color='orange')
    d = ppc_dual_sim[(ppc_dual_sim.split_by==i) & (ppc_dual_sim.type=='observed')]
ax.plot(d.bin_trial, d.response,linewidth=3,label='observed')
ax.set_title('split_by = %i' %i,fontsize=20)
ax.set_ylabel('mean response')
ax.set_xlabel('trial')
plt.legend()
###Output
_____no_output_____
###Markdown
__Fig.__ The plots display the rate of choosing the best option (response = 1) across learning and condition. The model generates data (orange) that closely follows the observed behavior (blue), with the exception of performance early in the most difficult condition (split_by=2). __PPC for single vs. dual learning rate__ To get a better sense of differences in ability to predict data between the single and dual learning rate model we can plot them together:
###Code
#plotting evolution of choice proportion for best option across learning for observed and simulated data. Compared for model with single and dual learning rate.
fig, axs = plt.subplots(figsize=(15, 5),nrows=1, ncols=3, sharex=True,sharey=True)
for i in range(0,3):
ax = axs[i]
    #take a copy so that shifting bin_trial below does not touch ppc_sim itself
    d_single = ppc_sim[(ppc_sim.split_by==i) & (ppc_sim.type=='simulated')].copy()
    #slightly move bin_trial to avoid overlap in errorbars
d_single['bin_trial'] += 0.2
ax.errorbar(d_single.bin_trial, d_single.response, yerr=[d_single.low_err,d_single.up_err],label='simulated_single',color='orange')
d_dual = ppc_dual_sim[(ppc_dual_sim.split_by==i) & (ppc_dual_sim.type=='simulated')]
ax.errorbar(d_dual.bin_trial, d_dual.response, yerr=[d_dual.low_err,d_dual.up_err],label='simulated_dual',color='green')
    d = ppc_sim[(ppc_sim.split_by==i) & (ppc_sim.type=='observed')]
ax.plot(d.bin_trial, d.response,linewidth=3,label='observed')
ax.set_title('split_by = %i' %i,fontsize=20)
ax.set_ylabel('mean response')
ax.set_xlabel('trial')
plt.xlim(-0.5,10.5)
plt.legend()
###Output
_____no_output_____
###Markdown
__Fig.__ The predictions from the model with two learning rates are not very different from the model with single learning rate, and a similar overprediction of performance early on for the most difficult condition (split_by =2). RT
###Code
plot_ppc_data['type_compare'] = np.where(plot_ppc_data['type']=='observed',plot_ppc_data['type'],'simulated_single_learning')
plot_ppc_dual_data['type_compare'] = np.where(plot_ppc_dual_data['type']=='observed',plot_ppc_dual_data['type'],'simulated_dual_learning')
dual_vs_single_pcc = plot_ppc_data.append(plot_ppc_dual_data)
dual_vs_single_pcc['reaction time'] = np.where(dual_vs_single_pcc['response']==1,dual_vs_single_pcc.rt,0-dual_vs_single_pcc.rt)
#plotting evolution of choice proportion for best option across learning for observed and simulated data. We use bins of trials because plotting individual trials would be very noisy.
g = sns.FacetGrid(dual_vs_single_pcc,col='split_by',hue='type_compare',height=5)
g.map(sns.kdeplot, 'reaction time',bw=0.01).set_ylabels("Density")
g.add_legend()
###Output
_____no_output_____
###Markdown
__Fig.__ Again there's not a big difference between the two models. Both models slightly overpredict performance for the medium (split_by =1) and hard (split_by = 2) conditions, as identified by lower densities for the negative RTs (worst option choices) in the simulated compared to observed data. __Transform alpha and pos_alpha__ To interpret the parameter estimates for alpha and pos_alpha you have to transform them with the inverse logit, where the learning rate for negative prediction errors is alpha and the learning rate for positive prediction errors is pos_alpha. For this dataset the learning rate is estimated to be higher for positive than negative prediction errors.
###Code
#plot alpha for positive and negative learning rate
traces = m_dual.get_traces()
neg_alpha = np.exp(traces['alpha'])/(1+np.exp(traces['alpha']))
pos_alpha = np.exp(traces['pos_alpha'])/(1+np.exp(traces['pos_alpha']))
sns.kdeplot(neg_alpha, color='r', label="neg_alpha: " + str(np.round(np.mean(neg_alpha),3)))
sns.kdeplot(pos_alpha, color='b', label="pos_alpha: " + str(np.round(np.mean(pos_alpha),3)))
###Output
_____no_output_____
###Markdown
__Fig.__ The positive learning rate is estimated to be stronger than the negative learning rate. Sticky choice, i.e. tendencies to repeat choices, could be driving some of this difference. The current model does not allow testing for this, however, but it could be tested in the future if we implement a regression version of RLDDM (similar to HDDMRegressor). __Simulate data with learning rates for positive and negative prediction errors__ Here's how you would simulate data with learning rates for positive and negative prediction errors of 0.4 and 0.2, respectively:
###Code
hddm.generate.gen_rand_rlddm_data(a=1,t=0.3,alpha=0.2,pos_alpha=0.4,scaler=2,p_upper=0.8,p_lower=0.2,size=10)
###Output
_____no_output_____
###Markdown
__10. depends_on vs. split_by__ HDDMrl can be used to estimate separate parameters just as in the standard HDDM. But in RL you typically estimate the same learning rates and inverse temperature across conditions. That's one reason why you have to specify condition in the split_by-column instead of depends_on. (The other is that if you use depends_on the expected rewards will not get updated properly). But depends_on is still useful, for example if you want to estimate the effect of group on parameters. As an example we can simulate a dataset with two groups that have different decision thresholds:
###Code
data1 = hddm.generate.gen_rand_rlddm_data(a=1,t=0.3,alpha=0.4,scaler=2,p_upper=0.8,p_lower=0.2,subjs=50,size=50)
data1['group'] = 'group1'
data2 = hddm.generate.gen_rand_rlddm_data(a=2,t=0.3,alpha=0.4,scaler=2,p_upper=0.8,p_lower=0.2,subjs=50,size=50)
data2['group'] = 'group2'
group_data = data1.append(data2)
group_data['q_init'] = 0.5
m = hddm.HDDMrl(group_data,depends_on={'v':'group','a':'group','t':'group','alpha':'group'})
m.sample(1500,burn=500,dbname='traces.db',db='pickle')
#the plot shows that the model was able to recover the different decision threshold across groups.
a_group1, a_group2 = m.nodes_db.node[['a(group1)', 'a(group2)']]
hddm.analyze.plot_posterior_nodes([a_group1, a_group2])
plt.xlabel('decision threshold')
plt.ylabel('Posterior probability')
plt.xlim(0.7,2.3)
plt.title('Posterior of decision threshold group means')
###Output
_____no_output_____
###Markdown
__11. Probabilistic binary outcomes vs. normally distributed outcomes__ The examples so far have all been using a task structure where the outcomes are binary and probabilistic. But the model can also be applied to other types of outcomes. Here we show how you can generate and model data with normally distributed outcomes. As you will see you don't have to do any modifications to the model estimation process, but you have to change the input for generating data. Also note that the scaling parameter (v) will scale inversely with the magnitude of the observed outcomes, because the combined drift rate needs to stay plausible.
###Code
# This is how we generated data so far, defining the probability of reward (1) for actions/stimuli associated with upper and lower boundary.
# binary probabilistic outcomes
hddm.generate.gen_rand_rlddm_data(a=2,t=0.3,scaler=2,alpha=0.2,size=10,p_upper=0.2,p_lower=0.8)
# If instead the outcomes are drawn from a normal distribution you will have to set binary_outcome to False and instead of p_upper and p_upper define the mean (mu) and sd
# of the normal distribution for both alternatives. Note that we change the initial q-value to 0, and that we reduce the scaling factor.
# normally distributed outcomes
hddm.generate.gen_rand_rlddm_data(a=2,t=0.3,scaler=0.2,alpha=0.2,size=10,mu_upper=8,mu_lower=2,sd_upper=1,sd_lower=1,binary_outcome=False,q_init=0)
# We can generate a dataset where 30 subjects perform 50 trials each. Note that we set the scaler to be lower than for the binary outcomes as otherwise
# the resulting drift will be unrealistically high.
norm_data = hddm.generate.gen_rand_rlddm_data(a=2,t=0.3,scaler=0.2,alpha=0.2,size=50,subjs=30,mu_upper=8,mu_lower=2,sd_upper=2,sd_lower=2,binary_outcome=False,q_init=0)
#and then we can do estimation as usual
#but first we need to define inital q-value
norm_data['q_init'] = 0
m_norm = hddm.HDDMrl(norm_data)
m_norm.sample(1500,burn=500,dbname='traces.db',db='pickle')
m_norm.print_stats()
###Output
_____no_output_____
###Markdown
__12. HDDMrlRegressor__ As of version 0.7.6, HDDM includes a module for estimating the impact of a continuous regressor onto RLDDM parameters. The module, called HDDMrlRegressor, works the same way as the HDDMRegressor for the normal DDM. The method allows estimation of the association of e.g. neural measures with parameters. To illustrate the method we extend the function to generate rlddm_data by adding a normally distributed regressor and including a coefficient called 'neural'. Note that to run the HDDMrlRegressor you need to include alpha when specifying the model. For more information on how to set up regressor models look at the tutorial for HDDM.
###Code
#function to generate rlddm-data that adds a neural regressor to decision threshold
def gen_rand_reg_rlddm_data(a, t, scaler, alpha, neural, size=1, p_upper=1, p_lower=0, z=0.5, q_init=0.5, split_by=0, subjs=1):
all_data = []
n = size
# set sd for variables to generate subject-parameters from group distribution
sd_t = 0.02
sd_a = 0.1
sd_alpha = 0.1
sd_v = 0.25
#save parameter values as group-values
tg = t
ag = a
alphag = alpha
scalerg = scaler
for s in range(0, subjs):
t = np.maximum(0.05, np.random.normal(
loc=tg, scale=sd_t, size=1)) if subjs > 1 else tg
a = np.maximum(0.05, np.random.normal(
loc=ag, scale=sd_a, size=1)) if subjs > 1 else ag
alpha = np.minimum(np.minimum(np.maximum(0.001, np.random.normal(loc=alphag, scale=sd_a, size=1)), alphag+alphag),1) if subjs > 1 else alphag
scaler = np.random.normal(loc=scalerg, scale=sd_v, size=1) if subjs > 1 else scalerg
#create a normalized regressor that is combined with the neural coefficient to create trial-by-trial values for decision threshold
neural_reg = np.random.normal(0,1,size=n)
q_up = np.tile([q_init], n)
q_low = np.tile([q_init], n)
response = np.tile([0.5], n)
feedback = np.tile([0.5], n)
rt = np.tile([0], n)
rew_up = np.random.binomial(1, p_upper, n).astype(float)
rew_low = np.random.binomial(1, p_lower, n).astype(float)
sim_drift = np.tile([0], n)
subj_idx = np.tile([s], n)
d = {'q_up': q_up, 'q_low': q_low, 'sim_drift': sim_drift, 'rew_up': rew_up, 'rew_low': rew_low,
'response': response, 'rt': rt, 'feedback': feedback, 'subj_idx': subj_idx, 'split_by': split_by, 'trial': 1, 'neural_reg': neural_reg}
df = pd.DataFrame(data=d)
df = df[['q_up', 'q_low', 'sim_drift', 'rew_up', 'rew_low',
'response', 'rt', 'feedback', 'subj_idx', 'split_by', 'trial','neural_reg']]
#generate data trial-by-trial using the Intercept (a), regressor (neural_reg) and coefficient (neural) for decision threshold.
data, params = hddm.generate.gen_rand_data(
{'a': a + neural*df.loc[0, 'neural_reg'], 't': t, 'v': df.loc[0, 'sim_drift'], 'z': z}, subjs=1, size=1)
df.loc[0, 'response'] = data.response[0]
df.loc[0, 'rt'] = data.rt[0]
if (data.response[0] == 1.0):
df.loc[0, 'feedback'] = df.loc[0, 'rew_up']
else:
df.loc[0, 'feedback'] = df.loc[0, 'rew_low']
for i in range(1, n):
df.loc[i, 'trial'] = i + 1
df.loc[i, 'q_up'] = (df.loc[i - 1, 'q_up'] * (1 - df.loc[i - 1, 'response'])) + ((df.loc[i - 1, 'response'])
* (df.loc[i - 1, 'q_up'] + (alpha * (df.loc[i - 1, 'rew_up'] - df.loc[i - 1, 'q_up']))))
df.loc[i, 'q_low'] = (df.loc[i - 1, 'q_low'] * (df.loc[i - 1, 'response'])) + ((1 - df.loc[i - 1, 'response'])
* (df.loc[i - 1, 'q_low'] + (alpha * (df.loc[i - 1, 'rew_low'] - df.loc[i - 1, 'q_low']))))
df.loc[i, 'sim_drift'] = (df.loc[i, 'q_up'] - df.loc[i, 'q_low']) * (scaler)
data, params = hddm.generate.gen_rand_data(
{'a': a + neural*df.loc[i, 'neural_reg'], 't': t, 'v': df.loc[i, 'sim_drift'] , 'z': z}, subjs=1, size=1)
df.loc[i, 'response'] = data.response[0]
df.loc[i, 'rt'] = data.rt[0]
if (data.response[0] == 1.0):
df.loc[i, 'feedback'] = df.loc[i, 'rew_up']
else:
df.loc[i, 'feedback'] = df.loc[i, 'rew_low']
all_data.append(df)
all_data = pd.concat(all_data, axis=0)
all_data = all_data[['q_up', 'q_low', 'sim_drift', 'response',
'rt', 'feedback', 'subj_idx', 'split_by', 'trial','neural_reg']]
return all_data
#Create data with function defined above.
#This will create trial-by-trial values for decision threshold (a) by adding the coefficient neural (here set to 0.2)
#multiplied by a normalized regressor (neural_reg) to the 'Intercept' value of a (here set to 1)
data_neural = gen_rand_reg_rlddm_data(a=1,t=0.3,scaler=2,alpha=0.2,neural = 0.2,size=100,p_upper=0.7,p_lower=0.3,subjs=25)
data_neural['q_init'] = 0.5
data_neural.head()
#run a regressor model estimating the impact of 'neural' on decision threshold a. This should estimate the coefficient a_neural_reg to be 0.2
#to run the HDDMrlRegressor you need to include alpha
m_reg = hddm.HDDMrlRegressor(data_neural,'a ~ neural_reg',include='alpha')
m_reg.sample(1000,burn=250)
m_reg.print_stats()
###Output
_____no_output_____
###Markdown
__13. Regular RL without RT__ HDDMrl also includes a module to run an RL-model that uses softmax to transform q-values to the probability of choosing the option associated with the upper (response=1) or lower (response=0) boundary. To run this model you type hddm.Hrl instead of hddm.HDDMrl. The setup is the same as for HDDMrl, and for now, the model won't run if you don't include an rt-column. This will be fixed for a future version, but for now, if you don't have RTs you can just create an rt-column where you set all rts to e.g. 0.5. You can choose to estimate separate learning rates for positive and negative prediction errors by setting dual=True (see [here](9.-Separate-learning-rates-for-positive-and-negative-prediction-errors) for more information). The model will by default estimate posterior distributions for the alpha and v parameters. The probability of choosing the upper boundary is captured as: $p_{up} = \frac{e^{-2 z d_t}-1}{e^{-2 d_t}-1}$, where $d_t = (q_{up_t}-q_{low_t}) \cdot v$ and $z$ represents the starting point (which for now is fixed to 0.5). This calculation is equivalent to a soft-max transformation when z=0.5.
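To see the equivalence, note that with $z=0.5$ the denominator factorises, $e^{-2 d_t}-1=(e^{-d_t}-1)(e^{-d_t}+1)$, so $p_{up}=\frac{e^{-d_t}-1}{e^{-2 d_t}-1}=\frac{1}{1+e^{-d_t}}$, which is exactly the logistic (two-option soft-max) transform of the scaled q-value difference $d_t$.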
###Code
#run the model by calling hddm.Hrl (instead of hddm.HDDM for normal model and hddm.HDDMrl for rlddm-model)
m_rl = hddm.Hrl(data)
#set sample and burn-in
m_rl.sample(1500,burn=500,dbname='traces.db',db='pickle')
#print stats to get an overview of posterior distribution of estimated parameters
m_rl.print_stats()
###Output
_____no_output_____
###Markdown
Parameter estimates from the pure RL-model are a bit different compared to the RLDDM. This is to be expected as the probability of choice in the DDM is dependent both on the decision threshold and the scaled difference in q-values, whereas the RL model only uses the scaled difference in q-values.
###Code
m_rl.plot_posteriors()
###Output
_____no_output_____
###Markdown
__Fig.__ Mixing and autocorrelation look good.
###Code
# estimate convergence
models = []
for i in range(3):
m = hddm.Hrl(data=data)
m.sample(1500, burn=500,dbname='traces.db',db='pickle')
models.append(m)
#get max gelman-statistic value. shouldn't be higher than 1.1
np.max(list(gelman_rubin(models).values()))
###Output
_____no_output_____
###Markdown
Convergence looks good, i.e. no parameters with gelman-rubin statistic > 1.1.
###Code
# Create a new model that has all traces concatenated
# of individual models.
m_rl = kabuki.utils.concat_models(models)
alpha, v = m_rl.nodes_db.node[['alpha','v']]
samples = {'alpha':alpha.trace(),'v':v.trace()}
samp = pd.DataFrame(data=samples)
def corrfunc(x, y, **kws):
r, _ = stats.pearsonr(x, y)
ax = plt.gca()
ax.annotate("r = {:.2f}".format(r),
xy=(.1, .9), xycoords=ax.transAxes)
g = sns.PairGrid(samp, palette=["red"])
g.map_upper(plt.scatter, s=10)
g.map_diag(sns.distplot, kde=False)
g.map_lower(sns.kdeplot, cmap="Blues_d")
g.map_lower(corrfunc)
###Output
_____no_output_____
###Markdown
__Fig.__ The correlation in the posterior distribution for alpha and v/scaling is somewhat negative. __Posterior predictive check__ We can also do a posterior predictive check on the RL-model by generating new data with hddm.generate.gen_rand_rl_data.
###Code
#create empty dataframe to store simulated data
sim_data = pd.DataFrame()
#create a column samp to be used to identify the simulated data sets
data['samp'] = 0
#load traces
traces = m_rl.get_traces()
#decide how many times to repeat simulation process. repeating this multiple times is generally recommended as it better captures the uncertainty in the posterior distribution, but will also take some time
for i in tqdm(range(1,51)):
#randomly select a row in the traces to use for extracting parameter values
sample = np.random.randint(0,traces.shape[0]-1)
#loop through all subjects in observed data
for s in data.subj_idx.unique():
#get number of trials for each condition.
size0 = len(data[(data['subj_idx']==s) & (data['split_by']==0)].trial.unique())
size1 = len(data[(data['subj_idx']==s) & (data['split_by']==1)].trial.unique())
size2 = len(data[(data['subj_idx']==s) & (data['split_by']==2)].trial.unique())
#set parameter values for simulation
scaler = traces.loc[sample,'v_subj.'+str(s)]
alphaInv = traces.loc[sample,'alpha_subj.'+str(s)]
#take inverse logit of estimated alpha
alpha = np.exp(alphaInv)/(1+np.exp(alphaInv))
#simulate data for each condition changing only values of size, p_upper, p_lower and split_by between conditions.
sim_data0 = hddm.generate.gen_rand_rl_data(scaler=scaler,alpha=alpha,size=size0,p_upper=0.8,p_lower=0.2,split_by=0)
sim_data1 = hddm.generate.gen_rand_rl_data(scaler=scaler,alpha=alpha,size=size1,p_upper=0.7,p_lower=0.3,split_by=1)
sim_data2 = hddm.generate.gen_rand_rl_data(scaler=scaler,alpha=alpha,size=size2,p_upper=0.6,p_lower=0.4,split_by=2)
#append the conditions
sim_data0 = sim_data0.append([sim_data1,sim_data2],ignore_index=True)
#assign subj_idx
sim_data0['subj_idx'] = s
#identify that these are simulated data
sim_data0['type'] = 'simulated'
#identify the simulated data
sim_data0['samp'] = i
#append data from each subject
sim_data = sim_data.append(sim_data0,ignore_index=True)
#combine observed and simulated data
ppc_rl_data = data[['subj_idx','response','split_by','trial','feedback','samp']].copy()
ppc_rl_data['type'] = 'observed'
ppc_rl_sdata = sim_data[['subj_idx','response','split_by','trial','feedback','type','samp']].copy()
ppc_rl_data = ppc_rl_data.append(ppc_rl_sdata)
#for practical reasons we only look at the first 40 trials for each subject in a given condition
plot_ppc_rl_data = ppc_rl_data[ppc_rl_data.trial<41].copy()
#bin trials to for smoother estimate of response proportion across learning
plot_ppc_rl_data['bin_trial'] = pd.cut(plot_ppc_rl_data.trial,11,labels=np.linspace(0, 10,11)).astype('int64')
#calculate means for each sample
sums = plot_ppc_rl_data.groupby(['bin_trial','split_by','samp','type']).mean().reset_index()
#calculate the overall mean response across samples
ppc_rl_sim = sums.groupby(['bin_trial','split_by','type']).mean().reset_index()
#initiate columns that will have the upper and lower bound of the hpd
ppc_rl_sim['upper_hpd'] = 0
ppc_rl_sim['lower_hpd'] = 0
for i in range(0,ppc_rl_sim.shape[0]):
#calculate the hpd/hdi of the predicted mean responses across bin_trials
hdi = pymc.utils.hpd(sums.response[(sums['bin_trial']==ppc_rl_sim.bin_trial[i]) & (sums['split_by']==ppc_rl_sim.split_by[i]) & (sums['type']==ppc_rl_sim.type[i])],alpha=0.1)
ppc_rl_sim.loc[i,'upper_hpd'] = hdi[1]
ppc_rl_sim.loc[i,'lower_hpd'] = hdi[0]
#calculate error term as the distance from upper bound to mean
ppc_rl_sim['up_err'] = ppc_rl_sim['upper_hpd']-ppc_rl_sim['response']
ppc_rl_sim['low_err'] = ppc_rl_sim['response']-ppc_rl_sim['lower_hpd']
ppc_rl_sim['model'] = 'RL'
#plotting evolution of choice proportion for best option across learning for observed and simulated data. Compared for RL and RLDDM models, both with a single learning rate.
fig, axs = plt.subplots(figsize=(15, 5),nrows=1, ncols=3, sharex=True,sharey=True)
for i in range(0,3):
ax = axs[i]
    #take a copy so that shifting bin_trial below does not touch ppc_sim itself
    d_single = ppc_sim[(ppc_sim.split_by==i) & (ppc_sim.type=='simulated')].copy()
    #slightly move bin_trial to avoid overlap in errorbars
d_single['bin_trial'] += 0.2
ax.errorbar(d_single.bin_trial, d_single.response, yerr=[d_single.low_err,d_single.up_err], label='simulated_RLDDM',color='orange')
ax = axs[i]
d_rl = ppc_rl_sim[(ppc_rl_sim.split_by==i) & (ppc_rl_sim.type=='simulated')]
ax.errorbar(d_rl.bin_trial, d_rl.response, yerr=[d_rl.low_err,d_rl.up_err], label='simulated_RL',color='green')
ax = axs[i]
    d = ppc_sim[(ppc_sim.split_by==i) & (ppc_sim.type=='observed')]
ax.plot(d.bin_trial, d.response,linewidth=3,label='observed')
ax.set_title('split_by = %i' %i,fontsize=20)
ax.set_ylabel('mean response')
ax.set_xlabel('trial')
plt.xlim(-0.5,10.5)
plt.legend()
###Output
_____no_output_____
###Markdown
__Fig.__ The predicted choice for the RL-model is very similar to what was predicted in the RLDDM. That is not surprising given that they use the same calculation to get the choice likelihood. The difference between them is instead that the RLDDM could potentially detect the unique contribution of the scaling/drift parameter and the decision threshold onto choice. __Misprediction across learning__ Another way to visualize this is to look at how the predicted choice misses the observed across learning, i.e. predicted-observed. As for the other plots we see that the two methods are very similar.
###Code
#rl
error_prediction = plot_ppc_rl_data.groupby(['split_by','type','bin_trial'])['response'].mean().reset_index()
ep = error_prediction.pivot_table(index=['split_by','bin_trial'],columns='type',values='response').reset_index()
ep['diff'] = ep['simulated']-ep['observed']
ep['model'] = 'RL'
#rlddm
error_prediction = plot_ppc_data.groupby(['split_by','type','bin_trial'])['response'].mean().reset_index()
ep_rlddm = error_prediction.pivot_table(index=['split_by','bin_trial'],columns='type',values='response').reset_index()
ep_rlddm['diff'] = ep_rlddm['simulated']-ep_rlddm['observed']
ep_rlddm['model'] = 'RLDDM'
#combine
ep = ep.append(ep_rlddm)
#plot
g = sns.relplot(x='bin_trial',y='diff',col='split_by',hue='model',kind='line',ci=False,data=ep,palette="Set2_r")
g.map(plt.axhline, y=0, ls=":", c=".5")
###Output
_____no_output_____ |
examples/reference/elements/bokeh/Raster.ipynb | ###Markdown
Title Raster Element Dependencies Bokeh Backends Bokeh Matplotlib
###Code
import numpy as np
import holoviews as hv
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
A ``Raster`` is the base class for image-like elements (namely [``Image``](./Image.ipynb), [``RGB``](./RGB.ipynb) and [``HSV``](./HSV.ipynb)), but may be used directly to visualize 2D arrays using a color map:
###Code
xvals = np.linspace(0,4,202)
ys,xs = np.meshgrid(xvals, -xvals[::-1])
hv.Raster(np.sin(((ys)**3)*xs))
###Output
_____no_output_____
###Markdown
Title Raster Element Dependencies Bokeh Backends Bokeh Matplotlib Plotly
###Code
import numpy as np
import holoviews as hv
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
A ``Raster`` is the base class for image-like elements (namely [``Image``](./Image.ipynb), [``RGB``](./RGB.ipynb) and [``HSV``](./HSV.ipynb)), but may be used directly to visualize 2D arrays using a color map:
###Code
xvals = np.linspace(0,4,202)
ys,xs = np.meshgrid(xvals, -xvals[::-1])
hv.Raster(np.sin(((ys)**3)*xs))
###Output
_____no_output_____ |
sample_code.ipynb | ###Markdown
Importing modules
###Code
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import layers
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import json
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Visualization function
###Code
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string], '')
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
###Output
_____no_output_____
###Markdown
Defining the training data paths
###Code
DATA_IN_PATH = './data_in/'
DATA_OUT_PATH = './data_out/'
TRAIN_INPUT_DATA = 'train_input.npy'
TRAIN_LABEL_DATA = 'train_label.npy'
DATA_CONFIGS = 'data_configs.json'
###Output
_____no_output_____
###Markdown
Fixing the random seed
###Code
SEED_NUM = 1234
tf.random.set_seed(SEED_NUM)
###Output
_____no_output_____
###Markdown
Loading the files
###Code
train_input = np.load(open(DATA_IN_PATH + TRAIN_INPUT_DATA, 'rb'))
train_label = np.load(open(DATA_IN_PATH + TRAIN_LABEL_DATA, 'rb'))
prepro_configs = json.load(open(DATA_IN_PATH + DATA_CONFIGS, 'r'))
###Output
_____no_output_____
###Markdown
Defining the model hyperparameters
###Code
WORD_EMBEDDING_DIM = 100
HIDDEN_STATE_DIM =150
DENSE_FEATURE_DIM = 150
model_name = 'rnn_classifier_en'
BATCH_SIZE = 128
NUM_EPOCHS = 5
VALID_SPLIT = 0.1
MAX_LEN = train_input.shape[1]
kargs = {'vocab_size': prepro_configs['vocab_size'],
'embedding_dimension': 100,
'dropout_rate': 0.2,
'lstm_dimension': 150,
'dense_dimension': 150,
'output_dimension':1}
###Output
_____no_output_____
###Markdown
Declaring and compiling the model
###Code
class RNNClassifier(tf.keras.Model):
def __init__(self, **kargs):
super(RNNClassifier, self).__init__(name=model_name) # name=model name?
self.embedding = layers.Embedding(input_dim=kargs['vocab_size'],
output_dim=kargs['embedding_dimension'])
self.lstm_1_layer = tf.keras.layers.LSTM(kargs['lstm_dimension'], return_sequences=True)
self.lstm_2_layer = tf.keras.layers.LSTM(kargs['lstm_dimension'])
self.dropout = layers.Dropout(kargs['dropout_rate'])
self.fc1 = layers.Dense(units=kargs['dense_dimension'],
activation=tf.keras.activations.tanh)
self.fc2 = layers.Dense(units=kargs['output_dimension'],
activation=tf.keras.activations.sigmoid)
def call(self, x):
x = self.embedding(x)
x = self.dropout(x)
x = self.lstm_1_layer(x)
x = self.lstm_2_layer(x)
x = self.dropout(x)
x = self.fc1(x)
x = self.dropout(x)
x = self.fc2(x)
return x
model = RNNClassifier(**kargs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=['accuracy']) # track accuracy so that the val_accuracy-based callbacks and plots below work
###Output
_____no_output_____
###Markdown
Declaring callbacks
###Code
# add early stopping to prevent overfitting
earlystop_callback = EarlyStopping(monitor='val_accuracy', min_delta=0.0001, patience=1)
# min_delta: the threshold that triggers the termination (acc should at least improve 0.0001)
# patience: number of epochs with no improvement allowed (patience = 1: stop if there is no improvement for more than 1 epoch)
checkpoint_path = DATA_OUT_PATH + model_name + '/weights.h5'
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create the checkpoint directory if it does not already exist
if os.path.exists(checkpoint_dir):
print("{} -- Folder already exists \n".format(checkpoint_dir))
else:
os.makedirs(checkpoint_dir, exist_ok=True)
print("{} -- Folder create complete \n".format(checkpoint_dir))
cp_callback = ModelCheckpoint(
checkpoint_path, monitor='val_accuracy', verbose=1, save_best_only=True, save_weights_only=True)
###Output
_____no_output_____
###Markdown
Training the model
###Code
history = model.fit(train_input, train_label, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS,
validation_split=VALID_SPLIT, callbacks=[earlystop_callback, cp_callback])
###Output
_____no_output_____
###Markdown
Plotting the results
###Code
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
###Output
_____no_output_____
###Markdown
Loading the test data
###Code
DATA_OUT_PATH = './data_out/'
TEST_INPUT_DATA = 'test_input.npy'
TEST_ID_DATA = 'test_id.npy'
test_input = np.load(open(DATA_IN_PATH + TEST_INPUT_DATA, 'rb'))
test_input = pad_sequences(test_input, maxlen=test_input.shape[1])
###Output
_____no_output_____
###Markdown
Loading the best model
###Code
SAVE_FILE_NM = 'weights.h5' # must match the filename used by the ModelCheckpoint callback above
model.load_weights(os.path.join(DATA_OUT_PATH, model_name, SAVE_FILE_NM))
###Output
_____no_output_____
###Markdown
Predicting on the test data
###Code
predictions = model.predict(test_input, batch_size=BATCH_SIZE)
#model.predict already returns a numpy array; flatten it to get one score per review
predictions = predictions.flatten()
test_id = np.load(open(DATA_IN_PATH + TEST_ID_DATA, 'rb'), allow_pickle=True)
if not os.path.exists(DATA_OUT_PATH):
os.makedirs(DATA_OUT_PATH)
output = pd.DataFrame(data={"id": list(test_id), "sentiment": list(predictions)} )
output.to_csv(DATA_OUT_PATH + 'movie_review_result_rnn.csv', index=False, quoting=3)
###Output
_____no_output_____
###Markdown
Atari DRL Sample Code Training ParametersIf you want to see the Atari agent during training or just want to play around with the hyperparameters, here are the parameters that you can configure for training:* `gym_env`: Specifies the Atari environment. See this link: https://gym.openai.com/envs/atari* `scaled_height`: Controls the scaling for the frame height during preprocessing.* `scaled_width`: Controls the scaling for the frame width during preprocessing.* `k_frames`: Controls how many frames are stack to represent one state.* `memory_size`: The maximum capacity of replay memory.* `memory_alpha`: Specifies how much priority to apply in replay memory.* `memory_beta`: Controls the importance sampling weights.* `memory_beta_increment`: The value at which beta is linearly annealed towards 1.* `memory_eps`: A value added to the priority to ensure an experience has a non-zero probability to be drawn.* `greedy_start`: The starting value for the exploration rate in the epsilon greedy policy.* `greedy_end`: The ending value for the exploration rate in the epsilon greedy policy.* `greedy_decay`: The value at which the exploration rate is linearly annealed towards `greedy_end`.* `num_episodes`: The total number of episodes that the agent will train for.* `max_timesteps`: The maximum number of states that the agent can experience for each episode.* `discount`: The discount factor in the Q-learning algorithm.* `batch_size`: The batch size used for training.* `target_update`: The number of episodes that must pass for a target update to occur on the target network.* `optim_lr`: The learning rate used in the Adam optimizer.* `optim_eps`: The epsilon value used in the Adam optimizer.* `render`: Determines if the Atari environment is rendered during training or not.* `plot_reward`: Determines if the reward for each episode is plotted during training.* `save_rewards`: Determines if the mean rewards for each episode is saved on disk.* `save_model`: Determines if the target network's state dictionary will be saved to disk after training. Import Statements
###Code
%matplotlib inline
from atari import DQN, AtariAI
###Output
_____no_output_____
###Markdown
Training with DQN without Prioritized Experience Replay
###Code
gym_env = 'Pong-v0'
scaled_height = 84
scaled_width = 84
k_frames = 4
memory_size = 10000
greedy_start = 1.
greedy_end = 0.01
greedy_decay = 1.5e-3
num_episodes = 100
max_timesteps = 10000
discount = 0.99
batch_size = 32
target_update = 10
optim_lr = 2.5e-4
optim_eps = 1e-8
render = True
plot_reward = True
save_rewards = True
save_model = True
AtariAI.train_DQN(gym_env, scaled_height, scaled_width, k_frames, memory_size, greedy_start, greedy_end,
greedy_decay, num_episodes, max_timesteps, discount, batch_size, target_update, optim_lr,
optim_eps, render, plot_reward, save_rewards, save_model)
###Output
_____no_output_____
###Markdown
Training with DDQN and Prioritized Experience Replay
###Code
gym_env = 'Pong-v0'
scaled_height = 84
scaled_width = 84
k_frames = 4
memory_size = 50000
memory_alpha = 0.4
memory_beta = 0.4
memory_beta_increment = 1.5e-5
memory_eps = 1e-2
greedy_start = 1.
greedy_end = 0.01
greedy_decay = 1.5e-3
num_episodes = 100
max_timesteps = 10000
discount = 0.99
batch_size = 32
target_update = 10
optim_lr = 2.5e-4
optim_eps = 1e-8
render = True
plot_reward = True
save_rewards = True
save_model = True
AtariAI.train_DDQN_PER(gym_env, scaled_height, scaled_width, k_frames, memory_size, memory_alpha,
memory_beta, memory_beta_increment, memory_eps, greedy_start, greedy_end,
greedy_decay, num_episodes, max_timesteps, discount, batch_size, target_update,
optim_lr, optim_eps, render, plot_reward, save_rewards, save_model)
###Output
_____no_output_____
###Markdown
Generating ECFPs of 1,024-bits
###Code
import numpy as np
import pandas as pd
# Generating fingerprints
from rdkit import Chem
from rdkit.Chem import AllChem
#PCA
from sklearn.decomposition import PCA
#Visualisation
import seaborn as sns
import matplotlib.pylab as plt
#Splitting data into train and test
from sklearn.model_selection import train_test_split
#Removing variance
from sklearn.feature_selection import VarianceThreshold
#Cross validation
from sklearn.model_selection import StratifiedKFold, cross_val_score
import statistics
# confusion matrix, AUC
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score
#Random Forest
from sklearn.ensemble import RandomForestClassifier
df = pd.read_csv("final_df_smiles.csv", encoding = "ISO-8859-1")
df.head()
# generating ECFPs (or morgan fps) with 1,024 bit length
morgan = []
for i in range (0, len(df)):
mol = Chem.MolFromSmiles(df.iloc[i,0])
fp = AllChem.GetMorganFingerprintAsBitVect(mol,2,nBits=1024)
    #np.unique with return_inverse maps each bit back to its 0/1 value, i.e. converts the RDKit bit vector into a plain Python list
    fp_list = np.unique(fp, return_inverse=True)[1].tolist()
morgan.append(fp_list)
morgan = pd.DataFrame(data= morgan)
morgan = pd.concat([morgan, df.iloc[:, -1]], axis = 1)
display(morgan.head())
print(morgan.shape)
print("Inactive compounds: {}".format(morgan[(morgan['Target']==0)].shape[0]))
print("Active Compounds: {}".format(morgan[(morgan['Target']==1)].shape[0]))
X = morgan.iloc[:, : -1]
y = morgan.iloc[:, -1]
###Output
_____no_output_____
###Markdown
Chemical Space Visualisation
###Code
pca = PCA(n_components=2)
print(X.shape)
res = pca.fit_transform(X)
print(res.shape)
principal = pd.DataFrame(data = res, columns = ['PC_1', 'PC_2'])
finalPCA = pd.concat([principal, morgan[['Target']]], axis = 1)
display(finalPCA.head())
sns.set_style("white")
colours = 'silver', 'steelblue'
ax = sns.scatterplot(data=finalPCA, x='PC_1', y='PC_2', hue = 'Target', palette= colours)
plt.ylabel('PC 2',fontsize=16)
plt.xlabel('PC 1',fontsize=16)
plt.title('Chemical Space (ECFPs of 1024-bits)', fontsize= 18)
plt.xticks(fontsize=13.5)
plt.yticks(fontsize=13.5)
plt.show()
###Output
_____no_output_____
###Markdown
Data Pre-Processing
###Code
# splitting the database into 80% train and 20% test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state= 1)
print("X_train: {}".format(X_train.shape))
print("y_train: {}".format(y_train.shape))
print()
print("X_test: {}".format(X_test.shape))
print("y_test: {}".format(y_test.shape))
# removing features with low variance
def get_variance(df, threshold):
sel = VarianceThreshold(threshold = (threshold))
var = pd.DataFrame(data = sel.fit_transform(df))
features = sel.get_support(indices = True)
var.columns = features
return (var)
# creating three subdatabases which removes features with (i) 100% (ii) 95% and (iii) 90% constant values
X_var_100 = get_variance(X_train,1 *(1- 1))
X_var_95 = get_variance(X_train, 0.95 *(1- 0.95))
X_var_90 = get_variance(X_train, 0.90 *(1- 0.90))
display("X_var_100: {}".format(X_var_100.shape))
display("X_var_95: {}".format(X_var_95.shape))
display("X_var_90: {}".format(X_var_90.shape))
###Output
_____no_output_____
###Markdown
Feature Selection
###Code
# cross validation to compare performances of X_var_100, X_var_95 and X_var_90 on the train set
def cross_validation (num_splits, n_estimators, X, y, random_seed):
sfk = StratifiedKFold(n_splits = num_splits, shuffle=True, random_state= random_seed)
rf = RandomForestClassifier(n_estimators = n_estimators, random_state= random_seed)
rfc_cv_score = cross_val_score(rf, X, y, cv= sfk, scoring='roc_auc')
return (statistics.median(rfc_cv_score), statistics.stdev(rfc_cv_score))
feature_selection = []
for subdatabase in (X_var_100, X_var_95, X_var_90):
feature_selection.append(cross_validation (10, 100, subdatabase , y_train, 1))
feature_selection_df = pd.DataFrame(data = feature_selection, index=['X_var_100', 'X_var_95', 'X_var_90'])
feature_selection_df = feature_selection_df.round(decimals=3)*100
feature_selection_df.columns = ["Median AUC score (%)", "Standard Deviation"]
display(feature_selection_df)
###Output
_____no_output_____
###Markdown
Model Performance on test set
###Code
# select features with best performance on train set, in this case X_var_95
best_features = X_var_95
# select the same features for the test set as X_var_95
colums_name= best_features.columns
X_test_features = X_test[colums_name[:]]
display(X_test_features.head())
# apply rf to obtain performance of best feature combination on test set
rf = RandomForestClassifier(n_estimators = 100, random_state= 1)
rf.fit(best_features, y_train)
#make predictions
y_score = rf.predict(X_test_features)
y_pred_proba = rf.predict_proba(X_test_features)[::,1]
# calculate performance matrices
print("=== Confusion Matrix ===")
CM = confusion_matrix(y_test, y_score)
print(CM)
print('\n')
print("=== Classification Report ===")
print(classification_report(y_test, y_score))
print('\n')
print("=== AUC Score ===")
print(roc_auc_score(y_test, y_pred_proba))
# apply rf algorithm to rank the features based on their importance
feature_imp = pd.Series(rf.feature_importances_,index=colums_name).sort_values(ascending=False)
feature_imp.head(10)
###Output
_____no_output_____ |
Assignment3D5.ipynb | ###Markdown
Assignment3 Day5
###Code
str=["hey this is rakshanda","i am in nashik"]
temp_lst=map(lambda s:s.title(),str)
list(temp_lst)
###Output
_____no_output_____
###Markdown
Assignment3 Day5
###Code
str=["hey this is devesh","i am in nashik"]
#str[i].title
temp_lst=map(lambda s:s.title(),str)
list(temp_lst)
###Output
_____no_output_____ |
P1-completed.ipynb | ###Markdown
Self-Driving Car Engineer Nanodegree Project: **Finding Lane Lines on the Road** ***In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/322/view) for this project.---Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**--- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**--- Your output should look something like this (above) after detecting line segments using the helper functions below Your goal is to connect/average/extrapolate line segments to get output like this **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** Import Packages
###Code
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
###Output
_____no_output_____
###Markdown
Read in an Image
###Code
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
###Output
_____no_output_____
###Markdown
Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**`cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images`cv2.cvtColor()` to grayscale or change color`cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson!
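One of the functions listed above, `cv2.inRange()`, is not used in the helpers that follow. As a rough, optional sketch of how it could be used for white/yellow colour selection before edge detection (the HSV threshold values here are illustrative guesses, not tuned for the test images):
###Code
def select_white_yellow(img):
    """Keep only roughly white and yellow pixels (HSV thresholds are illustrative guesses)."""
    hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
    white_mask = cv2.inRange(hsv, (0, 0, 200), (180, 30, 255))
    yellow_mask = cv2.inRange(hsv, (15, 80, 100), (35, 255, 255))
    mask = cv2.bitwise_or(white_mask, yellow_mask)
    return cv2.bitwise_and(img, img, mask=mask)

plt.imshow(select_white_yellow(image))
###Output
_____no_output_____
###Markdown
The helper functions below are the ones actually used for the pipeline: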
###Code
import math
import scipy.stats as stats
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
#Initialize the left and right co-ordinates list collector.
#Also identify if a left or right line exists in a frame
left_X = []
left_Y = []
right_X = []
right_Y = []
left_exists=False
right_exists=False
for line in lines:
for x1,y1,x2,y2 in line:
#calculate slope of this particular line
dy= y2 - y1
dx= x2 - x1
            #Skip vertical segments (dx == 0): they have no defined slope, and
            #comparing a None slope below would raise a TypeError in Python 3.
            if dx :
                slope = dy/dx
            else:
                continue
            if slope < 0 :
#left lane parts...collect for averaging later
left_X.append(x1)
left_X.append(x2)
left_Y.append(y1)
left_Y.append(y2)
elif slope > 0 :
#right lane parts...collect for averaging later
right_X.append(x1)
right_X.append(x2)
right_Y.append(y1)
right_Y.append(y2)
left_X = np.asarray(left_X)
left_Y = np.asarray(left_Y)
right_X = np.asarray(right_X)
right_Y = np.asarray(right_Y)
#If the lane exists then calculate the average slope and intercept
#Used a simple average to begin with but realized we could use linregress method based on online research..
if left_X.size != 0 and left_Y.size != 0:
left_line = stats.linregress(left_X,left_Y)
left_exists= True
if right_X.size != 0 and right_Y.size != 0:
right_line = stats.linregress(right_X,right_Y)
right_exists =True
    if left_exists:
        #Extrapolate the left lane from the bottom of the image up to y = 330 and draw it.
        #The bottom y-coordinate is the image height, img.shape[0] (not img.shape[1], the width).
        bottom_y = img.shape[0]
        final_left_line = [[int((bottom_y - left_line.intercept)//left_line.slope), bottom_y],
                          [int((330 - left_line.intercept)//left_line.slope), 330]]
        cv2.line(img,(final_left_line[0][0],final_left_line[0][1]),
                 (final_left_line[1][0],final_left_line[1][1]),color,thickness)
    if right_exists:
        #Extrapolate and draw the right lane in the same way.
        bottom_y = img.shape[0]
        final_right_line = [[int((bottom_y - right_line.intercept)//right_line.slope), bottom_y],
                          [int((330 - right_line.intercept)//right_line.slope), 330]]
        cv2.line(img,(final_right_line[0][0],final_right_line[0][1]),
                 (final_right_line[1][0],final_right_line[1][1]),color,thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines,[255,0,0],10)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
###Output
_____no_output_____
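###Markdown
The pipeline below works on grayscale edges rather than explicit color selection, but `cv2.inRange()` and `cv2.bitwise_and()` from the list above can be combined to filter the white lane markings by color. The next cell is an optional, minimal sketch of that idea; the threshold values are illustrative rather than tuned for these images.
###Code
# Optional helper (not required by the project): keep only near-white pixels of
# an RGB image using cv2.inRange() and cv2.bitwise_and() from the list above.
def select_white(img, lower=(200, 200, 200), upper=(255, 255, 255)):
    """Zero out every pixel whose RGB values fall outside [lower, upper]."""
    mask = cv2.inRange(img, np.array(lower, dtype=np.uint8),
                       np.array(upper, dtype=np.uint8))
    return cv2.bitwise_and(img, img, mask=mask)
# Example usage on the test image loaded earlier:
# plt.imshow(select_white(image))
###Output
_____no_output_____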
###Markdown
Test Images Build your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.**
###Code
import os
img_list = os.listdir("test_images/")
#print(img_list)
###Output
_____no_output_____
###Markdown
Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report. Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
###Code
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
#define the pipeline as a function to be used later
#First convert to grayscale
#Then use gaussian blur
#calculate the edges using Canny algorithm
#Identify the region of interest
#use Hough algorithm to the draw the left and right lanes
def find_lane(test_image):
#convert to grayscale
gray = grayscale(test_image)
#plt.imshow(gray,cmap='gray')
# Define a kernel size and apply Gaussian smoothing
kernel_size = 11
gray_blur = gaussian_blur(gray,kernel_size)
#plt.imshow(gray_blur,cmap='gray')
# Define our parameters for Canny and apply
low_threshold = 60
high_threshold = 120
edges = canny(gray_blur, low_threshold, high_threshold)
#plt.imshow(edges,cmap='gray')
#Region of interest
imshape = test_image.shape
vertices = np.array([[(100,imshape[0]),(500, 300), (500, 300), (900,imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
#plt.imshow(masked_edges,cmap='gray')
# Define the Hough transform parameters
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 20 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 4 #minimum number of pixels making up a line
max_line_gap = 8 # maximum gap in pixels between connectable line segments
line_img= hough_lines(masked_edges, rho, theta, threshold,min_line_length, max_line_gap)
#plt.imshow(line_img,cmap='gray')
lines_edges = weighted_img(line_img,test_image , 0.8, 1.8, 0.)
#plt.imshow(lines_edges)
return lines_edges
# Read in the images from test_images folder and send to the find_lane pipeline.
# On getting back the image with lanes output to test_images_output folder
# Make sure the output folder exists before saving (it may already be present in the project repo).
os.makedirs("test_images_output", exist_ok=True)
for img in img_list:
file_path = "test_images/" + img
#print(file_path)
test_image = mpimg.imread(file_path)
result = find_lane(test_image)
output_filename = "test_images_output/" + "Added_Lane_" + img
#print(output_filename)
plt.imsave(output_filename,result)
#plt.imshow(result)
###Output
_____no_output_____
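###Markdown
The cell below is an optional sketch of the tuning suggested above: it sweeps a few illustrative pairs of Canny thresholds over the test image loaded earlier so their effect on the detected edges can be compared side by side before committing to values in the pipeline.
###Code
# Optional: visual sweep of a few Canny threshold pairs (illustrative values only).
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for ax, (low, high) in zip(axes, [(40, 80), (60, 120), (100, 200)]):
    edges = canny(gaussian_blur(grayscale(image), 11), low, high)
    ax.imshow(edges, cmap='gray')
    ax.set_title('Canny thresholds {}-{}'.format(low, high))
plt.show()
###Output
_____no_output_____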
###Markdown
Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: `solidWhiteRight.mp4` and `solidYellowLeft.mp4`. **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.****If you get an error that looks like this:**```NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()```**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
###Code
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
#convert to grayscale
gray = grayscale(image)
#plt.imshow(gray,cmap='gray')
# Define a kernel size and apply Gaussian smoothing
kernel_size = 11
gray_blur = gaussian_blur(gray,kernel_size)
#plt.imshow(gray_blur,cmap='gray')
# Define our parameters for Canny and apply
low_threshold = 60
high_threshold = 120
edges = canny(gray_blur, low_threshold, high_threshold)
#plt.imshow(edges,cmap='gray')
#Region of interest
imshape = image.shape
vertices = np.array([[(0,imshape[0]),(500, 300), (500, 300), (960,imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
#plt.imshow(masked_edges,cmap='gray')
# Define the Hough transform parameters
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 20 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 3 #minimum number of pixels making up a line
max_line_gap = 8 # maximum gap in pixels between connectable line segments
line_img= hough_lines(masked_edges, rho, theta, threshold,min_line_length, max_line_gap)
#plt.imshow(line_img,cmap='gray')
result = weighted_img(line_img,image, 0.8, 1.8, 0.)
return result
###Output
_____no_output_____
###Markdown
Let's try the one with the solid white lane on the right first ...
###Code
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
###Output
_____no_output_____
###Markdown
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
###Code
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
###Output
_____no_output_____
###Markdown
Improve the draw_lines() function**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".****Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky!
###Code
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
###Output
_____no_output_____
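###Markdown
The `draw_lines()` helper above averages and extrapolates the detected segments with `scipy.stats.linregress`. An equivalent way to express the same idea, sketched below purely for reference, is to fit a first-degree polynomial to the collected endpoints with `np.polyfit`; the function is standalone and is not wired into the pipeline.
###Code
# Reference sketch (not used by the pipeline above): fit one lane line to the
# collected segment endpoints and extrapolate it between y_top and the image bottom.
def fit_lane_line(xs, ys, img_height, y_top=330):
    slope, intercept = np.polyfit(xs, ys, 1)   # least-squares fit of y = m*x + b
    x_bottom = int((img_height - intercept) / slope)
    x_top = int((y_top - intercept) / slope)
    return (x_bottom, img_height), (x_top, y_top)
# Illustrative usage: given the left_X/left_Y lists collected inside draw_lines(),
# fit_lane_line(left_X, left_Y, image.shape[0]) returns the two endpoints to pass
# to cv2.line().
###Output
_____no_output_____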
###Markdown
Writeup and Submission If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. Optional Challenge Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
###Code
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
###Output
_____no_output_____ |
example-files/notebooks/aws-1000genomes.ipynb | ###Markdown
Exploratory Data Analysis of Genomic Datasets with ADAM and Mango Configuring ADAM and Mango on EMR Mango uses docker containers to be run easily on EMR. To get everything setup and installed, follow EMR documentation at http://bdg-mango.readthedocs.io/en/latest/cloud/emr.html. Loading Data from the 1000 Genomes ProjectIn this tutorial, we will use ADAM and Mango to discover interesting variants in the child of a 1000 Genomes trio.First, let’s import ADAM and Mango modules, as well as any Spark modules we need:
###Code
# Import ADAM modules
from bdgenomics.adam.adamContext import ADAMContext
from bdgenomics.adam.ds import AlignmentDataset, CoverageDataset
from bdgenomics.adam.stringency import LENIENT, _toJava
# Import Mango modules
from bdgenomics.mango.alignments import *
from bdgenomics.mango.coverage import CoverageDistribution
# Import Spark modules
from pyspark.sql import functions as sf
###Output
_____no_output_____
###Markdown
Next, we will create an ADAMContext. ADAMContext allows us to load and manipulate genomic data.
###Code
# Create ADAM Context
ac = ADAMContext(spark)
###Output
_____no_output_____
###Markdown
Variant Analysis with Spark SQL In this analysis, we will view a trio (NA19685, NA19661, and NA19660) and search for variants that are present in the child but not present in the parents. These are interesting regions, as they may indicate sites of de novo variation that may contribute to multiple disorders. First, we will load in a subset of variant data from chromosome 17:
###Code
pathPrefix = 's3://1000genomes/phase1/analysis_results/integrated_call_sets/'
genotypesPath = pathPrefix + 'ALL.chr17.integrated_phase1_v3.20101123.snps_indels_svs.genotypes.vcf.gz'
genotypes = ac.loadGenotypes(genotypesPath)
genotypes_df = genotypes.toDF()
###Output
_____no_output_____
###Markdown
We can take a look at the schema by printing the columns in the dataframe.
###Code
# cache genotypes and show the schema
genotypes_df.columns
###Output
_____no_output_____
###Markdown
This genotypes dataset contains all samples from the 1000 Genomes Project. Therefore, we will next filter genotypes to only consider samples that are in the NA19685 trio.
###Code
# trio IDs
IDs = ['NA19685','NA19661','NA19660']
trio_df = genotypes_df.filter(genotypes_df["sampleId"].isin(IDs))
###Output
_____no_output_____
###Markdown
We will next add a new column to our dataframe that determines the genomic location of each variant. This is defined by the chromosome (referenceName) and the start and end position of the variant.
###Code
# Add ReferenceRegion column and group by referenceRegion
trios_with_referenceRegion = trio_df.withColumn('ReferenceRegion',
sf.concat(sf.col('referenceName'),sf.lit(':'), sf.col('start'), sf.lit('-'), sf.col('end')))
###Output
_____no_output_____
###Markdown
Now, we want to query our dataset to find de novo variants. But first, we must register our dataframe with Spark SQL.
###Code
# Register df with Spark SQL
trios_with_referenceRegion.createOrReplaceTempView("trios")
###Output
_____no_output_____
###Markdown
Now that our dataframe is registered, we can run SQL queries on it. For our first query, we will select the names of a subset of variants belonging to sample NA19685 that have at least one alternative (ALT) allele.
###Code
# filter by alleles. This is a list of variant names that have an alternate allele for the child
alternate_variant_sites = spark.sql("SELECT variant.names[0] AS snp FROM trios \
WHERE array_contains(alleles, 'ALT') AND sampleId == 'NA19685'")
collected_sites = list(map(lambda x: x.snp, alternate_variant_sites.take(100)))
###Output
_____no_output_____
###Markdown
For our next query, we will filter a subset of sites in which the parents have both reference alleles. We then filter these variants by the set produced above from the child.
###Code
# get parent records and filter by only REF locations for variant names that were found in the child with an ALT
# Parenthesize the OR so that the REF-only condition applies to both parents
# (without the parentheses, AND binds more tightly than OR in SQL).
filtered1 = spark.sql("SELECT * FROM trios WHERE (sampleId == 'NA19661' or sampleId == 'NA19660') \
                      AND !array_contains(alleles, 'ALT')")
filtered2 = filtered1.filter(filtered1["variant.names"][0].isin(collected_sites))
snp_counts = filtered2.take(100)
# view snp names as a list
set([x.variant.names[0] for x in snp_counts])
###Output
_____no_output_____
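###Markdown
The two queries above find candidate sites in two steps: child genotypes with an ALT allele, then parent genotypes restricted to those variant names. For reference, the cell below sketches how the same de novo filter could be written as a single query with a `NOT EXISTS` anti-join against the same `trios` view; this sketch has not been tested against the full 1000 Genomes schema and is not used by the rest of the notebook.
###Code
# Hedged sketch: keep child-ALT sites that have no parent ALT call, in one query.
# Not used below; shown only to illustrate the de novo filter as a single step.
candidate_sites = spark.sql("""
    SELECT child.variant.names[0] AS snp
    FROM trios child
    WHERE child.sampleId = 'NA19685'
      AND array_contains(child.alleles, 'ALT')
      AND NOT EXISTS (
          SELECT 1 FROM trios parent
          WHERE parent.sampleId IN ('NA19660', 'NA19661')
            AND parent.variant.names[0] = child.variant.names[0]
            AND array_contains(parent.alleles, 'ALT'))
""")
# candidate_sites.show(10)
###Output
_____no_output_____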
###Markdown
Working with Alignment Data Now, we can explore these specific variant sites in the raw genomic alignment data. First, let’s load in the data for the NA19685 trio:
###Code
# load in NA19685 exome from s3a
childReadsPath = 's3a://1000genomes/phase1/data/NA19685/exome_alignment/NA19685.mapped.illumina.mosaik.MXL.exome.20110411.bam'
parent1ReadsPath = 's3a://1000genomes/phase1/data/NA19660/exome_alignment/NA19660.mapped.illumina.mosaik.MXL.exome.20110411.bam'
parent2ReadsPath = 's3a://1000genomes/phase1/data/NA19661/exome_alignment/NA19661.mapped.illumina.mosaik.MXL.exome.20110411.bam'
childReads = ac.loadAlignments(childReadsPath, stringency=LENIENT)
parent1Reads = ac.loadAlignments(parent1ReadsPath, stringency=LENIENT)
parent2Reads = ac.loadAlignments(parent2ReadsPath, stringency=LENIENT)
###Output
_____no_output_____
###Markdown
Quality Control of Alignment Data One popular analysis to visually re-affirm the quality of genomic alignment data is by viewing coverage distribution. Coverage distribution gives us an idea of the read coverage we have across a sample. Next, we will generate a sample coverage distribution plot for the child alignment data on chromosome 17.
###Code
# calculate read coverage
# Takes 2-3 minutes
childCoverage = childReads.transform(lambda x: x.filter(x.referenceName == "17")).toCoverage()
###Output
_____no_output_____
###Markdown
Now that read coverage has been calculated, we will compute the coverage distribution for the child sample and plot it.
###Code
# Calculate coverage distribution
# You can check the progress in the SparkUI by navigating to
# <PUBLIC_MASTER_DNS>:8088 and clicking on the currently running Spark application.
cd = CoverageDistribution(spark, childCoverage, bin_size = 1)
ax, results = cd.plotDistributions(normalize=True, cumulative=False)
ax.set_title("Coverage Distribution")
ax.set_ylabel("Counts")
ax.set_xlabel("Coverage Depth")
ax.set_xscale("log")
plt.show()
###Output
_____no_output_____
###Markdown
Now that we are done with coverage, we can unpersist these datasets to clear space in memory for the next analysis.
###Code
childCoverage.unpersist()
###Output
_____no_output_____
###Markdown
Viewing Sites with Missense Variants in the ProbandAfter verifying alignment data and filtering variants, we have 4 genes with potential missense mutations in the proband, including YBX2, ZNF286B, KSR1, and GNA13. We can visually verify these sites by filtering and viewing the raw reads of the child and parents.First, let's view the child reads. If we zoom in to the location of the GNA13 variant (63052580-63052581) we can see a heterozygous T to A call.
###Code
# view missense variant at GNA13: 63052580-63052581 (SNP rs201316886) in child
# Takes about 2 minutes to collect data from workers
childViz = AlignmentSummary(spark, ac, childReads)
contig = "17"
start = 63052180
end = 63052981
childViz.viewPileup(contig, start, end)
###Output
_____no_output_____
###Markdown
It looks like there indeed is a variant at this position, possibly a heterozygous SNP with alternate allele A. Let's look at the parent data to verify this variant does not appear in the parents.
###Code
# view missense variant at GNA13: 63052580-63052581 in parent 1
parent1Viz = AlignmentSummary(spark, ac, parent1Reads)
contig = "17"
start = 63052180
end = 63052981
parent1Viz.viewPileup(contig, start, end)
# view missense variant at GNA13: 63052580-63052581 in parent 2
parent2Viz = AlignmentSummary(spark, ac, parent2Reads)
contig = "17"
start = 63052180
end = 63052981
parent2Viz.viewPileup(contig, start, end)
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis of Genomic Datasets with ADAM and Mango Configuring ADAM and Mango on EMR Mango uses docker containers to be run easily on EMR. To get everything setup and installed, follow EMR documentation at http://bdg-mango.readthedocs.io/en/latest/cloud/emr.html. Loading Data from the 1000 Genomes ProjectIn this tutorial, we will use ADAM and Mango to discover interesting variants in the child of a 1000 Genomes trio.First, let’s import ADAM and Mango modules, as well as any Spark modules we need:
###Code
# Import ADAM modules
from bdgenomics.adam.adamContext import ADAMContext
from bdgenomics.adam.rdd import AlignmentRecordRDD, CoverageRDD
from bdgenomics.adam.stringency import LENIENT, _toJava
# Import Mango modules
from bdgenomics.mango.rdd import GenomicVizRDD
from bdgenomics.mango.QC import CoverageDistribution
# Import Spark modules
from pyspark.sql import functions as sf
###Output
_____no_output_____
###Markdown
Next, we will create an ADAMContext and GenomicVizRDD. While ADAMContext allows us to load and manipulate genomic data, GenomicVizRDD let's us summarize and visualize such datasets.
###Code
# Create ADAM Context
ac = ADAMContext(spark)
genomicRDD = GenomicVizRDD(spark)
###Output
_____no_output_____
###Markdown
Variant Analysis with Spark SQL In this analysis, we will view a trio (NA19685, NA19661, and NA19660) and search for variants that are present in the child but not present in the parents. These are interesting regions, as they may indicate sites of de novo variation that may contribute to multiple disorders. First, we will load in a subset of variant data from chromosome 17:
###Code
genotypesPath = 's3://1000genomes/phase1/analysis_results/integrated_call_sets/ALL.chr17.integrated_phase1_v3.20101123.snps_indels_svs.genotypes.vcf.gz'
genotypes = ac.loadGenotypes(genotypesPath)
genotypes_df = genotypes.toDF()
###Output
_____no_output_____
###Markdown
We can take a look at the schema by printing the columns in the dataframe.
###Code
# cache genotypes and show the schema
genotypes_df.columns
###Output
_____no_output_____
###Markdown
This genotypes dataset contains all samples from the 1000 Genomes Project. Therefore, we will next filter genotypes to only consider samples that are in the NA19685 trio, and cache the results in memory.
###Code
# trio IDs
IDs = ['NA19685','NA19661','NA19660']
# Filter by individuals in the trio
trio_df = genotypes_df.filter(genotypes_df["sampleId"].isin(IDs))
trio_df.cache()
trio_df.count()
###Output
_____no_output_____
###Markdown
We will next add a new column to our dataframe that determines the genomic location of each variant. This is defined by the chromosome (contigName) and the start and end position of the variant.
###Code
# Add ReferenceRegion column and group by referenceRegion
trios_with_referenceRegion = trio_df.withColumn('ReferenceRegion',
sf.concat(sf.col('contigName'),sf.lit(':'), sf.col('start'), sf.lit('-'), sf.col('end')))
###Output
_____no_output_____
###Markdown
Now, we want to query our dataset to find de novo variants. But first, we must register our dataframe with Spark SQL.
###Code
# Register df with Spark SQL
trios_with_referenceRegion.createOrReplaceTempView("trios")
###Output
_____no_output_____
###Markdown
Now that our dataframe is registered, we can run SQL queries on it. For our first query, we will select the names of variants belonging to sample NA19685 that have at least one alternative (ALT) allele.
###Code
# filter by alleles. This is a list of variant names that have an alternate allele for the child
alternate_variant_sites = spark.sql("SELECT variant.names[0] AS snp FROM trios \
WHERE array_contains(alleles, 'ALT') AND sampleId == 'NA19685'")
# Wrap map() in list() so the collected names can be reused with isin() below
# (under Python 3, map() returns a one-shot iterator rather than a list).
collected_sites = list(map(lambda x: x.snp, alternate_variant_sites.collect()))
###Output
_____no_output_____
###Markdown
For our next query, we will filter sites in which the parents have both reference alleles. We then filter these variants by the set produced above from the child.
###Code
# get parent records and filter by only REF locations for variant names that were found in the child with an ALT
# Parenthesize the OR so that the REF-only condition applies to both parents
# (without the parentheses, AND binds more tightly than OR in SQL).
filtered1 = spark.sql("SELECT * FROM trios WHERE (sampleId == 'NA19661' or sampleId == 'NA19660') \
                      AND !array_contains(alleles, 'ALT')")
filtered2 = filtered1.filter(filtered1["variant.names"][0].isin(collected_sites))
snp_counts = filtered2.groupBy("variant.names").count().collect()
# collect snp names as a list
snp_names = map(lambda x: x.names, snp_counts)
denovo_snps = [item for sublist in snp_names for item in sublist]
denovo_snps
###Output
_____no_output_____
###Markdown
Now that we have found some interesting variants, we can unpersist our genotypes from memory.
###Code
trio_df.unpersist()
###Output
_____no_output_____
###Markdown
Working with Alignment Data Now, we can explore these specific variant sites in the raw genomic alignment data. First, let’s load in the data for the NA19685 trio:
###Code
# load in NA19685 exome from s3a
childReadsPath = 's3a://1000genomes/phase1/data/NA19685/exome_alignment/NA19685.mapped.illumina.mosaik.MXL.exome.20110411.bam'
parent1ReadsPath = 's3a://1000genomes/phase1/data/NA19660/exome_alignment/NA19660.mapped.illumina.mosaik.MXL.exome.20110411.bam'
parent2ReadsPath = 's3a://1000genomes/phase1/data/NA19661/exome_alignment/NA19661.mapped.illumina.mosaik.MXL.exome.20110411.bam'
childReads = ac.loadAlignments(childReadsPath, stringency=LENIENT)
parent1Reads = ac.loadAlignments(parent1ReadsPath, stringency=LENIENT)
parent2Reads = ac.loadAlignments(parent2ReadsPath, stringency=LENIENT)
###Output
_____no_output_____
###Markdown
We now have alignment data for three individuals in our trio. However, the data has not yet been loaded into memory. To cache these datasets for fast subsequent access, we will run the cache() function.
###Code
# cache child RDD
# takes about 2 minutes, on 4 c3.4xlarge worker nodes
childReads.cache()
childReads.toDF().count()
###Output
_____no_output_____
###Markdown
Quality Control of Alignment Data One popular analysis to visually re-affirm the quality of genomic alignment data is by viewing coverage distribution. Coverage distribution gives us an idea of the read coverage we have across a sample. Next, we will generate a sample coverage distribution plot for the child alignment data on chromosome 17.
###Code
# calculate read coverage
# Takes 2-3 minutes
childCoverage = childReads.transform(lambda x: x.filter(x.contigName == "17")).toCoverage()
childCoverage.cache()
childCoverage.toDF().count()
###Output
_____no_output_____
###Markdown
Now that coverage data is calculated and cached, we will compute the coverage distribution for the child sample and plot it.
###Code
# Calculate coverage distribution
# You can check the progress in the SparkUI by navigating to
# <PUBLIC_MASTER_DNS>:8088 and clicking on the currently running Spark application.
cd = CoverageDistribution(spark, childCoverage)
x = cd.plot(normalize=True, cumulative=False, xScaleLog=True, labels="NA19685")
###Output
_____no_output_____
###Markdown
Now that we are done with coverage, we can unpersist these datasets to clear space in memory for the next analysis.
###Code
childCoverage.unpersist()
###Output
_____no_output_____
###Markdown
Viewing Sites with Missense Variants in the ProbandAfter verifying alignment data and filtering variants, we have 4 genes with potential missense mutations in the proband, including YBX2, ZNF286B, KSR1, and GNA13. We can visually verify these sites by filtering and viewing the raw reads of the child and parents.First, let's view the child reads. If we zoom in to the location of the GNA13 variant (63052580-63052581) we can see a heterozygous T to A call.
###Code
# view missense variant at GNA13: 63052580-63052581 (SNP rs201316886) in child
# Takes about 2 minutes to collect data from workers
contig = "17"
start = 63052180
end = 63052981
genomicRDD.ViewAlignments(childReads, contig, start, end)
###Output
_____no_output_____
###Markdown
It looks like there indeed is a variant at this position, possibly a heterozygous SNP with alternate allele A. Let's look at the parent data to verify this variant does not appear in the parents.
###Code
# view missense variant at GNA13: 63052580-63052581 in parent 1
contig = "17"
start = 63052180
end = 63052981
genomicRDD.ViewAlignments(parent1Reads, contig, start, end)
# view missense variant at GNA13: 63052580-63052581 in parent 2
contig = "17"
start = 63052180
end = 63052981
genomicRDD.ViewAlignments(parent2Reads, contig, start, end)
###Output
_____no_output_____
###Markdown
This confirms that the variant is indeed present only in the proband, not in the parents. Summary To summarize, this post demonstrated how to set up and run ADAM and Mango in EMR. We demonstrated how to use these tools in an interactive notebook environment to explore the 1000 Genomes dataset, a publicly available dataset on Amazon S3. We used these tools to inspect 1000 Genomes data quality, query for interesting variants in the genome, and validate results through visualization of raw datasets.
###Code
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis of Genomic Datasets with ADAM and Mango Configuring ADAM and Mango on EMR Mango uses docker containers to be run easily on EMR. To get everything setup and installed, follow EMR documentation at http://bdg-mango.readthedocs.io/en/latest/cloud/emr.html. Loading Data from the 1000 Genomes ProjectIn this tutorial, we will use ADAM and Mango to discover interesting variants in the child of a 1000 Genomes trio.First, let’s import ADAM and Mango modules, as well as any Spark modules we need:
###Code
# Import ADAM modules
from bdgenomics.adam.adamContext import ADAMContext
from bdgenomics.adam.rdd import AlignmentDataset, CoverageDataset
from bdgenomics.adam.stringency import LENIENT, _toJava
# Import Mango modules
from bdgenomics.mango.alignments import *
from bdgenomics.mango.coverage import CoverageDistribution
# Import Spark modules
from pyspark.sql import functions as sf
###Output
_____no_output_____
###Markdown
Next, we will create an ADAMContext. ADAMContext allows us to load and manipulate genomic data.
###Code
# Create ADAM Context
ac = ADAMContext(spark)
###Output
_____no_output_____
###Markdown
Variant Analysis with Spark SQL In this analysis, we will view a trio (NA19685, NA19661, and NA19660) and search for variants that are present in the child but not present in the parents. These are interesting regions, as they may indicate sites of de novo variation that may contribute to multiple disorders. First, we will load in a subset of variant data from chromosome 17:
###Code
pathPrefix = 's3://1000genomes/phase1/analysis_results/integrated_call_sets/'
genotypesPath = pathPrefix + 'ALL.chr17.integrated_phase1_v3.20101123.snps_indels_svs.genotypes.vcf.gz'
genotypes = ac.loadGenotypes(genotypesPath)
genotypes_df = genotypes.toDF()
###Output
_____no_output_____
###Markdown
We can take a look at the schema by printing the columns in the dataframe.
###Code
# cache genotypes and show the schema
genotypes_df.columns
###Output
_____no_output_____
###Markdown
This genotypes dataset contains all samples from the 1000 Genomes Project. Therefore, we will next filter genotypes to only consider samples that are in the NA19685 trio.
###Code
# trio IDs
IDs = ['NA19685','NA19661','NA19660']
trio_df = genotypes_df.filter(genotypes_df["sampleId"].isin(IDs))
###Output
_____no_output_____
###Markdown
We will next add a new column to our dataframe that determines the genomic location of each variant. This is defined by the chromosome (referenceName) and the start and end position of the variant.
###Code
# Add ReferenceRegion column and group by referenceRegion
trios_with_referenceRegion = trio_df.withColumn('ReferenceRegion',
sf.concat(sf.col('referenceName'),sf.lit(':'), sf.col('start'), sf.lit('-'), sf.col('end')))
###Output
_____no_output_____
###Markdown
Now, we want to query our dataset to find de novo variants. But first, we must register our dataframe with Spark SQL.
###Code
# Register df with Spark SQL
trios_with_referenceRegion.createOrReplaceTempView("trios")
###Output
_____no_output_____
###Markdown
Now that our dataframe is registered, we can run SQL queries on it. For our first query, we will select the names of a subset of variants belonging to sample NA19685 that have at least one alternative (ALT) allele.
###Code
# filter by alleles. This is a list of variant names that have an alternate allele for the child
alternate_variant_sites = spark.sql("SELECT variant.names[0] AS snp FROM trios \
WHERE array_contains(alleles, 'ALT') AND sampleId == 'NA19685'")
collected_sites = list(map(lambda x: x.snp, alternate_variant_sites.take(100)))
###Output
_____no_output_____
###Markdown
For our next query, we will filter a subset of sites in which the parents have both reference alleles. We then filter these variants by the set produced above from the child.
###Code
# get parent records and filter by only REF locations for variant names that were found in the child with an ALT
# Parenthesize the OR so that the REF-only condition applies to both parents
# (without the parentheses, AND binds more tightly than OR in SQL).
filtered1 = spark.sql("SELECT * FROM trios WHERE (sampleId == 'NA19661' or sampleId == 'NA19660') \
                      AND !array_contains(alleles, 'ALT')")
filtered2 = filtered1.filter(filtered1["variant.names"][0].isin(collected_sites))
snp_counts = filtered2.take(100)
# view snp names as a list
set([x.variant.names[0] for x in snp_counts])
###Output
_____no_output_____
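###Markdown
For comparison with the SQL used above, the same child-ALT filter can also be written with the DataFrame API. The cell below is a small optional sketch that mirrors the first query; it is not used by the rest of the notebook.
###Code
# Optional sketch: the child-ALT filter from the first query, expressed with the
# DataFrame API instead of SQL (not used by the rest of the notebook).
child_alt_df = (trios_with_referenceRegion
                .filter(sf.col("sampleId") == "NA19685")
                .filter(sf.array_contains(sf.col("alleles"), "ALT"))
                .select(sf.col("variant.names")[0].alias("snp")))
# child_alt_df.show(10)
###Output
_____no_output_____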
###Markdown
Working with Alignment Data Now, we can explore these specific variant sites in the raw genomic alignment data. First, let’s load in the data for the NA19685 trio:
###Code
# load in NA19685 exome from s3a
childReadsPath = 's3a://1000genomes/phase1/data/NA19685/exome_alignment/NA19685.mapped.illumina.mosaik.MXL.exome.20110411.bam'
parent1ReadsPath = 's3a://1000genomes/phase1/data/NA19660/exome_alignment/NA19660.mapped.illumina.mosaik.MXL.exome.20110411.bam'
parent2ReadsPath = 's3a://1000genomes/phase1/data/NA19661/exome_alignment/NA19661.mapped.illumina.mosaik.MXL.exome.20110411.bam'
childReads = ac.loadAlignments(childReadsPath, stringency=LENIENT)
parent1Reads = ac.loadAlignments(parent1ReadsPath, stringency=LENIENT)
parent2Reads = ac.loadAlignments(parent2ReadsPath, stringency=LENIENT)
###Output
_____no_output_____
###Markdown
Quality Control of Alignment Data One popular analysis to visually re-affirm the quality of genomic alignment data is by viewing coverage distribution. Coverage distribution gives us an idea of the read coverage we have across a sample. Next, we will generate a sample coverage distribution plot for the child alignment data on chromosome 17.
###Code
# calculate read coverage
# Takes 2-3 minutes
childCoverage = childReads.transform(lambda x: x.filter(x.referenceName == "17")).toCoverage()
###Output
_____no_output_____
###Markdown
Now that read coverage has been calculated, we will compute the coverage distribution for the child sample and plot it.
###Code
# Calculate coverage distribution
# You can check the progress in the SparkUI by navigating to
# <PUBLIC_MASTER_DNS>:8088 and clicking on the currently running Spark application.
cd = CoverageDistribution(spark, childCoverage, bin_size = 1)
ax, results = cd.plotDistributions(normalize=True, cumulative=False)
ax.set_title("Coverage Distribution")
ax.set_ylabel("Counts")
ax.set_xlabel("Coverage Depth")
ax.set_xscale("log")
plt.show()
###Output
_____no_output_____
###Markdown
Now that we are done with coverage, we can unpersist these datasets to clear space in memory for the next analysis.
###Code
childCoverage.unpersist()
###Output
_____no_output_____
###Markdown
Viewing Sites with Missense Variants in the ProbandAfter verifying alignment data and filtering variants, we have 4 genes with potential missense mutations in the proband, including YBX2, ZNF286B, KSR1, and GNA13. We can visually verify these sites by filtering and viewing the raw reads of the child and parents.First, let's view the child reads. If we zoom in to the location of the GNA13 variant (63052580-63052581) we can see a heterozygous T to A call.
###Code
# view missense variant at GNA13: 63052580-63052581 (SNP rs201316886) in child
# Takes about 2 minutes to collect data from workers
childViz = AlignmentSummary(spark, ac, childReads)
contig = "17"
start = 63052180
end = 63052981
childViz.viewPileup(contig, start, end)
###Output
_____no_output_____
###Markdown
It looks like there indeed is a variant at this position, possibly a heterozygous SNP with alternate allele A. Let's look at the parent data to verify this variant does not appear in the parents.
###Code
# view missense variant at GNA13: 63052580-63052581 in parent 1
parent1Viz = AlignmentSummary(spark, ac, parent1Reads)
contig = "17"
start = 63052180
end = 63052981
parent1Viz.viewPileup(contig, start, end)
# view missense variant at GNA13: 63052580-63052581 in parent 2
parent2Viz = AlignmentSummary(spark, ac, parent2Reads)
contig = "17"
start = 63052180
end = 63052981
parent2Viz.viewPileup(contig, start, end)
###Output
_____no_output_____ |
notebooks/zero_copy_loading.ipynb | ###Markdown
zero_copy_loading.ipynb This notebook contains a more-easily runnable version of the code in my blog post, ["How to Load PyTorch Models 340 Times Faster with Ray"](https://medium.com/ibm-data-ai/how-to-load-pytorch-models-340-times-faster-with-ray-8be751a6944c). The notebook contains text from an early draft of the blog post, intermixed with the code used to generate the numbers in the plot. Formatting has been changed slightly so that the notebook passes the [`pycodestyle`](https://pycodestyle.pycqa.org/en/latest/intro.html) linter. Note that timings in the outputs included in this copy of the notebook are slightly different from the timings in the blog, because this notebook was rerun on a somewhat different setup prior to checking it into GitHub. There is also additional code at the bottom that performs timings on ResNet-50 and verifies that the models tested work correctly after being loaded with zero-copy loading. If you have a GPU with the CUDA libraries installed, some of the cells in this notebook will detect the presence of a GPU and perform additional tests. You can find instructions for setting up a Python environment to run this notebook in [README.md](./README.md). Once the environment is set up, you should be able to run this notebook from within that environment. Please open an issue against this GitHub repository if you have trouble running it on your machine.
###Code
# Initialization boilerplate
from typing import Tuple, List, Dict
import time
import ray
import transformers
import torch
import torchvision
import numpy as np
import pandas as pd
import urllib
import copy
import os
transformers.logging.set_verbosity_error()
def reboot_ray():
if ray.is_initialized():
ray.shutdown()
if torch.cuda.is_available():
return ray.init(num_gpus=1)
else:
return ray.init()
pass
reboot_ray()
###Output
_____no_output_____
###Markdown
How to Load PyTorch Models 340 Times Faster with Ray*One of the challenges of using deep learning in production is managing the cost of loading huge models for inference. In this article, we'll show how you can reduce this cost almost to zero by leveraging features of PyTorch and Ray.* IntroductionDeep learning models are big and cumbersome. Because of their size, they take a long time to load. This model loading cost leads to a great deal of engineering effort when deploying models in production. Model inference platforms like [TFX](https://www.tensorflow.org/tfx/serving/serving_basic), [TorchServe](https://github.com/pytorch/serve), and [IBM Spectrum Conductor Deep Learning Impact](https://www.ibm.com/products/spectrum-deep-learning-impact?cm_mmc=text_extensions_for_pandas) run deep learning models inside dedicated, long-lived processes and containers, with lots of complex code to start and stop containers and to pass data between them.But what if this conventional wisdom isn't entirely correct? What if there was a way to load a deep learning model in a tiny fraction of a second? It might be possible to run model inference in production with a much simpler architecture.Let's see how fast we can make model loading go. Background: BERT For the examples in this article, we'll use the [BERT](https://arxiv.org/abs/1810.04805) masked language model. BERT belongs to a group of general-purpose models that capture the nuances of human language in a (relatively) compact format. You can use these models to do many different natural language processing (NLP) tasks, ranging from document classification to machine translation. However, to do any task with high accuracy, you need to start with a model trained on your target language and [*fine-tune* the model](https://towardsdatascience.com/fine-tuning-a-bert-model-with-transformers-c8e49c4e008b) for the task.Tuning a BERT model for a task effectively creates a new model. If your application needs to perform three tasks in three different languages, you'll need *nine* copies of BERT --- one for each combination of language and task. This proliferation of models creates headaches in production. Being able to load and unload different BERT-based model really fast would save a lot of trouble.Let's start by loading up a BERT model in the most straightforward way. Loading a BERT ModelThe [transformers library](https://github.com/huggingface/transformers) from [Huggingface](https://huggingface.co/) provides convenient ways to load different variants of BERT. The code snippet that follows shows how to load `bert-base-uncased`, a medium-sized model with about 420 MB of parameters.
###Code
bert = transformers.BertModel.from_pretrained("bert-base-uncased")
###Output
_____no_output_____
###Markdown
The `transformers.BertModel.from_pretrained()` method follows PyTorch's [recommended practice](https://pytorch.org/tutorials/beginner/saving_loading_models.htmlsave-load-state-dict-recommended) for loading models: First, construct an instance of your model, which should be a subclass of `torch.nn.Module`. Then use `torch.load()` to load a PyTorch *state dictionary* of model weights. Finally, call your model's `load_state_dict()` method to copy the model weights from the state dictionary into your model's `torch.Tensor` objects.This method takes about 1.4 seconds to load BERT on my laptop, provided that the model is on local disk. That's fairly impressive for a model that's over 400MB in size, but it's still a long time. For comparison, running inference with this model only takes a fraction of a second.The main reason this method is so slow is that it is optimized for reading models in a portable way over a slow network connection. It copies the model's parameters several times while building the state dictionary, then it copies them some more while installing the weights into the model's Python object.PyTorch has an [alternate model loading method](https://pytorch.org/tutorials/beginner/saving_loading_models.htmlsave-load-entire-model) that gives up some compatibility but only copies model weights once. Here's what the code to load BERT with that method looks like:
###Code
# Serialize the model we loaded in the previous code listing.
torch.save(bert, "outputs/bert.pt")
# Load the model back in.
bert_2 = torch.load("outputs/bert.pt")
###Output
_____no_output_____
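###Markdown
For reference, the three-step state-dict recipe described above looks roughly like the sketch below. The cell is for illustration only and is not part of the timing comparisons; the state-dict file name is made up for the example.
###Code
# Illustration of the state-dict recipe described above (not used elsewhere in
# this notebook; the file name is made up for the example).
torch.save(bert.state_dict(), "outputs/bert_state_dict.pt")
# Step 1: construct an instance of the model.
bert_from_state_dict = transformers.BertModel(bert.config)
# Step 2: load a state dictionary of model weights with torch.load().
state_dict = torch.load("outputs/bert_state_dict.pt")
# Step 3: copy the weights from the state dictionary into the model's tensors.
bert_from_state_dict.load_state_dict(state_dict)
###Output
_____no_output_____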
###Markdown
Loading the entire model object with `torch.load()` takes about 0.125 seconds on the same machine. That's 11 times faster than `from_pretrained()`. If dropping the number of copies to 1 makes model loading that much faster, imagine what would happen if we dropped the number of copies to zero! Is it possible to do that? Zero-Copy Model Loading It turns out that we can indeed load PyTorch models while copying weights zero times. We can achieve this goal by leveraging some features of PyTorch and Ray. First, some background on [Ray](https://ray.io). Ray is an open source system for building high-performance distributed applications. One of Ray's unique features is its main-memory object store, [Plasma](https://docs.ray.io/en/master/serialization.html#plasma-store). Plasma uses shared memory to pass objects between processes on each machine in a Ray cluster. Ray uses Plasma's shared memory model to implement zero-copy transfer of [NumPy](https://numpy.org/) arrays. If a Ray [task](https://docs.ray.io/en/master/walkthrough.html#remote-functions-tasks) needs to read a NumPy array from Plasma, the task can access the array's data directly out of shared memory without copying any data into its local heap. So if we store the weights of a model as NumPy arrays on Plasma, we can access those weights directly out of Plasma's shared memory segments, without making any copies. But we still need to connect those weights to the rest of the PyTorch model, which requires them to be wrapped in PyTorch `Tensor` objects. The standard method of creating a `Tensor` involves copying the contents of the tensor, but PyTorch also has an alternate code path for initializing `Tensor`s *without* performing a copy. You can access this code path by passing your NumPy array to `torch.as_tensor()` instead of using `Tensor.__new__()`. With all of this background information in mind, here's a high-level overview of how to do zero-copy model loading from Plasma. First, you need to load the model into the Plasma object store, which is a three-step process: 1. Load the model from disk. 2. Separate the original PyTorch model into its weights and its graph of operations, and convert the weights to NumPy arrays. 3. Upload the NumPy arrays and the model (minus weights) to Plasma. Once the model and its weights are in object storage, it becomes possible to do a zero-copy load of the model. Here are the steps to follow: 1. Deserialize the model (minus weights) from Plasma. 2. Extract the weights from Plasma (without copying any data). 3. Wrap the weights in PyTorch `Tensor`s (without copying any data). 4. Install the weight tensors back in the reconstructed model (without copying any data). If a copy of the model is in the local machine's Plasma shared memory segment, these steps will load BERT in **0.004 seconds**. That's **340 times faster** than loading the model with `BertModel.from_pretrained()`! This loading time is an order of magnitude less than the time it takes to run one inference request on this model with a general-purpose CPU. That means that you can load the model *on demand* with almost no performance penalty. There's no need to spin up a dedicated model serving platform or a Ray [actor pool](https://docs.ray.io/en/master/actors.html#actor-pool), tying up resources for models that aren't currently running inference. The Details Let's break down how to implement each of the steps for zero-copy model loading, starting with getting the model onto Plasma in an appropriate format. We've already covered how to load a PyTorch model from disk.
The next step after that initial loading is to separate the model into its weights and its graph of operations, converting the weights to NumPy arrays. Here's a Python function that will do all those things.
###Code
def extract_tensors(m: torch.nn.Module) -> Tuple[torch.nn.Module, List[Dict]]:
"""
Remove the tensors from a PyTorch model, convert them to NumPy
arrays, and return the stripped model and tensors.
"""
tensors = []
for _, module in m.named_modules():
# Store the tensors in Python dictionaries
params = {
name: torch.clone(param).detach().numpy()
for name, param in module.named_parameters(recurse=False)
}
buffers = {
name: torch.clone(buf).detach().numpy()
for name, buf in module.named_buffers(recurse=False)
}
tensors.append({"params": params, "buffers": buffers})
# Make a copy of the original model and strip all tensors and
# temporary buffers out of the copy.
m_copy = copy.deepcopy(m)
for _, module in m_copy.named_modules():
for name in (
[name for name, _ in module.named_parameters(recurse=False)]
+ [name for name, _ in module.named_buffers(recurse=False)]):
setattr(module, name, None)
# Make sure the copy is configured for inference.
m_copy.train(False)
return m_copy, tensors
###Output
_____no_output_____
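###Markdown
Before walking through `extract_tensors()` in detail, here's a quick illustration (not from the original post) of the Plasma behavior described earlier: a NumPy array retrieved with `ray.get()` comes back as a read-only view onto shared memory rather than a fresh copy.
###Code
# Quick illustration (not from the original post): NumPy arrays are returned from
# Plasma as zero-copy, read-only views onto shared memory.
demo_ref = ray.put(np.arange(4, dtype=np.float64))
demo_view = ray.get(demo_ref)
print(demo_view, "writeable:", demo_view.flags.writeable)  # expect writeable: False
###Output
_____no_output_____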
###Markdown
Most PyTorch models are built on top of the PyTorch class `torch.nn.Module`. The model is a graph of Python objects, and every object is a subclass of `Module`. The `Module` class provides two places to store model weights: *parameters* for weights that are trained by gradient descent, and *buffers* for weights that are trained in other ways. The first loop in `extract_tensors()` iterates over the components of the model, pulls out the parameters and buffers, and converts their values to NumPy arrays. The second half of the function then creates a copy of the model, removes all the weights from the copy, and finally returns the copy and the converted weight tensors as a Python tuple. We can pass the return value from this function directly to `ray.put()` to upload the model and its weights onto Plasma. Here's what the upload operation looks like.
###Code
bert_ref = ray.put(extract_tensors(bert))
###Output
_____no_output_____
###Markdown
The variable `bert_ref` here is a Ray object reference. We can retrieve the model and weights by passing this object reference to `ray.get()`, as in the following listing.
###Code
bert_skeleton, bert_weights = ray.get(bert_ref)
###Output
_____no_output_____
###Markdown
If the object that `bert_ref` points to isn't available on the current node of your Ray cluster, the first attempt to read the model will block while Ray [downloads the object to the node's local shared memory segment](https://github.com/ray-project/ray/blob/c1b9f921a614a0927013ff0daeb6e130aaebb473/src/ray/core_worker/store_provider/plasma_store_provider.ccL274). Subsequent calls to `ray.get(bert_ref)` will return the local copy immediately.Now we need to convert `bert_weights` from NumPy arrays to `torch.Tensor` objects and attach them to the model in `bert_skeleton`, all without performing any additional copies. Here is a Python function that does those steps.
###Code
def replace_tensors(m: torch.nn.Module, tensors: List[Dict]):
"""
Restore the tensors that extract_tensors() stripped out of a
PyTorch model.
    """
with torch.inference_mode():
modules = [module for _, module in m.named_modules()]
for module, tensor_dict in zip(modules, tensors):
# There are separate APIs to set parameters and buffers.
for name, array in tensor_dict["params"].items():
module.register_parameter(
name, torch.nn.Parameter(torch.as_tensor(array)))
for name, array in tensor_dict["buffers"].items():
module.register_buffer(name, torch.as_tensor(array))
###Output
_____no_output_____
###Markdown
This function does roughly the same thing as PyTorch's `load_state_dict()` function, except that it avoids copying tensors. The `replace_tensors()` function modifies the reconstituted model in place. After calling `replace_tensors()`, we can run the model, producing the same results as the original copy of the model. Here's some code that shows running a BERT model after loading its weights with `replace_tensors()`.
###Code
# Load tensors into the model's graph of Python objects
replace_tensors(bert_skeleton, bert_weights)
# Preprocess an example input string for BERT.
test_text = "All work and no play makes Jack a dull boy."
tokenizer = transformers.BertTokenizerFast.from_pretrained(
"bert-base-uncased")
test_tokens = tokenizer(test_text, return_tensors="pt")
# Run the original model and the copy that we just loaded
with torch.inference_mode():
print("Original model's output:")
print(bert(**test_tokens).last_hidden_state)
print("\nModel output after zero-copy model loading:")
print(bert_skeleton(**test_tokens).last_hidden_state)
###Output
<ipython-input-8-b7f49ff7af67>:17: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:180.)
module.register_buffer(name, torch.as_tensor(array))
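###Markdown
The warning above comes from wrapping Plasma's read-only arrays with `torch.as_tensor()`. The next cell is a small standalone check (not from the original post) showing that `torch.as_tensor()` shares the NumPy array's buffer instead of copying it, which is also why writes would flow through to shared memory, as the caveats below explain.
###Code
# Standalone check (not from the original post): torch.as_tensor() wraps the
# NumPy buffer without copying, so changes to the array show up in the tensor.
arr = np.ones(3, dtype=np.float32)
t = torch.as_tensor(arr)
arr[0] = 42.0
print(t)                                  # the first element is now 42
print("shares memory:", np.shares_memory(arr, t.numpy()))
###Output
_____no_output_____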
###Markdown
Caveats The first time you call the `replace_tensors()` function, PyTorch will print out a warning:```UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. [...]```Most PyTorch models don't modify their own weights during inference, but PyTorch doesn't prevent models from doing so. If you load your weights via the zero-copy method and your model modifies a weights tensor, it will change the copy of those weights in Plasma's shared memory. Ray (as of version 1.4) [always opens shared memory segments in read-write mode](https://github.com/ray-project/plasma/blob/7d6acc7af2878fc932ec5314cbcda0e79a9d6a4b/src/plasma_client.c#L111). If you're sure that your model does not modify its own weights during inference, you can safely ignore this warning. You can test for these modifications by comparing your model's weights before and after inference. If your model does modify some of its weights, it's important to copy the relevant tensors prior to running inference. Another thing to note is that this method loads the model for CPU-based inference. To use GPU acceleration, you will need to copy the model's weights once to load them onto GPU memory. This copy operation takes about 0.07 seconds, which is still three times faster than the second-fastest way to load the model onto a GPU. Conclusion I hope you've enjoyed this introduction to zero-copy model loading with Ray and PyTorch. Being able to load models in milliseconds opens up some interesting architectural possibilities for high-performance model inference. We're planning to cover some of those options in a later post. In the meantime, take a look at [project CodeFlare](https://www.research.ibm.com/blog/codeflare-ml-experiments?cm_mmc=text_extensions_for_pandas) to find out more about IBM Research's ongoing open source work around Ray, or try running Ray yourself with [IBM Cloud Code Engine](https://www.ibm.com/cloud/code-engine?cm_mmc=text_extensions_for_pandas). (This part not in blog) Source code for timing measurements, including a faster variant of replace_tensors() that can skip wrapping weights in Parameter objects
###Code
# Timing measurements
# Don't include this cell in the blog
# A version of replace_tensors() that optionally allows a slightly
# faster but slightly dangerous shortcut when loading Parameters.
def replace_tensors_direct(m: torch.nn.Module, tensors: List[Dict]):
"""
Restore the tensors that extract_tensors() stripped out of a
PyTorch model.
"""
with torch.inference_mode():
modules = [module for _, module in m.named_modules()]
for module, tensor_dict in zip(modules, tensors):
# There are separate APIs to set parameters and buffers.
for name, array in tensor_dict["params"].items():
# Super fast, somewhat risky version avoids
# wrapping parameters in Parameters objects.
module._parameters[name] = torch.as_tensor(array)
for name, array in tensor_dict["buffers"].items():
module.register_buffer(name, torch.as_tensor(array))
def restore_from_plasma(model_and_tensors_ref):
model, tensors = ray.get(model_and_tensors_ref)
replace_tensors(model, tensors)
return model
def restore_from_plasma_direct(model_and_tensors_ref):
model, tensors = ray.get(model_and_tensors_ref)
replace_tensors_direct(model, tensors)
return model
bert_model_name = "bert-base-uncased"
# Begin comparison
print("MODEL LOADING TIMINGS:\n")
# Baseline: Load via the official API
print("Loading via official API:")
bert_times = %timeit -o -r 100 transformers.BertModel.from_pretrained(bert_model_name)
bert = transformers.BertModel.from_pretrained(bert_model_name)
# Baseline 2: torch.load()
print("Loading with torch.load():")
bert = transformers.BertModel.from_pretrained(bert_model_name)
bert_file = "outputs/bert.pt"
torch.save(bert, bert_file)
bert_2_times = %timeit -o -r 100 torch.load(bert_file)
bert_2 = torch.load(bert_file)
# Baseline 3: ray.get()
print("Loading with ray.get():")
bert_ref = ray.put(bert)
# Ray.put() actually returns before things have completely settled down.
time.sleep(1)
bert_3_times = %timeit -o -r 100 ray.get(bert_ref)
bert_3 = ray.get(bert_ref)
# The main event: Zero-copy load
bert_4_ref = ray.put(extract_tensors(bert))
# Ray.put() returns before things have completely settled down.
time.sleep(1)
print("Zero-copy load, using official APIs")
bert_4_times = %timeit -o -r 100 restore_from_plasma(bert_4_ref)
bert_4 = restore_from_plasma(bert_4_ref)
print("Zero-copy load, bypassing Parameter class")
bert_5_times = %timeit -o -r 100 restore_from_plasma_direct(bert_4_ref)
bert_5 = restore_from_plasma_direct(bert_4_ref)
# Test with CUDA if available
if torch.cuda.is_available():
def restore_from_plasma_to_cuda(model_and_tensors_ref):
model, tensors = ray.get(model_and_tensors_ref)
replace_tensors(model, tensors)
model.cuda()
return model
bert = transformers.BertModel.from_pretrained(bert_model_name)
torch.save(bert, bert_file)
print("Loading with torch.load() to CUDA")
bert_2_cuda_times = %timeit -o -r 100 torch.load(bert_file).cuda()
print("Zero-copy load to CUDA")
bert_4_cuda_times = %timeit -o -r 100 restore_from_plasma_to_cuda(bert_4_ref)
# Don't include this cell in the blog.
# Number crunching for performance graph
def stats_to_triple(timeit_output, name: str) -> Dict:
"""
Extract out 5%-95% range and mean stats from the output of %timeit
:param timeit_output: Object returned by %timeit -o
:param name: Name for the run that produced the performance numbers
:returns: Dictionary with keys "name", "5_percentile", "95_percentile",
and "mean", suitable for populating one row of a DataFrame.
"""
times = np.array(timeit_output.all_runs) / timeit_output.loops
    return {
        "name": name,
        "5_percentile": np.percentile(times, 5),
        "95_percentile": np.percentile(times, 95),
        "mean": np.mean(times)
    }
name_to_run = {
"from_pretrained()": bert_times,
"torch.load()": bert_2_times,
"ray.get()": bert_3_times,
"zero_copy": bert_4_times,
"zero_copy_hack": bert_5_times,
}
if torch.cuda.is_available():
name_to_run["torch.load() CUDA"] = bert_2_cuda_times
name_to_run["zero_copy CUDA"] = bert_4_cuda_times
records = [
stats_to_triple(times, name) for name, times in name_to_run.items()
]
timings = pd.DataFrame.from_records(records)
timings
###Output
_____no_output_____
###Markdown
(This part not in blog) Measure how long inference takes
###Code
# Don't include this cell in the blog.
# Inference timings
# Redo tokenization to make this cell self-contained
test_text = "All work and no play makes Jack a dull boy."
tokenizer = transformers.BertTokenizerFast.from_pretrained(
"bert-base-uncased")
test_tokens = tokenizer(test_text, return_tensors="pt")
# Common code to run inference
def run_bert(b, t):
with torch.no_grad():
return b(**t).last_hidden_state
print("LOCAL INFERENCE TIMINGS:\n")
with torch.inference_mode():
# Reload from scratch each time to be sure we aren't using stale values
print("Original model, no CUDA:")
bert = transformers.BertModel.from_pretrained("bert-base-uncased")
%timeit run_bert(bert, test_tokens)
print("Zero-copy model loading, no CUDA:")
bert = transformers.BertModel.from_pretrained("bert-base-uncased")
bert_ref = ray.put(extract_tensors(bert))
bert_skeleton, bert_weights = ray.get(bert_ref)
replace_tensors(bert_skeleton, bert_weights)
%timeit run_bert(bert_skeleton, test_tokens)
if torch.cuda.is_available():
def run_bert_cuda(b, t):
# Inputs need to be on GPU if model is on GPU
t = {k: v.to("cuda") for k, v in t.items()}
with torch.no_grad():
return b(**t).last_hidden_state
print("Original model, CUDA:")
bert = transformers.BertModel.from_pretrained("bert-base-uncased")
bert.cuda()
%timeit run_bert_cuda(bert, test_tokens)
print("Zero-copy model loading, CUDA:")
bert = transformers.BertModel.from_pretrained("bert-base-uncased")
bert_ref = ray.put(extract_tensors(bert))
bert_skeleton, bert_weights = ray.get(bert_ref)
replace_tensors(bert_skeleton, bert_weights)
bert_skeleton.cuda()
%timeit run_bert_cuda(bert_skeleton, test_tokens)
###Output
LOCAL INFERENCE TIMINGS:
Original model, no CUDA:
73 ms ± 417 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Zero-copy model loading, no CUDA:
73.1 ms ± 420 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
(This part not in blog) Measure how long inference takes via a Ray task
###Code
reboot_ray()
# Don't include this cell in the blog.
# Inference timings in remote process
bert = transformers.BertModel.from_pretrained("bert-base-uncased")
bert_ref = ray.put(extract_tensors(bert))
@ray.remote
def run_bert_zero_copy(tokens):
bert_skeleton, bert_weights = ray.get(bert_ref)
replace_tensors(bert_skeleton, bert_weights)
with torch.no_grad():
return bert_skeleton(**tokens).last_hidden_state.detach().numpy()
@ray.remote
def run_bert_zero_copy_cuda(tokens):
bert_skeleton, bert_weights = ray.get(bert_ref)
replace_tensors(bert_skeleton, bert_weights)
bert_skeleton.cuda()
# Inputs also need to be on the GPU
tokens = {k: v.to("cuda") for k, v in tokens.items()}
with torch.no_grad():
return bert_skeleton(**tokens).last_hidden_state.detach().numpy()
@ray.remote
class BertActor:
def __init__(self):
import transformers
transformers.logging.set_verbosity_error()
self._bert = transformers.BertModel.from_pretrained("bert-base-uncased")
self._bert.train(False)
def run_bert(self, tokens):
with torch.no_grad():
return self._bert(**tokens).last_hidden_state.detach().numpy()
@ray.remote
class BertActorCuda:
def __init__(self):
import transformers
transformers.logging.set_verbosity_error()
self._bert = transformers.BertModel.from_pretrained("bert-base-uncased").cuda()
self._bert.train(False)
def run_bert(self, tokens):
with torch.no_grad():
tokens = {k: v.to("cuda") for k, v in tokens.items()}
return self._bert(**tokens).last_hidden_state.detach().numpy()
# Redo tokenization to make this cell self-contained
test_text = "All work and no play makes Jack a dull boy."
tokenizer = transformers.BertTokenizerFast.from_pretrained(
"bert-base-uncased")
test_tokens = tokenizer(test_text, return_tensors="pt")
print("REMOTE INFERENCE TIMINGS:\n")
print("Actor, no CUDA:")
actor = BertActor.remote()
%timeit -o -r 100 ray.get(actor.run_bert.remote(test_tokens))
del(actor)
print("Zero-copy, no CUDA:")
%timeit -o -r 100 ray.get(run_bert_zero_copy.remote(test_tokens))
if torch.cuda.is_available():
print("Actor, with CUDA:")
actor = BertActorCuda.remote()
%timeit -o -r 100 ray.get(actor.run_bert.remote(test_tokens))
del(actor)
print("Zero-copy, with CUDA:")
%timeit -o -r 100 ray.get(run_bert_zero_copy_cuda.remote(test_tokens))
###Output
REMOTE INFERENCE TIMINGS:
Actor, no CUDA:
77.3 ms ± 2.71 ms per loop (mean ± std. dev. of 100 runs, 1 loop each)
Zero-copy, no CUDA:
###Markdown
(this part not in blog) Experiments on ResNet50
###Code
# Download and cache the ResNet model
resnet_model_name = "resnet50"
resnet_func = torchvision.models.resnet50
resnet_file = "outputs/resnet.pth"
# See https://pytorch.org/vision/0.8/_modules/torchvision/models/resnet.html
resnet_url = "https://download.pytorch.org/models/resnet50-0676ba61.pth"
# resnet_url = "https://download.pytorch.org/models/resnet152-b121ed2d.pth"
if not os.path.exists(resnet_file):
os.system(f"wget -O {resnet_file} {resnet_url}")
# Baseline method: Instantiate the model and call load_state_dict()
%timeit resnet_func(pretrained=True)
resnet = resnet_func(pretrained=True)
# Baseline 2: torch.load()
torch.save(resnet, "outputs/resnet.torch")
%timeit torch.load("outputs/resnet.torch")
resnet_2 = torch.load("outputs/resnet.torch")
# Baseline 3: ray.get()
resnet_ref = ray.put(resnet)
# Ray.put() actually returns before things have completely settled down.
time.sleep(1)
%timeit ray.get(resnet_ref)
resnet_3 = ray.get(resnet_ref)
resnet_ref = ray.put(extract_tensors(resnet))
# Ray.put() actually returns before things have completely settled down.
time.sleep(1)
%timeit -r 20 restore_from_plasma(resnet_ref)
resnet_4 = restore_from_plasma(resnet_ref)
# Compare parameters to verify that ResNet was loaded properly.
params_1 = list(resnet.parameters())
params_2 = list(resnet_2.parameters())
params_3 = list(resnet_3.parameters())
params_4 = list(resnet_4.parameters())
def compare_lists(l1, l2):
different_indices = []
for i in range(len(l1)):
if not torch.equal(l1[i], l2[i]):
different_indices.append(i)
return different_indices
print(f"1 vs 2: {compare_lists(params_1, params_2)}")
print(f"1 vs 3: {compare_lists(params_1, params_3)}")
print(f"1 vs 4: {compare_lists(params_1, params_4)}")
# Compare buffers to verify that ResNet was loaded properly.
bufs_1 = list(resnet.buffers())
bufs_2 = list(resnet_2.buffers())
bufs_3 = list(resnet_3.buffers())
bufs_4 = list(resnet_4.buffers())
print(f"1 vs 2: {compare_lists(bufs_1, bufs_2)}")
print(f"1 vs 3: {compare_lists(bufs_1, bufs_3)}")
print(f"1 vs 4: {compare_lists(bufs_1, bufs_4)}")
url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "outputs/dog.jpg")
try:
urllib.URLopener().retrieve(url, filename)
except BaseException:
# Different versions of urllib have different APIs
urllib.request.urlretrieve(url, filename)
def run_image_through_resnet(model):
from PIL import Image
from torchvision import transforms
input_image = Image.open(filename)
preprocess = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
# create a mini-batch as expected by the model
input_batch = input_tensor.unsqueeze(0)
# Make sure the model is not in training mode.
model.eval()
# move the input and model to GPU for speed if available
if torch.cuda.is_available():
input_batch = input_batch.to('cuda')
model.to('cuda')
with torch.inference_mode():
output = model(input_batch)
# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
# The output has unnormalized scores. To get probabilities, you can run a
# softmax on it.
probabilities = torch.nn.functional.softmax(output[0], dim=0)
return(probabilities)
# Make sure the models still run
before_sec = time.time()
result = run_image_through_resnet(resnet)[0:10]
print(f"{1000 * (time.time() - before_sec):1.2f} msec elapsed")
result
before_sec = time.time()
result = run_image_through_resnet(resnet_2)[0:10]
print(f"{1000 * (time.time() - before_sec):1.2f} msec elapsed")
result
before_sec = time.time()
result = run_image_through_resnet(resnet_3)[0:10]
print(f"{1000 * (time.time() - before_sec):1.2f} msec elapsed")
result
before_sec = time.time()
result = run_image_through_resnet(resnet_4)[0:10]
print(f"{1000 * (time.time() - before_sec):1.2f} msec elapsed")
result
###Output
163.32 msec elapsed
|
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/RECAP_DS/03_DATA_WRANGLING_AND_VISUALISATION/L02.ipynb | ###Markdown
DS104 Data Wrangling and Visualization : Lesson Two Companion Notebook Table of Contents * [Table of Contents](DS104L2_toc) * [Page 1 - Introduction](DS104L2_page_1) * [Page 2 - Data Transposition: See you on the Flip Side!](DS104L2_page_2) * [Page 3 - Energy Practice Hands-On](DS104L2_page_3) * [Page 4 - Energy Activity Solution R](DS104L2_page_4) * [Page 5 - Transposing Data in Python](DS104L2_page_5) * [Page 6 - Energy Activity Python](DS104L2_page_6) * [Page 7 - Energy Activity Solution Python](DS104L2_page_7) * [Page 8 - Transposing Data in Spreadsheets](DS104L2_page_8) * [Page 9 - Combining Datasets Together](DS104L2_page_9) * [Page 10 - Joining Datasets in R](DS104L2_page_10) * [Page 11 - Appending in R](DS104L2_page_11) * [Page 12 - Appending Activity in R](DS104L2_page_12) * [Page 13 - Appending Activity Solution in R](DS104L2_page_13) * [Page 14 - Merging Datasets in Python](DS104L2_page_14) * [Page 15 - Appending in Python](DS104L2_page_15) * [Page 16 - Combining in Python Activity](DS104L2_page_16) * [Page 17 - Combining in Python Activity Solution](DS104L2_page_17) * [Page 18 - Aggregating Data in R](DS104L2_page_18) * [Page 19 - Aggregating Data in Python](DS104L2_page_19) * [Page 20 - Pivot Tables](DS104L2_page_20) * [Page 21 - Key Terms](DS104L2_page_21) * [Page 22 - Data Transformation Hands-On](DS104L2_page_22) Page 1 - Introduction[Back to Top](DS104L2_toc)
###Code
from IPython.display import VimeoVideo
# Tutorial Video Name: Data Transformations
VimeoVideo('241240224', width=720, height=480)
###Output
_____no_output_____ |
Cifar10_image_classification.ipynb | ###Markdown
###Code
# CNN model for the CIFAR-10 Dataset
#import packages
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.constraints import maxnorm
from keras.optimizers import SGD
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
import numpy as np
import matplotlib.pyplot as plt
# load data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# normalize inputs from 0-255 to 0.0-1.0
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(32, 32, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D())
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D())
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(1024, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
epochs = 10
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.summary()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
classes_cifar = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']
plt.subplot(3,2,1)
plt.imshow(X_test[9998])
plt.title(classes_cifar[np.argmax(model.predict(X_test[9998:9999])[0], axis=0)])
plt.subplot(3,2,2)
plt.imshow(X_test[9])
plt.title(classes_cifar[np.argmax(model.predict(X_test[9:10])[0], axis=0)])
plt.subplot(3,2,5)
plt.imshow(X_test[25])
plt.title(classes_cifar[np.argmax(model.predict(X_test[25:26])[0], axis=0)])
plt.subplot(3,2,6)
plt.imshow(X_test[106])
plt.title(classes_cifar[np.argmax(model.predict(X_test[106:107])[0], axis=0)])
print(model.predict(X_test[9:10]))  # predicted class probabilities for one test image
###Output
_____no_output_____ |
examples/workshops/YouthMappers_2021.ipynb | ###Markdown
[](https://gishub.org/ym-colab)[](https://gishub.org/ym-binder)[](https://gishub.org/ym-binder-nb) **Interactive Mapping and Geospatial Analysis with Leafmap and Jupyter**This notebook was developed for the 90-min [leafmap workshop](https://www.eventbrite.com/e/interactive-mapping-and-geospatial-analysis-tickets-188600217327?keep_tld=1) taking place on November 9, 2021. The workshop is hosted by [YouthMappers](https://www.youthmappers.org).- Author: [Qiusheng Wu](https://github.com/giswqs)- Slides: https://gishub.org/ym - Streamlit web app: https://streamlit.gishub.orgLaunch this notebook to execute code interactively using: - Google Colab: https://gishub.org/ym-colab- Pangeo Binder JupyterLab: https://gishub.org/ym-binder- Pangeo Binder Jupyter Notebook: https://gishub.org/ym-binder-nb Introduction Workshop description[Leafmap](https://leafmap.org) is a Python package for interactive mapping and geospatial analysis with minimal coding in a Jupyter environment. It is built upon a number of open-source packages, such as [folium](https://github.com/python-visualization/folium) and [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) (for creating interactive maps), [WhiteboxTools](https://github.com/jblindsay/whitebox-tools) and [whiteboxgui](https://github.com/giswqs/whiteboxgui) (for analyzing geospatial data), and [ipywidgets](https://github.com/jupyter-widgets/ipywidgets) (for designing interactive graphical user interface). The WhiteboxTools library currently contains 480+ tools for advanced geospatial analysis. Leafmap provides many convenient functions for loading and visualizing geospatial data with only one line of code. Users can also use the interactive user interface to load geospatial data without coding. Anyone with a web browser and Internet connection can use leafmap to perform geospatial analysis and data visualization in the cloud with minimal coding. The topics that will be covered in this workshop include: - A brief introduction to Jupyter and Colab- A brief introduction to leafmap and relevant web resources - Creating interactive maps using multiple plotting backends- Searching and loading basemaps- Loading and visualizing vector/raster data- Using Cloud Optimized GeoTIFF (COG) and SpatialTemporal Asset Catalog (STAC)- Downloading OpenStreetMap data- Loading data from a PostGIS database- Creating custom legends and colorbars- Creating split-panel maps and linked maps- Visualizing Planet global monthly/quarterly mosaic- Designing and publishing interactive web apps- Performing geospatial analysis (e.g., hydrological analysis) using whiteboxguiThis workshop is intended for scientific programmers, data scientists, geospatial analysts, and concerned citizens of Earth. The attendees are expected to have a basic understanding of Python and the Jupyter ecosystem. Familiarity with Earth science and geospatial datasets is useful but not required. More information about leafmap can be found at https://leafmap.org. 
Jupyter keyboard shortcuts- Shift+Enter: run cell, select below- Ctrl+Enter: : run selected cells- Alt+Enter: run cell and insert below- Tab: code completion or indent- Shift+Tab: tooltip- Ctrl+/: comment out code Set up environment Required Python packages:* [leafmap](https://github.com/giswqs/leafmap) - A Python package for interactive mapping and geospatial analysis with minimal coding in a Jupyter environment.* [keplergl](https://docs.kepler.gl/docs/keplergl-jupyter) - A high-performance web-based application for visual exploration of large-scale geolocation data sets.* [pydeck](https://deckgl.readthedocs.io/en/latest) - High-scale spatial rendering in Python, powered by deck.gl.* [geopandas](https://geopandas.org) - An open source project to make working with geospatial data in python easier. * [xarray-leaflet](https://github.com/davidbrochart/xarray_leaflet) - An xarray extension for tiled map plotting. Use Google ColabClick the button below to open this notebook in Google Colab and execute code interactively.[](https://githubtocolab.com/giswqs/leafmap/blob/master/examples/workshops/YouthMappers_2021.ipynb)
###Code
import os
import subprocess
import sys
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
A function for installing Python packages.
###Code
def install(package):
subprocess.check_call([sys.executable, "-m", "pip", "install", package])
###Output
_____no_output_____
###Markdown
Install required Python packages in Google Colab.
###Code
pkgs = [
'leafmap',
'geopandas',
'keplergl',
'pydeck',
'xarray_leaflet',
'osmnx',
'pygeos',
'imageio',
'tifffile',
]
if "google.colab" in sys.modules:
for pkg in pkgs:
install(pkg)
###Output
_____no_output_____
###Markdown
Use Pangeo BinderClick the buttons below to open this notebook in JupyterLab (first button) or Jupyter Notebook (second button) and execute code interactively.[](https://gishub.org/ym-binder)[](https://gishub.org/ym-binder-nb)- JupyterLab: https://gishub.org/ym-binder- Jupyter Notebook: https://gishub.org/ym-binder-nb Use Miniconda/AnacondaIf you have[Anaconda](https://www.anaconda.com/distribution/download-section) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html) installed on your computer, you can install leafmap using the following commands. Leafmap has an optional dependency - [geopandas](https://geopandas.org), which can be challenging to install on some computers, especially Windows. It is highly recommended that you create a fresh conda environment to install geopandas and leafmap. Follow the commands below to set up a conda env and install geopandas, leafmap, pydeck, keplergl, and xarray_leaflet. ```conda create -n geo python=3.8conda activate geoconda install geopandasconda install mamba -c conda-forgemamba install leafmap keplergl pydeck xarray_leaflet -c conda-forgemamba install osmnx pygeos imageio tifffile -c conda-forgejupyter lab```
###Code
try:
import leafmap
except ImportError:
install('leafmap')
###Output
_____no_output_____
###Markdown
Create an interactive map`Leafmap` has five plotting backends: [folium](https://github.com/python-visualization/folium), [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet), [here-map](https://github.com/heremaps/here-map-widget-for-jupyter), [kepler.gl](https://docs.kepler.gl/docs/keplergl-jupyter), and [pydeck](https://deckgl.readthedocs.io). Note that the backends do not offer equal functionality. Some interactive functionality in `ipyleaflet` might not be available in other plotting backends. To use a specific plotting backend, use one of the following:- `import leafmap.leafmap as leafmap`- `import leafmap.foliumap as leafmap`- `import leafmap.heremap as leafmap`- `import leafmap.kepler as leafmap`- `import leafmap.deck as leafmap` Use ipyleaflet
###Code
import leafmap
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Use folium
###Code
import leafmap.foliumap as leafmap
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Use kepler.gl
###Code
import leafmap.kepler as leafmap
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
If you encounter an error saying `Error displaying widget: model not found` when trying to display the map, you can use `m.static_map()` as a workaround until this [kepler.gl bug](https://github.com/keplergl/kepler.gl/issues/1165) has been resolved.
###Code
# m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Use pydeck
###Code
import leafmap.deck as leafmap
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Customize the default map Specify map center and zoom level
###Code
import leafmap
m = leafmap.Map(center=(40, -100), zoom=4) # center=[lat, lon]
m
m = leafmap.Map(center=(51.5, -0.15), zoom=17)
m
###Output
_____no_output_____
###Markdown
Change map size
###Code
m = leafmap.Map(height="400px", width="800px")
m
###Output
_____no_output_____
###Markdown
Set control visibilityWhen creating a map, set the following controls to either `True` or `False` as appropriate.* attribution_control* draw_control* fullscreen_control* layers_control* measure_control* scale_control* toolbar_control
###Code
m = leafmap.Map(
draw_control=False,
measure_control=False,
fullscreen_control=False,
attribution_control=False,
)
m
###Output
_____no_output_____
###Markdown
Remove all controls from the map.
###Code
m = leafmap.Map()
m.clear_controls()
m
###Output
_____no_output_____
###Markdown
Change basemapsSpecify a Google basemap to use, can be one of ["ROADMAP", "TERRAIN", "SATELLITE", "HYBRID"].
###Code
import leafmap
m = leafmap.Map(google_map="TERRAIN")  # HYBRID, ROADMAP, SATELLITE, TERRAIN
m
###Output
_____no_output_____
###Markdown
Add a basemap using the `add_basemap()` function.
###Code
m = leafmap.Map()
m.add_basemap("Esri.NatGeoWorldMap")
m
###Output
_____no_output_____
###Markdown
Print out the list of available basemaps.
###Code
for basemap in leafmap.leafmap_basemaps:
print(basemap)
###Output
_____no_output_____
###Markdown

###Code
m = leafmap.Map()
m.add_tile_layer(
url="https://mt1.google.com/vt/lyrs=y&x={x}&y={y}&z={z}",
name="Google Satellite",
attribution="Google",
)
m
###Output
_____no_output_____
###Markdown
Add tile layers Add XYZ tile layer
###Code
import leafmap
m = leafmap.Map()
m.add_tile_layer(
url="https://mt1.google.com/vt/lyrs=y&x={x}&y={y}&z={z}",
name="Google Satellite",
attribution="Google",
)
m
###Output
_____no_output_____
###Markdown
Add WMS tile layerMore WMS basemaps can be found at the following websites:- USGS National Map: https://viewer.nationalmap.gov/services- MRLC NLCD Land Cover data: https://www.mrlc.gov/data-services-page- FWS NWI Wetlands data: https://www.fws.gov/wetlands/Data/Web-Map-Services.html
###Code
m = leafmap.Map()
naip_url = 'https://services.nationalmap.gov/arcgis/services/USGSNAIPImagery/ImageServer/WMSServer?'
m.add_wms_layer(
url=naip_url, layers='0', name='NAIP Imagery', format='image/png', shown=True
)
m
###Output
_____no_output_____
###Markdown
Add xyzservices providerAdd a layer from [xyzservices](https://github.com/geopandas/xyzservices) provider object.
###Code
import os
import xyzservices.providers as xyz
basemap = xyz.OpenTopoMap
basemap
m = leafmap.Map()
m.add_basemap(basemap)
m
###Output
_____no_output_____
###Markdown
Add COG/STAC layersA Cloud Optimized GeoTIFF (COG) is a regular GeoTIFF file, aimed at being hosted on an HTTP file server, with an internal organization that enables more efficient workflows on the cloud. It does this by leveraging the ability of clients issuing HTTP GET range requests to ask for just the parts of a file they need. More information about COG can be found at https://www.cogeo.org. Some publicly available Cloud Optimized GeoTIFFs:* https://stacindex.org/* https://cloud.google.com/storage/docs/public-datasets/landsat* https://www.digitalglobe.com/ecosystem/open-data* https://earthexplorer.usgs.gov/. For this demo, we will use data from https://www.maxar.com/open-data/california-colorado-fires for mapping California and Colorado fires. A list of COGs can be found [here](https://github.com/giswqs/leafmap/blob/master/examples/data/cog_files.txt). Add COG layer
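To make the range-request idea concrete, here is a small sketch that is not part of the workshop itself. It assumes the `requests` package is available and that the hosting server honors HTTP `Range` headers (which COG-friendly hosts normally do), and it downloads only the first 16 KB of the pre-event fire image used below:

```python
import requests

url = (
    "https://opendata.digitalglobe.com/events/california-fire-2020/"
    "pre-event/2018-02-16/pine-gulch-fire20/1030010076004E00.tif"
)
# Ask for just the first 16 KB of the GeoTIFF instead of the whole file.
resp = requests.get(url, headers={"Range": "bytes=0-16383"})
print(resp.status_code)  # 206 means the server returned partial content
print(len(resp.content), "bytes downloaded")
```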
###Code
import leafmap
m = leafmap.Map()
url = 'https://opendata.digitalglobe.com/events/california-fire-2020/pre-event/2018-02-16/pine-gulch-fire20/1030010076004E00.tif'
url2 = 'https://opendata.digitalglobe.com/events/california-fire-2020/post-event/2020-08-14/pine-gulch-fire20/10300100AAC8DD00.tif'
m.add_cog_layer(url, name="Fire (pre-event)")
m.add_cog_layer(url2, name="Fire (post-event)")
m
###Output
_____no_output_____
###Markdown
Add STAC layerThe SpatioTemporal Asset Catalog (STAC) specification provides a common language to describe a range of geospatial information, so it can more easily be indexed and discovered. A 'spatiotemporal asset' is any file that represents information about the earth captured in a certain space and time. The initial focus is primarily remotely-sensed imagery (from satellites, but also planes, drones, balloons, etc), but the core is designed to be extensible to SAR, full motion video, point clouds, hyperspectral, LiDAR and derived data like NDVI, Digital Elevation Models, mosaics, etc. More information about STAC can be found at https://stacspec.org/. Some publicly available SpatioTemporal Asset Catalog (STAC):* https://stacindex.org. For this demo, we will use STAC assets from https://stacindex.org/catalogs/spot-orthoimages-canada-2005/?t=catalogs
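Before handing a STAC item to `add_stac_layer`, it can be useful to peek at its metadata. A minimal sketch (assuming the `requests` package is available; `id` and `assets` are standard STAC item fields):

```python
import requests

item_url = (
    "https://canada-spot-ortho.s3.amazonaws.com/canada_spot_orthoimages/"
    "canada_spot5_orthoimages/S5_2007/S5_11055_6057_20070622/S5_11055_6057_20070622.json"
)
item = requests.get(item_url).json()
print(item["id"])                    # item identifier
print(list(item["assets"].keys()))   # available assets, e.g. individual bands
```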
###Code
m = leafmap.Map()
url = 'https://canada-spot-ortho.s3.amazonaws.com/canada_spot_orthoimages/canada_spot5_orthoimages/S5_2007/S5_11055_6057_20070622/S5_11055_6057_20070622.json'
m.add_stac_layer(url, bands=['B3', 'B2', 'B1'], name='False color')
m
###Output
_____no_output_____
###Markdown
Add local raster datasetsThe `add_raster` function relies on the `xarray_leaflet` package and is only available for the ipyleaflet plotting backend. Therefore, Google Colab is not supported. Note that `xarray_leaflet` does not work properly on Windows ([source](https://github.com/davidbrochart/xarray_leaflet/issues/30)).
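As a small illustration (not part of the workshop code), you can guard the `add_raster` examples with a quick environment check based on the limitations mentioned above; the Colab test reuses the `"google.colab" in sys.modules` pattern from the setup section:

```python
import platform
import sys

# add_raster needs the ipyleaflet backend and xarray_leaflet, which the text above
# notes are unavailable in Google Colab and unreliable on Windows.
can_use_add_raster = ("google.colab" not in sys.modules) and (platform.system() != "Windows")
print("add_raster supported in this environment:", can_use_add_raster)
```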
###Code
import os
import leafmap
###Output
_____no_output_____
###Markdown
Download sample raster datasets. More datasets can be downloaded from https://viewer.nationalmap.gov/basic/
###Code
out_dir = os.getcwd()
landsat = os.path.join(out_dir, 'landsat.tif')
dem = os.path.join(out_dir, 'dem.tif')
###Output
_____no_output_____
###Markdown
Download a small Landsat imagery.
###Code
if not os.path.exists(landsat):
landsat_url = 'https://drive.google.com/file/d/1EV38RjNxdwEozjc9m0FcO3LFgAoAX1Uw/view?usp=sharing'
leafmap.download_from_gdrive(landsat_url, 'landsat.tif', out_dir, unzip=False)
###Output
_____no_output_____
###Markdown
Download a small DEM dataset.
###Code
if not os.path.exists(dem):
dem_url = 'https://drive.google.com/file/d/1vRkAWQYsLWCi6vcTMk8vLxoXMFbdMFn8/view?usp=sharing'
leafmap.download_from_gdrive(dem_url, 'dem.tif', out_dir, unzip=False)
m = leafmap.Map()
###Output
_____no_output_____
###Markdown
Add local raster datasets to the mapMore colormaps can be found at https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html
###Code
m.add_raster(dem, colormap='terrain', layer_name='DEM')
m.add_raster(landsat, bands=[5, 4, 3], layer_name='Landsat')
m
###Output
_____no_output_____
###Markdown
Add legend Add built-in legend
###Code
import leafmap
###Output
_____no_output_____
###Markdown
List all available built-in legends.
###Code
legends = leafmap.builtin_legends
for legend in legends:
print(legend)
###Output
_____no_output_____
###Markdown
Add a WMS layer and built-in legend to the map.
###Code
m = leafmap.Map()
url = "https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2019_Land_Cover_L48/wms?"
m.add_wms_layer(
url,
layers="NLCD_2019_Land_Cover_L48",
name="NLCD 2019 CONUS Land Cover",
format="image/png",
transparent=True,
)
m.add_legend(builtin_legend='NLCD')
m
###Output
_____no_output_____
###Markdown
Add U.S. National Wetlands Inventory (NWI). More info at https://www.fws.gov/wetlands.
###Code
m = leafmap.Map(google_map="HYBRID")
url1 = "https://www.fws.gov/wetlands/arcgis/services/Wetlands/MapServer/WMSServer?"
m.add_wms_layer(
url1, layers="1", format='image/png', transparent=True, name="NWI Wetlands Vector"
)
url2 = "https://www.fws.gov/wetlands/arcgis/services/Wetlands_Raster/ImageServer/WMSServer?"
m.add_wms_layer(
url2, layers="0", format='image/png', transparent=True, name="NWI Wetlands Raster"
)
m.add_legend(builtin_legend="NWI")
m
###Output
_____no_output_____
###Markdown
Add custom legendThere are two ways you can add custom legends: (1) define legend labels and colors, or (2) define a legend dictionary. Define legend keys and colors.
###Code
m = leafmap.Map()
labels = ['One', 'Two', 'Three', 'Four', 'ect']
# color can be defined using either hex code or RGB (0-255, 0-255, 0-255)
colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']
# colors = [(255, 0, 0), (127, 255, 0), (127, 18, 25), (36, 70, 180), (96, 68, 123)]
m.add_legend(title='Legend', labels=labels, colors=colors)
m
###Output
_____no_output_____
###Markdown
Define a legend dictionary.
###Code
m = leafmap.Map()
url = "https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2019_Land_Cover_L48/wms?"
m.add_wms_layer(
url,
layers="NLCD_2019_Land_Cover_L48",
name="NLCD 2019 CONUS Land Cover",
format="image/png",
transparent=True,
)
legend_dict = {
'11 Open Water': '466b9f',
'12 Perennial Ice/Snow': 'd1def8',
'21 Developed, Open Space': 'dec5c5',
'22 Developed, Low Intensity': 'd99282',
'23 Developed, Medium Intensity': 'eb0000',
'24 Developed High Intensity': 'ab0000',
'31 Barren Land (Rock/Sand/Clay)': 'b3ac9f',
'41 Deciduous Forest': '68ab5f',
'42 Evergreen Forest': '1c5f2c',
'43 Mixed Forest': 'b5c58f',
'51 Dwarf Scrub': 'af963c',
'52 Shrub/Scrub': 'ccb879',
'71 Grassland/Herbaceous': 'dfdfc2',
'72 Sedge/Herbaceous': 'd1d182',
'73 Lichens': 'a3cc51',
'74 Moss': '82ba9e',
'81 Pasture/Hay': 'dcd939',
'82 Cultivated Crops': 'ab6c28',
'90 Woody Wetlands': 'b8d9eb',
'95 Emergent Herbaceous Wetlands': '6c9fb8',
}
m.add_legend(title="NLCD Land Cover Classification", legend_dict=legend_dict)
m
###Output
_____no_output_____
###Markdown
Add colormapThe colormap functionality requires the ipyleaflet plotting backend. Folium is not supported.
###Code
import leafmap
import leafmap.colormaps as cm
###Output
_____no_output_____
###Markdown
Common colormapsColor palette for DEM data.
###Code
cm.palettes.dem
###Output
_____no_output_____
###Markdown
Show the DEM palette.
###Code
cm.plot_colormap(colors=cm.palettes.dem, axis_off=True)
###Output
_____no_output_____
###Markdown
Color palette for NDVI data.
###Code
cm.palettes.ndvi
###Output
_____no_output_____
###Markdown
Show the NDVI palette.
###Code
cm.plot_colormap(colors=cm.palettes.ndvi)
###Output
_____no_output_____
###Markdown
Custom colormapsSpecify the number of classes for a palette.
###Code
cm.get_palette('terrain', n_class=8)
###Output
_____no_output_____
###Markdown
Show the terrain palette with 8 classes.
###Code
cm.plot_colormap(colors=cm.get_palette('terrain', n_class=8))
###Output
_____no_output_____
###Markdown
Create a palette with custom colors, label, and font size.
###Code
cm.plot_colormap(colors=["red", "green", "blue"], label="Temperature", font_size=12)
###Output
_____no_output_____
###Markdown
Create a discrete color palette.
###Code
cm.plot_colormap(
colors=["red", "green", "blue"], discrete=True, label="Temperature", font_size=12
)
###Output
_____no_output_____
###Markdown
Specify the width and height for the palette.
###Code
cm.plot_colormap(
'terrain',
label="Elevation",
width=8.0,
height=0.4,
orientation='horizontal',
vmin=0,
vmax=1000,
)
###Output
_____no_output_____
###Markdown
Change the orientation of the colormap to be vertical.
###Code
cm.plot_colormap(
'terrain',
label="Elevation",
width=0.4,
height=4,
orientation='vertical',
vmin=0,
vmax=1000,
)
###Output
_____no_output_____
###Markdown
Horizontal colormapAdd a horizontal colorbar to an interactive map.
###Code
m = leafmap.Map()
m.add_basemap("OpenTopoMap")
m.add_colormap(
'terrain',
label="Elevation",
width=8.0,
height=0.4,
orientation='horizontal',
vmin=0,
vmax=4000,
)
m
###Output
_____no_output_____
###Markdown
Vertical colormapAdd a vertical colorbar to an interactive map.
###Code
m = leafmap.Map()
m.add_basemap("OpenTopoMap")
m.add_colormap(
'terrain',
label="Elevation",
width=0.4,
height=4,
orientation='vertical',
vmin=0,
vmax=4000,
)
m
###Output
_____no_output_____
###Markdown
List of available colormaps
###Code
cm.plot_colormaps(width=12, height=0.4)
###Output
_____no_output_____
###Markdown
Add vector datasets Add CSVRead a CSV as a Pandas DataFrame.
###Code
import os
import leafmap
in_csv = 'https://raw.githubusercontent.com/giswqs/data/main/world/world_cities.csv'
df = leafmap.csv_to_pandas(in_csv)
df
###Output
_____no_output_____
###Markdown
Create a point layer from a CSV file containing lat/long information.
###Code
m = leafmap.Map()
m.add_xy_data(in_csv, x="longitude", y="latitude", layer_name="World Cities")
m
###Output
_____no_output_____
###Markdown
Set the output directory.
###Code
out_dir = os.getcwd()
out_shp = os.path.join(out_dir, 'world_cities.shp')
###Output
_____no_output_____
###Markdown
Convert a CSV file containing lat/long information to a shapefile.
###Code
leafmap.csv_to_shp(in_csv, out_shp)
###Output
_____no_output_____
###Markdown
Convert a CSV file containing lat/long information to a GeoJSON.
###Code
out_geojson = os.path.join(out_dir, 'world_cities.geojson')
leafmap.csv_to_geojson(in_csv, out_geojson)
###Output
_____no_output_____
###Markdown
Convert a CSV file containing lat/long information to a GeoPandas GeoDataFrame.
###Code
gdf = leafmap.csv_to_gdf(in_csv)
gdf
###Output
_____no_output_____
###Markdown
Add GeoJSONAdd a GeoJSON to the map.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
in_geojson = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/cable-geo.geojson'
m.add_geojson(in_geojson, layer_name="Cable lines", info_mode='on_hover')
m
###Output
_____no_output_____
###Markdown
Add a GeoJSON with random filled color to the map.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
m.add_geojson(
url, layer_name="Countries", fill_colors=['red', 'yellow', 'green', 'orange']
)
m
###Output
_____no_output_____
###Markdown
Use the `style_callback` function for assigning a random color to each polygon.
###Code
import random
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
def random_color(feature):
return {
'color': 'black',
'fillColor': random.choice(['red', 'yellow', 'green', 'orange']),
}
m.add_geojson(url, layer_name="Countries", style_callback=random_color)
m
###Output
_____no_output_____
###Markdown
Use custom `style` and `hover_style` functions.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
style = {
"stroke": True,
"color": "#0000ff",
"weight": 2,
"opacity": 1,
"fill": True,
"fillColor": "#0000ff",
"fillOpacity": 0.1,
}
hover_style = {"fillOpacity": 0.7}
m.add_geojson(url, layer_name="Countries", style=style, hover_style=hover_style)
m
###Output
_____no_output_____
###Markdown
Add shapefile
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
in_shp = 'https://github.com/giswqs/leafmap/raw/master/examples/data/countries.zip'
m.add_shp(in_shp, layer_name="Countries")
m
###Output
_____no_output_____
###Markdown
Add KML
###Code
import leafmap
m = leafmap.Map()
in_kml = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us-states.kml'
m.add_kml(in_kml, layer_name="US States KML")
m
###Output
_____no_output_____
###Markdown
Add GeoDataFrame
###Code
import geopandas as gpd
m = leafmap.Map()
gdf = gpd.read_file(
"https://github.com/giswqs/leafmap/raw/master/examples/data/cable-geo.geojson"
)
m.add_gdf(gdf, layer_name="Cable lines")
m
###Output
_____no_output_____
###Markdown
Read the GeoPandas sample dataset as a GeoDataFrame.
###Code
path_to_data = gpd.datasets.get_path("nybb")
gdf = gpd.read_file(path_to_data)
gdf
m = leafmap.Map()
m.add_gdf(gdf, layer_name="New York boroughs", fill_colors=["red", "green", "blue"])
m
###Output
_____no_output_____
###Markdown
Add point layerAdd a point layer using the interactive GUI.
###Code
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Add a point layer programmatically.
###Code
m = leafmap.Map()
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us_cities.geojson"
m.add_point_layer(url, popup=["name", "pop_max"], layer_name="US Cities")
m
###Output
_____no_output_____
###Markdown
Add vectorThe `add_vector` function supports any vector data format supported by GeoPandas.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
m.add_vector(
url, layer_name="Countries", fill_colors=['red', 'yellow', 'green', 'orange']
)
m
###Output
_____no_output_____
###Markdown
Download OSM data OSM from geocodeAdd OSM data of place(s) by name or ID to the map. Note that the leafmap custom layer control does not support GeoJSON, we need to use the ipyleaflet built-in layer control.
###Code
import leafmap
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_geocode("New York City", layer_name='NYC')
m
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_geocode("Chicago, Illinois", layer_name='Chicago, IL')
m
###Output
_____no_output_____
###Markdown
OSM from placeAdd OSM entities within boundaries of geocodable place(s) to the map.
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
place = "Bunker Hill, Los Angeles, California"
tags = {"building": True}
m.add_osm_from_place(place, tags, layer_name="Los Angeles, CA")
m
###Output
_____no_output_____
###Markdown
Show OSM feature tags.https://wiki.openstreetmap.org/wiki/Map_features
###Code
# leafmap.osm_tags_list()
###Output
_____no_output_____
###Markdown
OSM from address
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_address(
address="New York City", tags={"amenity": "bar"}, dist=1500, layer_name="NYC bars"
)
m
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_address(
address="New York City",
tags={"landuse": ["retail", "commercial"], "building": True},
dist=1000,
layer_name="NYC buildings",
)
m
###Output
_____no_output_____
###Markdown
OSM from bbox
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
north, south, east, west = 40.7551, 40.7454, -73.9738, -73.9965
m.add_osm_from_bbox(
north, south, east, west, tags={"amenity": "bar"}, layer_name="NYC bars"
)
m
###Output
_____no_output_____
###Markdown
OSM from pointAdd OSM entities within some distance N, S, E, W of a point to the map.
###Code
m = leafmap.Map(
center=[46.7808, -96.0156], zoom=12, toolbar_control=False, layers_control=True
)
m.add_osm_from_point(
center_point=(46.7808, -96.0156),
tags={"natural": "water"},
dist=10000,
layer_name="Lakes",
)
m
m = leafmap.Map(
center=[39.9170, 116.3908], zoom=15, toolbar_control=False, layers_control=True
)
m.add_osm_from_point(
center_point=(39.9170, 116.3908),
tags={"building": True, "natural": "water"},
dist=1000,
layer_name="Beijing",
)
m
###Output
_____no_output_____
###Markdown
OSM from viewAdd OSM entities within the current map view to the map.
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.set_center(-73.9854, 40.7500, 16)
m
m.add_osm_from_view(tags={"amenity": "bar", "building": True}, layer_name="New York")
###Output
_____no_output_____
###Markdown
Create a GeoPandas GeoDataFrame from place.
###Code
gdf = leafmap.osm_gdf_from_place("New York City", tags={"amenity": "bar"})
gdf
###Output
_____no_output_____
###Markdown
Use WhiteboxToolsUse the built-in toolbox to perform geospatial analysis. For example, you can perform depression filling using the sample DEM dataset downloaded in the above step.
###Code
import os
import leafmap
import urllib.request
###Output
_____no_output_____
###Markdown
Download a sample DEM dataset.
###Code
url = 'https://github.com/giswqs/whitebox-python/raw/master/whitebox/testdata/DEM.tif'
urllib.request.urlretrieve(url, "dem.tif")
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Display the toolbox using the default mode.
###Code
leafmap.whiteboxgui()
###Output
_____no_output_____
###Markdown
Display the toolbox using the collapsible tree mode. Note that the tree mode is not supported in Google Colab.
###Code
leafmap.whiteboxgui(tree=True)
###Output
_____no_output_____
###Markdown
Perform geospatial analysis using the [whitebox](https://github.com/giswqs/whitebox-python) package.
###Code
import os
import whitebox
wbt = whitebox.WhiteboxTools()
wbt.verbose = False
data_dir = os.getcwd()
wbt.set_working_dir(data_dir)
wbt.feature_preserving_smoothing("dem.tif", "smoothed.tif", filter=9)
wbt.breach_depressions("smoothed.tif", "breached.tif")
wbt.d_inf_flow_accumulation("breached.tif", "flow_accum.tif")
import matplotlib.pyplot as plt
import imageio
%matplotlib inline
original = imageio.imread(os.path.join(data_dir, 'dem.tif'))
smoothed = imageio.imread(os.path.join(data_dir, 'smoothed.tif'))
breached = imageio.imread(os.path.join(data_dir, 'breached.tif'))
flow_accum = imageio.imread(os.path.join(data_dir, 'flow_accum.tif'))
fig = plt.figure(figsize=(16, 11))
ax1 = fig.add_subplot(2, 2, 1)
ax1.set_title('Original DEM')
plt.imshow(original)
ax2 = fig.add_subplot(2, 2, 2)
ax2.set_title('Smoothed DEM')
plt.imshow(smoothed)
ax3 = fig.add_subplot(2, 2, 3)
ax3.set_title('Breached DEM')
plt.imshow(breached)
ax4 = fig.add_subplot(2, 2, 4)
ax4.set_title('Flow Accumulation')
plt.imshow(flow_accum)
plt.show()
###Output
_____no_output_____
###Markdown
Create basemap gallery
###Code
import leafmap
for basemap in leafmap.leafmap_basemaps:
print(basemap)
layers = list(leafmap.leafmap_basemaps.keys())[17:117]
leafmap.linked_maps(rows=20, cols=5, height="200px", layers=layers, labels=layers)
###Output
_____no_output_____
###Markdown
Create linked map
###Code
import leafmap
leafmap.leafmap_basemaps.keys()
layers = ['ROADMAP', 'HYBRID']
leafmap.linked_maps(rows=1, cols=2, height='400px', layers=layers)
layers = ['Stamen.Terrain', 'OpenTopoMap']
leafmap.linked_maps(rows=1, cols=2, height='400px', layers=layers)
###Output
_____no_output_____
###Markdown
Create a 2 * 2 linked map to visualize land cover change. Specify the `center` and `zoom` parameters to change the default map center and zoom level.
###Code
layers = [str(f"NLCD {year} CONUS Land Cover") for year in [2001, 2006, 2011, 2016]]
labels = [str(f"NLCD {year}") for year in [2001, 2006, 2011, 2016]]
leafmap.linked_maps(
rows=2,
cols=2,
height='300px',
layers=layers,
labels=labels,
center=[36.1, -115.2],
zoom=9,
)
###Output
_____no_output_____
###Markdown
Create split-panel mapCreate a split-panel map by specifying the `left_layer` and `right_layer`, which can be chosen from the basemap names, or any custom XYZ tile layer.
###Code
import leafmap
leafmap.split_map(left_layer="ROADMAP", right_layer="HYBRID")
###Output
_____no_output_____
###Markdown
Hide the zoom control from the map.
###Code
leafmap.split_map(
left_layer="Esri.WorldTopoMap", right_layer="OpenTopoMap", zoom_control=False
)
###Output
_____no_output_____
###Markdown
Add labels to the map and change the default map center and zoom level.
###Code
leafmap.split_map(
left_layer="NLCD 2001 CONUS Land Cover",
right_layer="NLCD 2019 CONUS Land Cover",
left_label="2001",
right_label="2019",
label_position="bottom",
center=[36.1, -114.9],
zoom=10,
)
###Output
_____no_output_____
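The split-map examples above all used basemap names; as the description notes, a custom XYZ tile layer should work as well. A hedged sketch, reusing the Google Satellite tile URL from earlier in this notebook (the URL-string form is an assumption, since the workshop itself only demonstrates basemap names):

```python
import leafmap

google_satellite = "https://mt1.google.com/vt/lyrs=y&x={x}&y={y}&z={z}"
# Compare a custom XYZ tile layer (left) against a built-in basemap (right).
leafmap.split_map(
    left_layer=google_satellite,
    right_layer="OpenTopoMap",
    left_label="Google Satellite",
    right_label="OpenTopoMap",
)
```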
###Markdown
Create heat mapSpecify the file path to the CSV. It can be either a local file or a URL on the Internet.
###Code
import leafmap
m = leafmap.Map(layers_control=True)
in_csv = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/world_cities.csv"
m.add_heatmap(
in_csv,
latitude="latitude",
longitude='longitude',
value="pop_max",
name="Heat map",
radius=20,
)
m
###Output
_____no_output_____
###Markdown
Use the folium plotting backend.
###Code
from leafmap import foliumap
m = foliumap.Map()
in_csv = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/world_cities.csv"
m.add_heatmap(
in_csv,
latitude="latitude",
longitude='longitude',
value="pop_max",
name="Heat map",
radius=20,
)
colors = ['blue', 'lime', 'red']
m.add_colorbar(colors=colors, vmin=0, vmax=10000)
m.add_title("World Population Heat Map", font_size="20px", align="center")
m
###Output
_____no_output_____
###Markdown
Save map to HTML
###Code
import leafmap
m = leafmap.Map()
m.add_basemap("Esri.NatGeoWorldMap")
m
###Output
_____no_output_____
###Markdown
Specify the output HTML file name to save the map as a web page.
###Code
m.to_html("mymap.html")
###Output
_____no_output_____
###Markdown
If the output HTML file name is not provided, the function will return a string containing the source code of the HTML file.
###Code
html = m.to_html()
# print(html)
###Output
_____no_output_____
###Markdown
Use kepler plotting backend
###Code
import leafmap.kepler as leafmap
###Output
_____no_output_____
###Markdown
Create an interactive mapCreate an interactive map. You can specify various parameters to initialize the map, such as `center`, `zoom`, `height`, and `widescreen`.
###Code
m = leafmap.Map(center=[40, -100], zoom=2, height=600, widescreen=False)
m
###Output
_____no_output_____
###Markdown
If you encounter an error saying `Error displaying widget: model not found` when trying to display the map, you can use `m.static_map()` as a workaround until this [kepler.gl bug](https://github.com/keplergl/kepler.gl/issues/1165) has been resolved.
###Code
# m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Add CSVAdd a CSV to the map. If you have a map config file, you can directly apply config to the map.
###Code
m = leafmap.Map(center=[37.7621, -122.4143], zoom=12)
in_csv = (
'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/hex_data.csv'
)
config = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/hex_config.json'
m.add_csv(in_csv, layer_name="hex_data", config=config)
m
m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Save map configSave the map configuration as a JSON file.
###Code
m.save_config("cache/config.json")
###Output
_____no_output_____
###Markdown
Save map as htmlSave the map to an interactive html.
###Code
m.to_html(outfile="cache/kepler_hex.html")
###Output
_____no_output_____
###Markdown
Add GeoJSONAdd a GeoJSON with US state boundaries to the map.
###Code
m = leafmap.Map(center=[50, -110], zoom=2)
polygons = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us-states.json'
m.add_geojson(polygons, layer_name="Countries")
m
m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Add shapefileAdd a shapefile to the map.
###Code
m = leafmap.Map(center=[20, 0], zoom=1)
in_shp = "https://github.com/giswqs/leafmap/raw/master/examples/data/countries.zip"
m.add_shp(in_shp, "Countries")
m
m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Add GeoDataFrameAdd a GeoPandas GeoDataFrame to the map.
###Code
import geopandas as gpd
gdf = gpd.read_file(
"https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/world_cities.geojson"
)
gdf
m = leafmap.Map(center=[20, 0], zoom=1)
m.add_gdf(gdf, "World cities")
m
# m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Use Planet imageryFirst, you need to [sign up](https://www.planet.com/login/?mode=signup) for a Planet account and get an API key. See https://developers.planet.com/quickstart/apis. Uncomment the following line to pass in your API key.
###Code
import os
import leafmap
if os.environ.get("PLANET_API_KEY") is None:
os.environ["PLANET_API_KEY"] = 'your-api-key'
quarterly_tiles = leafmap.planet_quarterly_tiles()
for tile in quarterly_tiles:
print(tile)
monthly_tiles = leafmap.planet_monthly_tiles()
for tile in monthly_tiles:
print(tile)
###Output
_____no_output_____
###Markdown
Add a Planet monthly mosaic by specifying year and month.
###Code
m = leafmap.Map()
m.add_planet_by_month(year=2020, month=8)
m
###Output
_____no_output_____
###Markdown
Add a Planet quarterly mosaic by specifying year and quarter.
###Code
m = leafmap.Map()
m.add_planet_by_quarter(year=2019, quarter=2)
m
###Output
_____no_output_____
###Markdown
Use timeseries inspector
###Code
import os
import leafmap
if os.environ.get("PLANET_API_KEY") is None:
os.environ["PLANET_API_KEY"] = 'your-api-key'
tiles = leafmap.planet_tiles()
leafmap.ts_inspector(tiles, center=[40, -100], zoom=4)
###Output
_____no_output_____
###Markdown
Use time slider Use the time slider to visualize Planet quarterly mosaic.
###Code
import os
import leafmap
if os.environ.get("PLANET_API_KEY") is None:
os.environ["PLANET_API_KEY"] = 'your-api-key'
###Output
_____no_output_____
###Markdown
Specify the map center and zoom level.
###Code
m = leafmap.Map(center=[38.2659, -103.2447], zoom=13)
m
###Output
_____no_output_____
###Markdown
Use the time slider to visualize Planet quarterly mosaic.
###Code
m = leafmap.Map()
layers_dict = leafmap.planet_quarterly_tiles()
m.add_time_slider(layers_dict, time_interval=1)
m
###Output
_____no_output_____
###Markdown
Use the time slider to visualize basemaps.
###Code
m = leafmap.Map()
m.clear_layers()
layers_dict = leafmap.basemap_xyz_tiles()
m.add_time_slider(layers_dict, time_interval=1)
m
###Output
_____no_output_____
m
m = leafmap.Map(center=(51.5, -0.15), zoom=17)
m
###Output
_____no_output_____
###Markdown
Change map size
###Code
m = leafmap.Map(height="400px", width="800px")
m
###Output
_____no_output_____
###Markdown
Set control visibilityWhen creating a map, set the following controls to either `True` or `False` as appropriate.* attribution_control* draw_control* fullscreen_control* layers_control* measure_control* scale_control* toolbar_control
###Code
m = leafmap.Map(
draw_control=False,
measure_control=False,
fullscreen_control=False,
attribution_control=False,
)
m
###Output
_____no_output_____
###Markdown
Remove all controls from the map.
###Code
m = leafmap.Map()
m.clear_controls()
m
###Output
_____no_output_____
###Markdown
Change basemapsSpecify the Google basemap to use; it can be one of ["ROADMAP", "TERRAIN", "SATELLITE", "HYBRID"].
###Code
import leafmap
m = leafmap.Map(google_map="TERRAIN") # HYBRID, ROADMAP, SATELLITE, TERRAIN
m
###Output
_____no_output_____
###Markdown
Add a basemap using the `add_basemap()` function.
###Code
m = leafmap.Map()
m.add_basemap("Esri.NatGeoWorldMap")
m
###Output
_____no_output_____
###Markdown
Print out the list of available basemaps.
###Code
for basemap in leafmap.basemaps:
print(basemap)
###Output
_____no_output_____
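###Markdown
With hundreds of entries, scrolling through the full list can be tedious. Since `leafmap.basemaps` is keyed by basemap name (as the loop above suggests), a plain Python filter can narrow it down; the "Esri" keyword below is just an example.
###Code
# A minimal sketch: list only basemap names containing a keyword.
keyword = "Esri"
matches = [name for name in leafmap.basemaps if keyword in name]
print(len(matches), "basemaps match", keyword)
print(matches[:5])
###Output
_____no_output_____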
###Markdown

###Code
m = leafmap.Map()
m.add_tile_layer(
url="https://mt1.google.com/vt/lyrs=y&x={x}&y={y}&z={z}",
name="Google Satellite",
attribution="Google",
)
m
###Output
_____no_output_____
###Markdown
Add tile layers Add XYZ tile layer
###Code
import leafmap
m = leafmap.Map()
m.add_tile_layer(
url="https://mt1.google.com/vt/lyrs=y&x={x}&y={y}&z={z}",
name="Google Satellite",
attribution="Google",
)
m
###Output
_____no_output_____
###Markdown
Add WMS tile layerMore WMS basemaps can be found at the following websites:- USGS National Map: https://viewer.nationalmap.gov/services- MRLC NLCD Land Cover data: https://www.mrlc.gov/data-services-page- FWS NWI Wetlands data: https://www.fws.gov/wetlands/Data/Web-Map-Services.html
###Code
m = leafmap.Map()
naip_url = 'https://services.nationalmap.gov/arcgis/services/USGSNAIPImagery/ImageServer/WMSServer?'
m.add_wms_layer(
url=naip_url, layers='0', name='NAIP Imagery', format='image/png', shown=True
)
m
###Output
_____no_output_____
###Markdown
Add xyzservices providerAdd a layer from [xyzservices](https://github.com/geopandas/xyzservices) provider object.
###Code
import os
import xyzservices.providers as xyz
basemap = xyz.OpenTopoMap
basemap
m = leafmap.Map()
m.add_basemap(basemap)
m
###Output
_____no_output_____
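###Markdown
Any other provider object from `xyzservices` can be passed to `add_basemap()` in the same way; the CartoDB Positron provider below is only an example.
###Code
# A minimal sketch using another xyzservices provider.
m = leafmap.Map()
m.add_basemap(xyz.CartoDB.Positron)
m
###Output
_____no_output_____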
###Markdown
Add COG/STAC layersA Cloud Optimized GeoTIFF (COG) is a regular GeoTIFF file, aimed at being hosted on an HTTP file server, with an internal organization that enables more efficient workflows on the cloud. It does this by leveraging the ability of clients issuing HTTP GET range requests to ask for just the parts of a file they need. More information about COG can be found at https://www.cogeo.org. Some publicly available Cloud Optimized GeoTIFFs:* https://stacindex.org/* https://cloud.google.com/storage/docs/public-datasets/landsat* https://www.digitalglobe.com/ecosystem/open-data* https://earthexplorer.usgs.gov/For this demo, we will use data from https://www.maxar.com/open-data/california-colorado-fires for mapping California and Colorado fires. A list of COGs can be found [here](https://github.com/giswqs/leafmap/blob/master/examples/data/cog_files.txt). Add COG layer
###Code
import leafmap
m = leafmap.Map()
url = 'https://opendata.digitalglobe.com/events/california-fire-2020/pre-event/2018-02-16/pine-gulch-fire20/1030010076004E00.tif'
url2 = 'https://opendata.digitalglobe.com/events/california-fire-2020/post-event/2020-08-14/pine-gulch-fire20/10300100AAC8DD00.tif'
m.add_cog_layer(url, name="Fire (pre-event)")
m.add_cog_layer(url2, name="Fire (post-event)")
m
###Output
_____no_output_____
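###Markdown
If you want to check where a COG is located before adding it, you can query its extent first. This is only a sketch and assumes the `cog_bounds` and `cog_center` helpers are available in your leafmap version.
###Code
# A minimal sketch (assumes leafmap.cog_bounds/cog_center are available).
print(leafmap.cog_bounds(url))  # bounding box of the COG
print(leafmap.cog_center(url))  # center coordinates of the COG
###Output
_____no_output_____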
###Markdown
Add STAC layerThe SpatioTemporal Asset Catalog (STAC) specification provides a common language to describe a range of geospatial information, so it can more easily be indexed and discovered. A 'spatiotemporal asset' is any file that represents information about the earth captured in a certain space and time. The initial focus is primarily remotely-sensed imagery (from satellites, but also planes, drones, balloons, etc), but the core is designed to be extensible to SAR, full motion video, point clouds, hyperspectral, LiDAR and derived data like NDVI, Digital Elevation Models, mosaics, etc. More information about STAC can be found at https://stacspec.org/Some publicly available SpatioTemporal Asset Catalog (STAC):* https://stacindex.orgFor this demo, we will use STAC assets from https://stacindex.org/catalogs/spot-orthoimages-canada-2005/?t=catalogs
###Code
m = leafmap.Map()
url = 'https://canada-spot-ortho.s3.amazonaws.com/canada_spot_orthoimages/canada_spot5_orthoimages/S5_2007/S5_11055_6057_20070622/S5_11055_6057_20070622.json'
m.add_stac_layer(url, bands=['B3', 'B2', 'B1'], name='False color')
m
###Output
_____no_output_____
###Markdown
Add local raster datasetsThe `add_raster` function relies on the `xarray_leaflet` package and is only available for the ipyleaflet plotting backend. Therefore, Google Colab is not supported. Note that `xarray_leaflet` does not work properly on Windows ([source](https://github.com/davidbrochart/xarray_leaflet/issues/30)).
###Code
import os
import leafmap
###Output
_____no_output_____
###Markdown
Download sample raster datasets. More datasets can be downloaded from https://viewer.nationalmap.gov/basic/
###Code
out_dir = os.getcwd()
landsat = os.path.join(out_dir, 'landsat.tif')
dem = os.path.join(out_dir, 'dem.tif')
###Output
_____no_output_____
###Markdown
Download a small Landsat imagery.
###Code
if not os.path.exists(landsat):
landsat_url = 'https://drive.google.com/file/d/1EV38RjNxdwEozjc9m0FcO3LFgAoAX1Uw/view?usp=sharing'
leafmap.download_from_gdrive(landsat_url, 'landsat.tif', out_dir, unzip=False)
###Output
_____no_output_____
###Markdown
Download a small DEM dataset.
###Code
if not os.path.exists(dem):
dem_url = 'https://drive.google.com/file/d/1vRkAWQYsLWCi6vcTMk8vLxoXMFbdMFn8/view?usp=sharing'
leafmap.download_from_gdrive(dem_url, 'dem.tif', out_dir, unzip=False)
m = leafmap.Map()
###Output
_____no_output_____
###Markdown
Add local raster datasets to the mapMore colormaps can be found at https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html
###Code
m.add_raster(dem, colormap='terrain', layer_name='DEM')
m.add_raster(landsat, bands=[5, 4, 3], layer_name='Landsat')
m
###Output
_____no_output_____
###Markdown
Add legend Add built-in legend
###Code
import leafmap
###Output
_____no_output_____
###Markdown
List all available built-in legends.
###Code
legends = leafmap.builtin_legends
for legend in legends:
print(legend)
###Output
_____no_output_____
###Markdown
Add a WMS layer and built-in legend to the map.
###Code
m = leafmap.Map()
url = "https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2019_Land_Cover_L48/wms?"
m.add_wms_layer(
url,
layers="NLCD_2019_Land_Cover_L48",
name="NLCD 2019 CONUS Land Cover",
format="image/png",
transparent=True,
)
m.add_legend(builtin_legend='NLCD')
m
###Output
_____no_output_____
###Markdown
Add U.S. National Wetlands Inventory (NWI). More info at https://www.fws.gov/wetlands.
###Code
m = leafmap.Map(google_map="HYBRID")
url1 = "https://www.fws.gov/wetlands/arcgis/services/Wetlands/MapServer/WMSServer?"
m.add_wms_layer(
url1, layers="1", format='image/png', transparent=True, name="NWI Wetlands Vector"
)
url2 = "https://www.fws.gov/wetlands/arcgis/services/Wetlands_Raster/ImageServer/WMSServer?"
m.add_wms_layer(
url2, layers="0", format='image/png', transparent=True, name="NWI Wetlands Raster"
)
m.add_legend(builtin_legend="NWI")
m
###Output
_____no_output_____
###Markdown
Add custom legendThere are two ways you can add custom legends:1. Define legend labels and colors2. Define legend dictionaryDefine legend keys and colors.
###Code
m = leafmap.Map()
labels = ['One', 'Two', 'Three', 'Four', 'etc']
# color can be defined using either hex code or RGB (0-255, 0-255, 0-255)
colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']
# colors = [(255, 0, 0), (127, 255, 0), (127, 18, 25), (36, 70, 180), (96, 68, 123)]
m.add_legend(title='Legend', labels=labels, colors=colors)
m
###Output
_____no_output_____
###Markdown
Define a legend dictionary.
###Code
m = leafmap.Map()
url = "https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2019_Land_Cover_L48/wms?"
m.add_wms_layer(
url,
layers="NLCD_2019_Land_Cover_L48",
name="NLCD 2019 CONUS Land Cover",
format="image/png",
transparent=True,
)
legend_dict = {
'11 Open Water': '466b9f',
'12 Perennial Ice/Snow': 'd1def8',
'21 Developed, Open Space': 'dec5c5',
'22 Developed, Low Intensity': 'd99282',
'23 Developed, Medium Intensity': 'eb0000',
'24 Developed High Intensity': 'ab0000',
'31 Barren Land (Rock/Sand/Clay)': 'b3ac9f',
'41 Deciduous Forest': '68ab5f',
'42 Evergreen Forest': '1c5f2c',
'43 Mixed Forest': 'b5c58f',
'51 Dwarf Scrub': 'af963c',
'52 Shrub/Scrub': 'ccb879',
'71 Grassland/Herbaceous': 'dfdfc2',
'72 Sedge/Herbaceous': 'd1d182',
'73 Lichens': 'a3cc51',
'74 Moss': '82ba9e',
'81 Pasture/Hay': 'dcd939',
'82 Cultivated Crops': 'ab6c28',
'90 Woody Wetlands': 'b8d9eb',
'95 Emergent Herbaceous Wetlands': '6c9fb8',
}
m.add_legend(title="NLCD Land Cover Classification", legend_dict=legend_dict)
m
###Output
_____no_output_____
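###Markdown
If you already have the class names and colors in parallel lists, you can build the legend dictionary with plain Python instead of typing it out by hand; the three classes below are placeholders.
###Code
# A minimal sketch: zip labels and colors into a legend dictionary.
legend_labels = ['Open Water', 'Evergreen Forest', 'Developed, Medium Intensity']
legend_colors = ['466b9f', '1c5f2c', 'eb0000']
legend_dict = dict(zip(legend_labels, legend_colors))
m = leafmap.Map()
m.add_legend(title="Example legend", legend_dict=legend_dict)
m
###Output
_____no_output_____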
###Markdown
Add colormapThe colormap functionality requires the ipyleaflet plotting backend. Folium is not supported.
###Code
import leafmap
import leafmap.colormaps as cm
###Output
_____no_output_____
###Markdown
Common colormapsColor palette for DEM data.
###Code
cm.palettes.dem
###Output
_____no_output_____
###Markdown
Show the DEM palette.
###Code
cm.plot_colormap(colors=cm.palettes.dem, axis_off=True)
###Output
_____no_output_____
###Markdown
Color palette for NDVI data.
###Code
cm.palettes.ndvi
###Output
_____no_output_____
###Markdown
Show the NDVI palette.
###Code
cm.plot_colormap(colors=cm.palettes.ndvi)
###Output
_____no_output_____
###Markdown
Custom colormapsSpecify the number of classes for a palette.
###Code
cm.get_palette('terrain', n_class=8)
###Output
_____no_output_____
###Markdown
Show the terrain palette with 8 classes.
###Code
cm.plot_colormap(colors=cm.get_palette('terrain', n_class=8))
###Output
_____no_output_____
###Markdown
Create a palette with custom colors, label, and font size.
###Code
cm.plot_colormap(colors=["red", "green", "blue"], label="Temperature", font_size=12)
###Output
_____no_output_____
###Markdown
Create a discrete color palette.
###Code
cm.plot_colormap(
colors=["red", "green", "blue"], discrete=True, label="Temperature", font_size=12
)
###Output
_____no_output_____
###Markdown
Specify the width and height for the palette.
###Code
cm.plot_colormap(
'terrain',
label="Elevation",
width=8.0,
height=0.4,
orientation='horizontal',
vmin=0,
vmax=1000,
)
###Output
_____no_output_____
###Markdown
Change the orientation of the colormap to be vertical.
###Code
cm.plot_colormap(
'terrain',
label="Elevation",
width=0.4,
height=4,
orientation='vertical',
vmin=0,
vmax=1000,
)
###Output
_____no_output_____
###Markdown
Horizontal colormapAdd a horizontal colorbar to an interactive map.
###Code
m = leafmap.Map()
m.add_basemap("OpenTopoMap")
m.add_colormap(
'terrain',
label="Elevation",
width=8.0,
height=0.4,
orientation='horizontal',
vmin=0,
vmax=4000,
)
m
###Output
_____no_output_____
###Markdown
Vertical colormapAdd a vertical colorbar to an interactive map.
###Code
m = leafmap.Map()
m.add_basemap("OpenTopoMap")
m.add_colormap(
'terrain',
label="Elevation",
width=0.4,
height=4,
orientation='vertical',
vmin=0,
vmax=4000,
)
m
###Output
_____no_output_____
###Markdown
List of available colormaps
###Code
cm.plot_colormaps(width=12, height=0.4)
###Output
_____no_output_____
###Markdown
Add vector datasets Add CSVRead a CSV as a Pandas DataFrame.
###Code
import os
import leafmap
in_csv = 'https://raw.githubusercontent.com/giswqs/data/main/world/world_cities.csv'
df = leafmap.csv_to_pandas(in_csv)
df
###Output
_____no_output_____
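###Markdown
A couple of quick pandas checks help confirm which columns hold the coordinates before mapping.
###Code
# A minimal sketch: inspect the shape and column names of the DataFrame.
print(df.shape)
print(df.columns.tolist())
df.head()
###Output
_____no_output_____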
###Markdown
Create a point layer from a CSV file containing lat/long information.
###Code
m = leafmap.Map()
m.add_xy_data(in_csv, x="longitude", y="latitude", layer_name="World Cities")
m
###Output
_____no_output_____
###Markdown
Set the output directory.
###Code
out_dir = os.getcwd()
out_shp = os.path.join(out_dir, 'world_cities.shp')
###Output
_____no_output_____
###Markdown
Convert a CSV file containing lat/long information to a shapefile.
###Code
leafmap.csv_to_shp(in_csv, out_shp)
###Output
_____no_output_____
###Markdown
Convert a CSV file containing lat/long information to a GeoJSON.
###Code
out_geojson = os.path.join(out_dir, 'world_cities.geojson')
leafmap.csv_to_geojson(in_csv, out_geojson)
###Output
_____no_output_____
###Markdown
Convert a CSV file containing lat/long information to a GeoPandas GeoDataFrame.
###Code
gdf = leafmap.csv_to_gdf(in_csv)
gdf
###Output
_____no_output_____
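###Markdown
Because the result is a regular GeoPandas GeoDataFrame, it can be saved to any format GeoPandas supports; the GeoPackage file name below is arbitrary.
###Code
# A minimal sketch: persist the GeoDataFrame to a GeoPackage.
out_gpkg = os.path.join(out_dir, 'world_cities.gpkg')
gdf.to_file(out_gpkg, driver='GPKG')
###Output
_____no_output_____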
###Markdown
Add GeoJSONAdd a GeoJSON to the map.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
in_geojson = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/cable_geo.geojson'
m.add_geojson(in_geojson, layer_name="Cable lines", info_mode='on_hover')
m
###Output
_____no_output_____
###Markdown
Add a GeoJSON with random filled color to the map.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
m.add_geojson(
url, layer_name="Countries", fill_colors=['red', 'yellow', 'green', 'orange']
)
m
###Output
_____no_output_____
###Markdown
Use the `style_callback` function for assigning a random color to each polygon.
###Code
import random
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
def random_color(feature):
return {
'color': 'black',
'fillColor': random.choice(['red', 'yellow', 'green', 'orange']),
}
m.add_geojson(url, layer_name="Countries", style_callback=random_color)
m
###Output
_____no_output_____
###Markdown
Use custom `style` and `hover_style` functions.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
style = {
"stroke": True,
"color": "#0000ff",
"weight": 2,
"opacity": 1,
"fill": True,
"fillColor": "#0000ff",
"fillOpacity": 0.1,
}
hover_style = {"fillOpacity": 0.7}
m.add_geojson(url, layer_name="Countries", style=style, hover_style=hover_style)
m
###Output
_____no_output_____
###Markdown
Add shapefile
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
in_shp = 'https://github.com/giswqs/leafmap/raw/master/examples/data/countries.zip'
m.add_shp(in_shp, layer_name="Countries")
m
###Output
_____no_output_____
###Markdown
Add KML
###Code
import leafmap
m = leafmap.Map()
in_kml = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us_states.kml'
m.add_kml(in_kml, layer_name="US States KML")
m
###Output
_____no_output_____
###Markdown
Add GeoDataFrame
###Code
import geopandas as gpd
m = leafmap.Map()
gdf = gpd.read_file(
"https://github.com/giswqs/leafmap/raw/master/examples/data/cable_geo.geojson"
)
m.add_gdf(gdf, layer_name="Cable lines")
m
###Output
_____no_output_____
###Markdown
Read the GeoPandas sample dataset as a GeoDataFrame.
###Code
path_to_data = gpd.datasets.get_path("nybb")
gdf = gpd.read_file(path_to_data)
gdf
m = leafmap.Map()
m.add_gdf(gdf, layer_name="New York boroughs", fill_colors=["red", "green", "blue"])
m
###Output
_____no_output_____
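###Markdown
The bundled nybb data ships in a projected CRS (EPSG:2263). If you want to work with the geometries in latitude/longitude yourself, you can check the CRS and reproject with GeoPandas, as sketched below.
###Code
# A minimal sketch: check the CRS and reproject to WGS84.
print(gdf.crs)
gdf_wgs84 = gdf.to_crs(epsg=4326)
gdf_wgs84.head()
###Output
_____no_output_____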
###Markdown
Add point layerAdd a point layer using the interactive GUI.
###Code
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Add a point layer programmatically.
###Code
m = leafmap.Map()
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us_cities.geojson"
m.add_point_layer(url, popup=["name", "pop_max"], layer_name="US Cities")
m
###Output
_____no_output_____
###Markdown
Add vectorThe `add_vector` function supports any vector data format supported by GeoPandas.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
m.add_vector(
url, layer_name="Countries", fill_colors=['red', 'yellow', 'green', 'orange']
)
m
###Output
_____no_output_____
###Markdown
Download OSM data OSM from geocodeAdd OSM data of place(s) by name or ID to the map. Note that the leafmap custom layer control does not support GeoJSON, so we need to use the ipyleaflet built-in layer control.
###Code
import leafmap
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_geocode("New York City", layer_name='NYC')
m
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_geocode("Chicago, Illinois", layer_name='Chicago, IL')
m
###Output
_____no_output_____
###Markdown
OSM from placeAdd OSM entities within boundaries of geocodable place(s) to the map.
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
place = "Bunker Hill, Los Angeles, California"
tags = {"building": True}
m.add_osm_from_place(place, tags, layer_name="Los Angeles, CA")
m
###Output
_____no_output_____
###Markdown
Show OSM feature tags.https://wiki.openstreetmap.org/wiki/Map_features
###Code
# leafmap.osm_tags_list()
###Output
_____no_output_____
###Markdown
OSM from address
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_address(
address="New York City", tags={"amenity": "bar"}, dist=1500, layer_name="NYC bars"
)
m
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_address(
address="New York City",
tags={"landuse": ["retail", "commercial"], "building": True},
dist=1000,
layer_name="NYC buildings",
)
m
###Output
_____no_output_____
###Markdown
OSM from bbox
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
north, south, east, west = 40.7551, 40.7454, -73.9738, -73.9965
m.add_osm_from_bbox(
north, south, east, west, tags={"amenity": "bar"}, layer_name="NYC bars"
)
m
###Output
_____no_output_____
###Markdown
OSM from pointAdd OSM entities within some distance N, S, E, W of a point to the map.
###Code
m = leafmap.Map(
center=[46.7808, -96.0156], zoom=12, toolbar_control=False, layers_control=True
)
m.add_osm_from_point(
center_point=(46.7808, -96.0156),
tags={"natural": "water"},
dist=10000,
layer_name="Lakes",
)
m
m = leafmap.Map(
center=[39.9170, 116.3908], zoom=15, toolbar_control=False, layers_control=True
)
m.add_osm_from_point(
center_point=(39.9170, 116.3908),
tags={"building": True, "natural": "water"},
dist=1000,
layer_name="Beijing",
)
m
###Output
_____no_output_____
###Markdown
OSM from viewAdd OSM entities within the current map view to the map.
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.set_center(-73.9854, 40.7500, 16)
m
m.add_osm_from_view(tags={"amenity": "bar", "building": True}, layer_name="New York")
###Output
_____no_output_____
###Markdown
Create a GeoPandas GeoDataFrame from place.
###Code
gdf = leafmap.osm_gdf_from_place("New York City", tags={"amenity": "bar"})
gdf
###Output
_____no_output_____
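###Markdown
The returned GeoDataFrame can be inspected and exported like any other GeoPandas object. OSM extracts often carry many sparse columns, so the sketch below keeps only the geometry before writing; the output file name is arbitrary.
###Code
# A minimal sketch: inspect the extract and export the geometries to GeoJSON.
print(gdf.shape)
print(list(gdf.columns)[:10])
subset = gdf[["geometry"]].reset_index(drop=True)
subset.to_file("nyc_bars.geojson", driver="GeoJSON")
###Output
_____no_output_____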
###Markdown
Use WhiteboxToolsUse the built-in toolbox to perform geospatial analysis. For example, you can perform depression filling using the sample DEM dataset downloaded in the above step.
###Code
import os
import leafmap
import urllib.request
###Output
_____no_output_____
###Markdown
Download a sample DEM dataset.
###Code
url = 'https://github.com/giswqs/whitebox-python/raw/master/whitebox/testdata/DEM.tif'
urllib.request.urlretrieve(url, "dem.tif")
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Display the toolbox using the default mode.
###Code
leafmap.whiteboxgui()
###Output
_____no_output_____
###Markdown
Display the toolbox using the collapsible tree mode. Note that the tree mode does not support Google Colab.
###Code
leafmap.whiteboxgui(tree=True)
###Output
_____no_output_____
###Markdown
Perform geospatial analysis using the [whitebox](https://github.com/giswqs/whitebox-python) package.
###Code
import os
import whitebox
wbt = whitebox.WhiteboxTools()
wbt.verbose = False
data_dir = os.getcwd()
wbt.set_working_dir(data_dir)
wbt.feature_preserving_smoothing("dem.tif", "smoothed.tif", filter=9)
wbt.breach_depressions("smoothed.tif", "breached.tif")
wbt.d_inf_flow_accumulation("breached.tif", "flow_accum.tif")
import matplotlib.pyplot as plt
import imageio
%matplotlib inline
original = imageio.imread(os.path.join(data_dir, 'dem.tif'))
smoothed = imageio.imread(os.path.join(data_dir, 'smoothed.tif'))
breached = imageio.imread(os.path.join(data_dir, 'breached.tif'))
flow_accum = imageio.imread(os.path.join(data_dir, 'flow_accum.tif'))
fig = plt.figure(figsize=(16, 11))
ax1 = fig.add_subplot(2, 2, 1)
ax1.set_title('Original DEM')
plt.imshow(original)
ax2 = fig.add_subplot(2, 2, 2)
ax2.set_title('Smoothed DEM')
plt.imshow(smoothed)
ax3 = fig.add_subplot(2, 2, 3)
ax3.set_title('Breached DEM')
plt.imshow(breached)
ax4 = fig.add_subplot(2, 2, 4)
ax4.set_title('Flow Accumulation')
plt.imshow(flow_accum)
plt.show()
###Output
_____no_output_____
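###Markdown
You can also inspect the derived rasters interactively by adding them to a leafmap map with `add_raster`. As noted earlier, this relies on the `xarray_leaflet` backend, so it will not work on Google Colab or Windows.
###Code
# A minimal sketch: overlay the flow accumulation grid on an interactive map.
m = leafmap.Map()
m.add_raster(
    os.path.join(data_dir, 'flow_accum.tif'),
    colormap='terrain',
    layer_name='Flow Accumulation',
)
m
###Output
_____no_output_____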
###Markdown
Create basemap gallery
###Code
import leafmap
for basemap in leafmap.basemaps:
print(basemap)
layers = list(leafmap.basemaps.keys())[17:117]
leafmap.linked_maps(rows=20, cols=5, height="200px", layers=layers, labels=layers)
###Output
_____no_output_____
###Markdown
Create linked map
###Code
import leafmap
leafmap.basemaps.keys()
layers = ['ROADMAP', 'HYBRID']
leafmap.linked_maps(rows=1, cols=2, height='400px', layers=layers)
layers = ['Stamen.Terrain', 'OpenTopoMap']
leafmap.linked_maps(rows=1, cols=2, height='400px', layers=layers)
###Output
_____no_output_____
###Markdown
Create a 2 * 2 linked map to visualize land cover change. Specify the `center` and `zoom` parameters to change the default map center and zoom level.
###Code
layers = [f"NLCD {year} CONUS Land Cover" for year in [2001, 2006, 2011, 2016]]
labels = [f"NLCD {year}" for year in [2001, 2006, 2011, 2016]]
leafmap.linked_maps(
rows=2,
cols=2,
height='300px',
layers=layers,
labels=labels,
center=[36.1, -115.2],
zoom=9,
)
###Output
_____no_output_____
###Markdown
Create split-panel mapCreate a split-panel map by specifying the `left_layer` and `right_layer`, which can be chosen from the basemap names, or any custom XYZ tile layer.
###Code
import leafmap
leafmap.split_map(left_layer="ROADMAP", right_layer="HYBRID")
###Output
_____no_output_____
###Markdown
Hide the zoom control from the map.
###Code
leafmap.split_map(
left_layer="Esri.WorldTopoMap", right_layer="OpenTopoMap", zoom_control=False
)
###Output
_____no_output_____
###Markdown
Add labels to the map and change the default map center and zoom level.
###Code
leafmap.split_map(
left_layer="NLCD 2001 CONUS Land Cover",
right_layer="NLCD 2019 CONUS Land Cover",
left_label="2001",
right_label="2019",
label_position="bottom",
center=[36.1, -114.9],
zoom=10,
)
###Output
_____no_output_____
###Markdown
Create heat mapSpecify the file path to the CSV. It can be either a local file or one on the Internet.
###Code
import leafmap
m = leafmap.Map(layers_control=True)
in_csv = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/world_cities.csv"
m.add_heatmap(
in_csv,
latitude="latitude",
longitude='longitude',
value="pop_max",
name="Heat map",
radius=20,
)
m
###Output
_____no_output_____
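###Markdown
If you only want to map a subset of the points, you can filter the CSV with pandas first and pass the filtered file to `add_heatmap`; the population threshold and file name below are arbitrary.
###Code
# A minimal sketch: keep only the largest cities before building the heat map.
import pandas as pd

df = pd.read_csv(in_csv)
big = df[df["pop_max"] > 5_000_000]
big.to_csv("big_cities.csv", index=False)
m = leafmap.Map(layers_control=True)
m.add_heatmap(
    "big_cities.csv",
    latitude="latitude",
    longitude="longitude",
    value="pop_max",
    name="Large cities",
    radius=25,
)
m
###Output
_____no_output_____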
###Markdown
Use the folium plotting backend.
###Code
from leafmap import foliumap
m = foliumap.Map()
in_csv = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/world_cities.csv"
m.add_heatmap(
in_csv,
latitude="latitude",
longitude='longitude',
value="pop_max",
name="Heat map",
radius=20,
)
colors = ['blue', 'lime', 'red']
m.add_colorbar(colors=colors, vmin=0, vmax=10000)
m.add_title("World Population Heat Map", font_size="20px", align="center")
m
###Output
_____no_output_____
###Markdown
Save map to HTML
###Code
import leafmap
m = leafmap.Map()
m.add_basemap("Esri.NatGeoWorldMap")
m
###Output
_____no_output_____
###Markdown
Specify the output HTML file name to save the map as a web page.
###Code
m.to_html("mymap.html")
###Output
_____no_output_____
###Markdown
If the output HTML file name is not provided, the function will return a string containing the source code of the HTML file.
###Code
html = m.to_html()
# print(html)
###Output
_____no_output_____
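###Markdown
The returned string can then be written to disk yourself or embedded in another page; the file name below is arbitrary.
###Code
# A minimal sketch: write the HTML string to a file manually.
with open("mymap_manual.html", "w") as f:
    f.write(html)
###Output
_____no_output_____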
###Markdown
Use kepler plotting backend
###Code
import leafmap.kepler as leafmap
###Output
_____no_output_____
###Markdown
Create an interactive mapCreate an interactive map. You can specify various parameters to initialize the map, such as `center`, `zoom`, `height`, and `widescreen`.
###Code
m = leafmap.Map(center=[40, -100], zoom=2, height=600, widescreen=False)
m
###Output
_____no_output_____
###Markdown
If you encounter an error saying `Error displaying widget: model not found` when trying to display the map, you can use `m.static_map()` as a workaround until this [kepler.gl bug](https://github.com/keplergl/kepler.gl/issues/1165) has been resolved.
###Code
# m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Add CSVAdd a CSV to the map. If you have a map config file, you can directly apply config to the map.
###Code
m = leafmap.Map(center=[37.7621, -122.4143], zoom=12)
in_csv = (
'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/hex_data.csv'
)
config = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/hex_config.json'
m.add_csv(in_csv, layer_name="hex_data", config=config)
m
m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Save map configSave the map configuration as a JSON file.
###Code
m.save_config("cache/config.json")
###Output
_____no_output_____
###Markdown
Save map as htmlSave the map to an interactive html.
###Code
m.to_html(outfile="cache/kepler_hex.html")
###Output
_____no_output_____
###Markdown
Add GeoJSONAdd a GeoJSON with US state boundaries to the map.
###Code
m = leafmap.Map(center=[50, -110], zoom=2)
polygons = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us_states.json'
m.add_geojson(polygons, layer_name="Countries")
m
m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Add shapefileAdd a shapefile to the map.
###Code
m = leafmap.Map(center=[20, 0], zoom=1)
in_shp = "https://github.com/giswqs/leafmap/raw/master/examples/data/countries.zip"
m.add_shp(in_shp, "Countries")
m
m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Add GeoDataFrameAdd a GeoPandas GeoDataFrame to the map.
###Code
import geopandas as gpd
gdf = gpd.read_file(
"https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/world_cities.geojson"
)
gdf
m = leafmap.Map(center=[20, 0], zoom=1)
m.add_gdf(gdf, "World cities")
m
# m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Use Planet imageryFirst, you need to [sign up](https://www.planet.com/login/?mode=signup) for a Planet account and get an API key. See https://developers.planet.com/quickstart/apis. Uncomment the following line to pass in your API key.
###Code
import os
import leafmap
if os.environ.get("PLANET_API_KEY") is None:
os.environ["PLANET_API_KEY"] = 'your-api-key'
quarterly_tiles = leafmap.planet_quarterly_tiles()
for tile in quarterly_tiles:
print(tile)
monthly_tiles = leafmap.planet_monthly_tiles()
for tile in monthly_tiles:
print(tile)
###Output
_____no_output_____
###Markdown
Add a Planet monthly mosaic by specifying year and month.
###Code
m = leafmap.Map()
m.add_planet_by_month(year=2020, month=8)
m
###Output
_____no_output_____
###Markdown
Add a Planet quarterly mosaic by specifying year and quarter.
###Code
m = leafmap.Map()
m.add_planet_by_quarter(year=2019, quarter=2)
m
###Output
_____no_output_____
###Markdown
Use timeseries inspector
###Code
import os
import leafmap
if os.environ.get("PLANET_API_KEY") is None:
os.environ["PLANET_API_KEY"] = 'your-api-key'
tiles = leafmap.planet_tiles()
leafmap.ts_inspector(tiles, center=[40, -100], zoom=4)
###Output
_____no_output_____
###Markdown
Use time slider Use the time slider to visualize Planet quarterly mosaic.
###Code
import os
import leafmap
if os.environ.get("PLANET_API_KEY") is None:
os.environ["PLANET_API_KEY"] = 'your-api-key'
###Output
_____no_output_____
###Markdown
Specify the map center and zoom level.
###Code
m = leafmap.Map(center=[38.2659, -103.2447], zoom=13)
m
###Output
_____no_output_____
###Markdown
Use the time slider to visualize Planet quarterly mosaic.
###Code
m = leafmap.Map()
layers_dict = leafmap.planet_quarterly_tiles()
m.add_time_slider(layers_dict, time_interval=1)
m
###Output
_____no_output_____
###Markdown
Use the time slider to visualize basemaps.
###Code
m = leafmap.Map()
m.clear_layers()
layers_dict = leafmap.basemap_xyz_tiles()
m.add_time_slider(layers_dict, time_interval=1)
m
###Output
_____no_output_____
###Markdown
[](https://gishub.org/ym-colab)[](https://gishub.org/ym-binder)[](https://gishub.org/ym-binder-nb) **Interactive Mapping and Geospatial Analysis with Leafmap and Jupyter**This notebook was developed for the 90-min [leafmap workshop](https://www.eventbrite.com/e/interactive-mapping-and-geospatial-analysis-tickets-188600217327?keep_tld=1) taking place on November 9, 2021. The workshop is hosted by [YouthMappers](https://www.youthmappers.org).- Author: [Qiusheng Wu](https://github.com/giswqs)- Slides: https://gishub.org/ym - Streamlit web app: https://streamlit.gishub.orgLaunch this notebook to execute code interactively using: - Google Colab: https://gishub.org/ym-colab- Pangeo Binder JupyterLab: https://gishub.org/ym-binder- Pangeo Binder Jupyter Notebook: https://gishub.org/ym-binder-nb Introduction Workshop description[Leafmap](https://leafmap.org) is a Python package for interactive mapping and geospatial analysis with minimal coding in a Jupyter environment. It is built upon a number of open-source packages, such as [folium](https://github.com/python-visualization/folium) and [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) (for creating interactive maps), [WhiteboxTools](https://github.com/jblindsay/whitebox-tools) and [whiteboxgui](https://github.com/giswqs/whiteboxgui) (for analyzing geospatial data), and [ipywidgets](https://github.com/jupyter-widgets/ipywidgets) (for designing interactive graphical user interface). The WhiteboxTools library currently contains 480+ tools for advanced geospatial analysis. Leafmap provides many convenient functions for loading and visualizing geospatial data with only one line of code. Users can also use the interactive user interface to load geospatial data without coding. Anyone with a web browser and Internet connection can use leafmap to perform geospatial analysis and data visualization in the cloud with minimal coding. The topics that will be covered in this workshop include: - A brief introduction to Jupyter and Colab- A brief introduction to leafmap and relevant web resources - Creating interactive maps using multiple plotting backends- Searching and loading basemaps- Loading and visualizing vector/raster data- Using Cloud Optimized GeoTIFF (COG) and SpatialTemporal Asset Catalog (STAC)- Downloading OpenStreetMap data- Loading data from a PostGIS database- Creating custom legends and colorbars- Creating split-panel maps and linked maps- Visualizing Planet global monthly/quarterly mosaic- Designing and publishing interactive web apps- Performing geospatial analysis (e.g., hydrological analysis) using whiteboxguiThis workshop is intended for scientific programmers, data scientists, geospatial analysts, and concerned citizens of Earth. The attendees are expected to have a basic understanding of Python and the Jupyter ecosystem. Familiarity with Earth science and geospatial datasets is useful but not required. More information about leafmap can be found at https://leafmap.org. 
Jupyter keyboard shortcuts- Shift+Enter: run cell, select below- Ctrl+Enter: : run selected cells- Alt+Enter: run cell and insert below- Tab: code completion or indent- Shift+Tab: tooltip- Ctrl+/: comment out code Set up environment Required Python packages:* [leafmap](https://github.com/giswqs/leafmap) - A Python package for interactive mapping and geospatial analysis with minimal coding in a Jupyter environment.* [keplergl](https://docs.kepler.gl/docs/keplergl-jupyter) - A high-performance web-based application for visual exploration of large-scale geolocation data sets.* [pydeck](https://deckgl.readthedocs.io/en/latest) - High-scale spatial rendering in Python, powered by deck.gl.* [geopandas](https://geopandas.org) - An open source project to make working with geospatial data in python easier. * [xarray-leaflet](https://github.com/davidbrochart/xarray_leaflet) - An xarray extension for tiled map plotting. Use Google ColabClick the button below to open this notebook in Google Colab and execute code interactively.[](https://githubtocolab.com/giswqs/leafmap/blob/master/examples/workshops/YouthMappers_2021.ipynb)
###Code
import os
import subprocess
import sys
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
A function for installing Python packages.
###Code
def install(package):
subprocess.check_call([sys.executable, "-m", "pip", "install", package])
###Output
_____no_output_____
###Markdown
Install required Python packages in Google Colab.
###Code
pkgs = ['leafmap', 'geopandas', 'keplergl', 'pydeck', 'xarray_leaflet', 'osmnx', 'pygeos', 'imageio', 'tifffile']
if "google.colab" in sys.modules:
for pkg in pkgs:
install(pkg)
###Output
_____no_output_____
###Markdown
Use Pangeo BinderClick the buttons below to open this notebook in JupyterLab (first button) or Jupyter Notebook (second button) and execute code interactively.[](https://gishub.org/ym-binder)[](https://gishub.org/ym-binder-nb)- JupyterLab: https://gishub.org/ym-binder- Jupyter Notebook: https://gishub.org/ym-binder-nb Use Miniconda/AnacondaIf you have[Anaconda](https://www.anaconda.com/distribution/download-section) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html) installed on your computer, you can install leafmap using the following commands. Leafmap has an optional dependency - [geopandas](https://geopandas.org), which can be challenging to install on some computers, especially Windows. It is highly recommended that you create a fresh conda environment to install geopandas and leafmap. Follow the commands below to set up a conda env and install geopandas, leafmap, pydeck, keplergl, and xarray_leaflet. ```conda create -n geo python=3.8conda activate geoconda install geopandasconda install mamba -c conda-forgemamba install leafmap keplergl pydeck xarray_leaflet -c conda-forgemamba install osmnx pygeos imageio tifffile -c conda-forgejupyter lab```
###Code
try:
import leafmap
except ImportError:
install('leafmap')
###Output
_____no_output_____
###Markdown
Create an interactive map`Leafmap` has five plotting backends: [folium](https://github.com/python-visualization/folium), [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet), [here-map](https://github.com/heremaps/here-map-widget-for-jupyter), [kepler.gl](https://docs.kepler.gl/docs/keplergl-jupyter), and [pydeck](https://deckgl.readthedocs.io). Note that the backends do not offer equal functionality. Some interactive functionality in `ipyleaflet` might not be available in other plotting backends. To use a specific plotting backend, use one of the following:- `import leafmap.leafmap as leafmap`- `import leafmap.foliumap as leafmap`- `import leafmap.heremap as leafmap`- `import leafmap.kepler as leafmap`- `import leafmap.deck as leafmap` Use ipyleaflet
###Code
import leafmap
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Use folium
###Code
import leafmap.foliumap as leafmap
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Use kepler.gl
###Code
import leafmap.kepler as leafmap
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
If you encounter an error saying `Error displaying widget: model not found` when trying to display the map, you can use `m.static_map()` as a workaround until this [kepler.gl bug](https://github.com/keplergl/kepler.gl/issues/1165) has been resolved.
###Code
# m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Use pydeck
###Code
import leafmap.deck as leafmap
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Customize the default map Specify map center and zoom level
###Code
import leafmap
m = leafmap.Map(center=(40, -100), zoom=4) #center=[lat, lon]
m
m = leafmap.Map(center=(51.5, -0.15), zoom=17)
m
###Output
_____no_output_____
###Markdown
Change map size
###Code
m = leafmap.Map(height="400px", width="800px")
m
###Output
_____no_output_____
###Markdown
Set control visibilityWhen creating a map, set the following controls to either `True` or `False` as appropriate.* attribution_control* draw_control* fullscreen_control* layers_control* measure_control* scale_control* toolbar_control
###Code
m = leafmap.Map(draw_control=False, measure_control=False, fullscreen_control=False, attribution_control=False)
m
###Output
_____no_output_____
###Markdown
Remove all controls from the map.
###Code
m = leafmap.Map()
m.clear_controls()
m
###Output
_____no_output_____
###Markdown
Change basemapsSpecify the Google basemap to use; it can be one of ["ROADMAP", "TERRAIN", "SATELLITE", "HYBRID"].
###Code
import leafmap
m = leafmap.Map(google_map="TERRAIN") # HYBRID, ROADMAP, SATELLITE, TERRAIN
m
###Output
_____no_output_____
###Markdown
Add a basemap using the `add_basemap()` function.
###Code
m = leafmap.Map()
m.add_basemap("Esri.NatGeoWorldMap")
m
###Output
_____no_output_____
###Markdown
Print out the list of available basemaps.
###Code
for basemap in leafmap.leafmap_basemaps:
print(basemap)
###Output
_____no_output_____
###Markdown

###Code
m = leafmap.Map()
m.add_tile_layer(url="https://mt1.google.com/vt/lyrs=y&x={x}&y={y}&z={z}", name="Google Satellite", attribution="Google")
m
###Output
_____no_output_____
###Markdown
Add tile layers Add XYZ tile layer
###Code
import leafmap
m = leafmap.Map()
m.add_tile_layer(url="https://mt1.google.com/vt/lyrs=y&x={x}&y={y}&z={z}", name="Google Satellite", attribution="Google")
m
###Output
_____no_output_____
###Markdown
Add WMS tile layerMore WMS basemaps can be found at the following websites:- USGS National Map: https://viewer.nationalmap.gov/services- MRLC NLCD Land Cover data: https://www.mrlc.gov/data-services-page- FWS NWI Wetlands data: https://www.fws.gov/wetlands/Data/Web-Map-Services.html
###Code
m = leafmap.Map()
naip_url = 'https://services.nationalmap.gov/arcgis/services/USGSNAIPImagery/ImageServer/WMSServer?'
m.add_wms_layer(url=naip_url, layers='0', name='NAIP Imagery', format='image/png', shown=True)
m
###Output
_____no_output_____
###Markdown
Add xyzservices providerAdd a layer from [xyzservices](https://github.com/geopandas/xyzservices) provider object.
###Code
import os
import xyzservices.providers as xyz
basemap = xyz.OpenTopoMap
basemap
m = leafmap.Map()
m.add_basemap(basemap)
m
###Output
_____no_output_____
###Markdown
Add COG/STAC layersA Cloud Optimized GeoTIFF (COG) is a regular GeoTIFF file, aimed at being hosted on an HTTP file server, with an internal organization that enables more efficient workflows on the cloud. It does this by leveraging the ability of clients issuing HTTP GET range requests to ask for just the parts of a file they need. More information about COG can be found at https://www.cogeo.org. Some publicly available Cloud Optimized GeoTIFFs:* https://stacindex.org/* https://cloud.google.com/storage/docs/public-datasets/landsat* https://www.digitalglobe.com/ecosystem/open-data* https://earthexplorer.usgs.gov/For this demo, we will use data from https://www.maxar.com/open-data/california-colorado-fires for mapping California and Colorado fires. A list of COGs can be found [here](https://github.com/giswqs/leafmap/blob/master/examples/data/cog_files.txt). Add COG layer
###Code
import leafmap
m = leafmap.Map()
url = 'https://opendata.digitalglobe.com/events/california-fire-2020/pre-event/2018-02-16/pine-gulch-fire20/1030010076004E00.tif'
url2 = 'https://opendata.digitalglobe.com/events/california-fire-2020/post-event/2020-08-14/pine-gulch-fire20/10300100AAC8DD00.tif'
m.add_cog_layer(url, name="Fire (pre-event)")
m.add_cog_layer(url2, name="Fire (post-event)")
m
###Output
_____no_output_____
###Markdown
Add STAC layerThe SpatioTemporal Asset Catalog (STAC) specification provides a common language to describe a range of geospatial information, so it can more easily be indexed and discovered. A 'spatiotemporal asset' is any file that represents information about the earth captured in a certain space and time. The initial focus is primarily remotely-sensed imagery (from satellites, but also planes, drones, balloons, etc), but the core is designed to be extensible to SAR, full motion video, point clouds, hyperspectral, LiDAR and derived data like NDVI, Digital Elevation Models, mosaics, etc. More information about STAC can be found at https://stacspec.org/Some publicly available SpatioTemporal Asset Catalog (STAC):* https://stacindex.orgFor this demo, we will use STAC assets from https://stacindex.org/catalogs/spot-orthoimages-canada-2005/?t=catalogs
###Code
m = leafmap.Map()
url = 'https://canada-spot-ortho.s3.amazonaws.com/canada_spot_orthoimages/canada_spot5_orthoimages/S5_2007/S5_11055_6057_20070622/S5_11055_6057_20070622.json'
m.add_stac_layer(url, bands=['B3', 'B2', 'B1'], name='False color')
m
###Output
_____no_output_____
###Markdown
Add local raster datasetsThe `add_raster` function relies on the `xarray_leaflet` package and is only available for the ipyleaflet plotting backend. Therefore, Google Colab is not supported. Note that `xarray_leaflet` does not work properly on Windows ([source](https://github.com/davidbrochart/xarray_leaflet/issues/30)).
###Code
import os
import leafmap
###Output
_____no_output_____
###Markdown
Download sample raster datasets. More datasets can be downloaded from https://viewer.nationalmap.gov/basic/
###Code
out_dir = os.getcwd()
landsat = os.path.join(out_dir, 'landsat.tif')
dem = os.path.join(out_dir, 'dem.tif')
###Output
_____no_output_____
###Markdown
Download a small Landsat imagery.
###Code
if not os.path.exists(landsat):
landsat_url = 'https://drive.google.com/file/d/1EV38RjNxdwEozjc9m0FcO3LFgAoAX1Uw/view?usp=sharing'
leafmap.download_from_gdrive(landsat_url, 'landsat.tif', out_dir, unzip=False)
###Output
_____no_output_____
###Markdown
Download a small DEM dataset.
###Code
if not os.path.exists(dem):
dem_url = 'https://drive.google.com/file/d/1vRkAWQYsLWCi6vcTMk8vLxoXMFbdMFn8/view?usp=sharing'
leafmap.download_from_gdrive(dem_url, 'dem.tif', out_dir, unzip=False)
m = leafmap.Map()
###Output
_____no_output_____
###Markdown
Add local raster datasets to the mapMore colormaps can be found at https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html
###Code
m.add_raster(dem, colormap='terrain', layer_name='DEM')
m.add_raster(landsat, bands=[5, 4, 3], layer_name='Landsat')
m
###Output
_____no_output_____
###Markdown
Add legend Add built-in legend
###Code
import leafmap
###Output
_____no_output_____
###Markdown
List all available built-in legends.
###Code
legends = leafmap.builtin_legends
for legend in legends:
print(legend)
###Output
_____no_output_____
###Markdown
Add a WMS layer and built-in legend to the map.
###Code
m = leafmap.Map()
url = "https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2019_Land_Cover_L48/wms?"
m.add_wms_layer(url, layers="NLCD_2019_Land_Cover_L48", name="NLCD 2019 CONUS Land Cover",format="image/png", transparent=True)
m.add_legend(builtin_legend='NLCD')
m
###Output
_____no_output_____
###Markdown
Add U.S. National Wetlands Inventory (NWI). More info at https://www.fws.gov/wetlands.
###Code
m = leafmap.Map(google_map="HYBRID")
url1 = "https://www.fws.gov/wetlands/arcgis/services/Wetlands/MapServer/WMSServer?"
m.add_wms_layer(url1, layers="1",format='image/png', transparent=True, name="NWI Wetlands Vector")
url2 = "https://www.fws.gov/wetlands/arcgis/services/Wetlands_Raster/ImageServer/WMSServer?"
m.add_wms_layer(url2, layers="0",format='image/png', transparent=True, name="NWI Wetlands Raster")
m.add_legend(builtin_legend="NWI")
m
###Output
_____no_output_____
###Markdown
Add custom legendThere are two ways you can add custom legends:1. Define legend labels and colors2. Define legend dictionaryDefine legend keys and colors.
###Code
m = leafmap.Map()
labels = ['One', 'Two', 'Three', 'Four', 'etc']
#color can be defined using either hex code or RGB (0-255, 0-255, 0-255)
colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']
# colors = [(255, 0, 0), (127, 255, 0), (127, 18, 25), (36, 70, 180), (96, 68, 123)]
m.add_legend(title='Legend', labels=labels, colors=colors)
m
###Output
_____no_output_____
###Markdown
Define a legend dictionary.
###Code
m = leafmap.Map()
url = "https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2019_Land_Cover_L48/wms?"
m.add_wms_layer(url, layers="NLCD_2019_Land_Cover_L48", name="NLCD 2019 CONUS Land Cover", format="image/png", transparent=True)
legend_dict = {
'11 Open Water': '466b9f',
'12 Perennial Ice/Snow': 'd1def8',
'21 Developed, Open Space': 'dec5c5',
'22 Developed, Low Intensity': 'd99282',
'23 Developed, Medium Intensity': 'eb0000',
'24 Developed High Intensity': 'ab0000',
'31 Barren Land (Rock/Sand/Clay)': 'b3ac9f',
'41 Deciduous Forest': '68ab5f',
'42 Evergreen Forest': '1c5f2c',
'43 Mixed Forest': 'b5c58f',
'51 Dwarf Scrub': 'af963c',
'52 Shrub/Scrub': 'ccb879',
'71 Grassland/Herbaceous': 'dfdfc2',
'72 Sedge/Herbaceous': 'd1d182',
'73 Lichens': 'a3cc51',
'74 Moss': '82ba9e',
'81 Pasture/Hay': 'dcd939',
'82 Cultivated Crops': 'ab6c28',
'90 Woody Wetlands': 'b8d9eb',
'95 Emergent Herbaceous Wetlands': '6c9fb8'
}
m.add_legend(title="NLCD Land Cover Classification", legend_dict=legend_dict)
m
###Output
_____no_output_____
###Markdown
Add colormapThe colormap functionality requires the ipyleaflet plotting backend. Folium is not supported.
###Code
import leafmap
import leafmap.colormaps as cm
###Output
_____no_output_____
###Markdown
Common colormapsColor palette for DEM data.
###Code
cm.palettes.dem
###Output
_____no_output_____
###Markdown
Show the DEM palette.
###Code
cm.plot_colormap(colors=cm.palettes.dem, axis_off=True)
###Output
_____no_output_____
###Markdown
Color palette for NDVI data.
###Code
cm.palettes.ndvi
###Output
_____no_output_____
###Markdown
Show the NDVI palette.
###Code
cm.plot_colormap(colors=cm.palettes.ndvi)
###Output
_____no_output_____
###Markdown
Custom colormapsSpecify the number of classes for a palette.
###Code
cm.get_palette('terrain', n_class=8)
###Output
_____no_output_____
###Markdown
Show the terrain palette with 8 classes.
###Code
cm.plot_colormap(colors=cm.get_palette('terrain', n_class=8))
###Output
_____no_output_____
###Markdown
Create a palette with custom colors, label, and font size.
###Code
cm.plot_colormap(colors=["red", "green", "blue"], label="Temperature", font_size=12)
###Output
_____no_output_____
###Markdown
Create a discrete color palette.
###Code
cm.plot_colormap(colors=["red", "green", "blue"], discrete=True, label="Temperature", font_size=12)
###Output
_____no_output_____
###Markdown
Specify the width and height for the palette.
###Code
cm.plot_colormap('terrain', label="Elevation", width=8.0, height=0.4, orientation='horizontal',vmin=0, vmax=1000)
###Output
_____no_output_____
###Markdown
Change the orientation of the colormap to be vertical.
###Code
cm.plot_colormap('terrain', label="Elevation", width=0.4, height=4, orientation='vertical',vmin=0, vmax=1000)
###Output
_____no_output_____
###Markdown
Horizontal colormapAdd a horizontal colorbar to an interactive map.
###Code
m = leafmap.Map()
m.add_basemap("OpenTopoMap")
m.add_colormap('terrain', label="Elevation", width=8.0, height=0.4, orientation='horizontal',vmin=0, vmax=4000)
m
###Output
_____no_output_____
###Markdown
Vertical colormapAdd a vertical colorbar to an interactive map.
###Code
m = leafmap.Map()
m.add_basemap("OpenTopoMap")
m.add_colormap('terrain', label="Elevation", width=0.4, height=4, orientation='vertical',vmin=0, vmax=4000)
m
###Output
_____no_output_____
###Markdown
List of available colormaps
###Code
cm.plot_colormaps(width=12, height=0.4)
###Output
_____no_output_____
###Markdown
Add vector datasets Add CSVRead a CSV as a Pandas DataFrame.
###Code
import os
import leafmap
in_csv = 'https://raw.githubusercontent.com/giswqs/data/main/world/world_cities.csv'
df = leafmap.csv_to_pandas(in_csv)
df
###Output
_____no_output_____
###Markdown
Create a point layer from a CSV file containing lat/long information.
###Code
m = leafmap.Map()
m.add_xy_data(in_csv, x="longitude", y="latitude", layer_name="World Cities")
m
###Output
_____no_output_____
###Markdown
Set the output directory.
###Code
out_dir = os.getcwd()
out_shp = os.path.join(out_dir, 'world_cities.shp')
###Output
_____no_output_____
###Markdown
Convert a CSV file containing lat/long information to a shapefile.
###Code
leafmap.csv_to_shp(in_csv, out_shp)
###Output
_____no_output_____
###Markdown
Convert a CSV file containing lat/long information to a GeoJSON.
###Code
out_geojson = os.path.join(out_dir, 'world_cities.geojson')
leafmap.csv_to_geojson(in_csv, out_geojson)
###Output
_____no_output_____
###Markdown
Convert a CSV file containing lat/long information to a GeoPandas GeoDataFrame.
###Code
gdf = leafmap.csv_to_gdf(in_csv)
gdf
###Output
_____no_output_____
###Markdown
Add GeoJSONAdd a GeoJSON to the map.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
in_geojson = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/cable-geo.geojson'
m.add_geojson(in_geojson, layer_name="Cable lines", info_mode='on_hover')
m
###Output
_____no_output_____
###Markdown
Add a GeoJSON with random filled color to the map.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
m.add_geojson(url, layer_name="Countries", fill_colors=['red', 'yellow', 'green', 'orange'])
m
###Output
_____no_output_____
###Markdown
Use the `style_callback` function for assigning a random color to each polygon.
###Code
import random
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
def random_color(feature):
return {
'color': 'black',
'fillColor': random.choice(['red', 'yellow', 'green', 'orange']),
}
m.add_geojson(url, layer_name="Countries", style_callback=random_color)
m
###Output
_____no_output_____
###Markdown
Use custom `style` and `hover_style` functions.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
style = {
"stroke": True,
"color": "#0000ff",
"weight": 2,
"opacity": 1,
"fill": True,
"fillColor": "#0000ff",
"fillOpacity": 0.1,
}
hover_style = {"fillOpacity": 0.7}
m.add_geojson(url, layer_name="Countries", style=style, hover_style=hover_style)
m
###Output
_____no_output_____
###Markdown
Add shapefile
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
in_shp = 'https://github.com/giswqs/leafmap/raw/master/examples/data/countries.zip'
m.add_shp(in_shp, layer_name="Countries")
m
###Output
_____no_output_____
###Markdown
Add KML
###Code
import leafmap
m = leafmap.Map()
in_kml = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us-states.kml'
m.add_kml(in_kml,layer_name="US States KML")
m
###Output
_____no_output_____
###Markdown
Add GeoDataFrame
###Code
import geopandas as gpd
m = leafmap.Map()
gdf = gpd.read_file("https://github.com/giswqs/leafmap/raw/master/examples/data/cable-geo.geojson")
m.add_gdf(gdf, layer_name="Cable lines")
m
###Output
_____no_output_____
###Markdown
Read the GeoPandas sample dataset as a GeoDataFrame.
###Code
path_to_data = gpd.datasets.get_path("nybb")
gdf = gpd.read_file(path_to_data)
gdf
m = leafmap.Map()
m.add_gdf(gdf, layer_name="New York boroughs", fill_colors=["red", "green", "blue"])
m
###Output
_____no_output_____
###Markdown
Add point layerAdd a point layer using the interactive GUI.
###Code
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Add a point layer programmatically.
###Code
m = leafmap.Map()
url= "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us_cities.geojson"
m.add_point_layer(url, popup=["name", "pop_max"], layer_name="US Cities")
m
###Output
_____no_output_____
###Markdown
Add vectorThe `add_vector` function supports any vector data format supported by GeoPandas.
###Code
m = leafmap.Map(center=[0, 0], zoom=2)
url = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/countries.geojson"
m.add_vector(url, layer_name="Countries", fill_colors=['red', 'yellow', 'green', 'orange'])
m
###Output
_____no_output_____
###Markdown
Download OSM data OSM from geocodeAdd OSM data of place(s) by name or ID to the map. Note that the leafmap custom layer control does not support GeoJSON, so we need to use the ipyleaflet built-in layer control.
###Code
import leafmap
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_geocode("New York City", layer_name='NYC')
m
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_geocode("Chicago, Illinois", layer_name='Chicago, IL')
m
###Output
_____no_output_____
###Markdown
OSM from placeAdd OSM entities within boundaries of geocodable place(s) to the map.
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
place = "Bunker Hill, Los Angeles, California"
tags = {"building": True}
m.add_osm_from_place(place, tags, layer_name="Los Angeles, CA")
m
###Output
_____no_output_____
###Markdown
Show OSM feature tags. https://wiki.openstreetmap.org/wiki/Map_features
###Code
# leafmap.osm_tags_list()
###Output
_____no_output_____
###Markdown
OSM from address
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_address(
address="New York City",
tags={"amenity": "bar"},
dist=1500,
layer_name="NYC bars")
m
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.add_osm_from_address(
address="New York City",
tags={
"landuse": ["retail", "commercial"],
"building": True
},
dist=1000, layer_name="NYC buildings")
m
###Output
_____no_output_____
###Markdown
OSM from bbox
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
north, south, east, west = 40.7551, 40.7454, -73.9738, -73.9965
m.add_osm_from_bbox(north, south, east, west, tags={"amenity": "bar"}, layer_name="NYC bars")
m
###Output
_____no_output_____
###Markdown
OSM from pointAdd OSM entities within some distance N, S, E, W of a point to the map.
###Code
m = leafmap.Map(center=[46.7808, -96.0156], zoom=12, toolbar_control=False, layers_control=True)
m.add_osm_from_point(center_point=(46.7808, -96.0156), tags={"natural": "water"}, dist=10000, layer_name="Lakes")
m
m = leafmap.Map(center=[39.9170, 116.3908], zoom=15, toolbar_control=False, layers_control=True)
m.add_osm_from_point(center_point=(39.9170, 116.3908), tags={"building": True, "natural": "water"}, dist=1000, layer_name="Beijing")
m
###Output
_____no_output_____
###Markdown
OSM from viewAdd OSM entities within the current map view to the map.
###Code
m = leafmap.Map(toolbar_control=False, layers_control=True)
m.set_center(-73.9854, 40.7500, 16)
m
m.add_osm_from_view(tags={"amenity": "bar", "building": True}, layer_name="New York")
###Output
_____no_output_____
###Markdown
Create a GeoPandas GeoDataFrame from place.
###Code
gdf = leafmap.osm_gdf_from_place("New York City", tags={"amenity": "bar"})
gdf
###Output
_____no_output_____
###Markdown
Use WhiteboxToolsUse the built-in toolbox to perform geospatial analysis. For example, you can perform depression filling using the sample DEM dataset downloaded below.
###Code
import os
import leafmap
import urllib.request
###Output
_____no_output_____
###Markdown
Download a sample DEM dataset.
###Code
url = 'https://github.com/giswqs/whitebox-python/raw/master/whitebox/testdata/DEM.tif'
urllib.request.urlretrieve(url, "dem.tif")
m = leafmap.Map()
m
###Output
_____no_output_____
###Markdown
Display the toolbox using the default mode.
###Code
leafmap.whiteboxgui()
###Output
_____no_output_____
###Markdown
Display the toolbox using the collapsible tree mode. Note that the tree mode is not supported in Google Colab.
###Code
leafmap.whiteboxgui(tree=True)
###Output
_____no_output_____
###Markdown
Perform geospatial analysis using the [whitebox](https://github.com/giswqs/whitebox-python) package.
###Code
import os
import whitebox
wbt = whitebox.WhiteboxTools()
wbt.verbose = True
data_dir = os.getcwd()
wbt.set_working_dir(data_dir)
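# Pipeline (descriptive note): smooth the DEM, breach depressions to remove sinks,
# then compute D-infinity flow accumulation on the conditioned surface.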
wbt.feature_preserving_smoothing("dem.tif", "smoothed.tif", filter=9)
wbt.breach_depressions("smoothed.tif", "breached.tif")
wbt.d_inf_flow_accumulation("breached.tif", "flow_accum.tif")
import matplotlib.pyplot as plt
import imageio
%matplotlib inline
original = imageio.imread(os.path.join(data_dir, 'dem.tif'))
smoothed = imageio.imread(os.path.join(data_dir, 'smoothed.tif'))
breached = imageio.imread(os.path.join(data_dir, 'breached.tif'))
flow_accum = imageio.imread(os.path.join(data_dir, 'flow_accum.tif'))
fig=plt.figure(figsize=(16,11))
ax1 = fig.add_subplot(2, 2, 1)
ax1.set_title('Original DEM')
plt.imshow(original)
ax2 = fig.add_subplot(2, 2, 2)
ax2.set_title('Smoothed DEM')
plt.imshow(smoothed)
ax3 = fig.add_subplot(2, 2, 3)
ax3.set_title('Breached DEM')
plt.imshow(breached)
ax4 = fig.add_subplot(2, 2, 4)
ax4.set_title('Flow Accumulation')
plt.imshow(flow_accum)
plt.show()
###Output
_____no_output_____
###Markdown
Create basemap gallery
###Code
import leafmap
for basemap in leafmap.leafmap_basemaps:
print(basemap)
layers = list(leafmap.leafmap_basemaps.keys())[17:117]
leafmap.linked_maps(rows=20, cols=5, height="200px", layers=layers, labels=layers)
###Output
_____no_output_____
###Markdown
Create linked map
###Code
import leafmap
leafmap.leafmap_basemaps.keys()
layers = ['ROADMAP', 'HYBRID']
leafmap.linked_maps(rows=1, cols=2, height='400px', layers=layers)
layers = ['Stamen.Terrain', 'OpenTopoMap']
leafmap.linked_maps(rows=1, cols=2, height='400px', layers=layers)
###Output
_____no_output_____
###Markdown
Create a 2 * 2 linked map to visualize land cover change. Specify the `center` and `zoom` parameters to change the default map center and zoom level.
###Code
layers = [str(f"NLCD {year} CONUS Land Cover") for year in [2001, 2006, 2011, 2016]]
labels = [str(f"NLCD {year}") for year in [2001, 2006, 2011, 2016]]
leafmap.linked_maps(rows=2, cols=2, height='300px', layers=layers, labels=labels, center=[36.1, -115.2], zoom=9)
###Output
_____no_output_____
###Markdown
Create split-panel mapCreate a split-panel map by specifying the `left_layer` and `right_layer`, which can be chosen from the basemap names, or any custom XYZ tile layer.
###Code
import leafmap
leafmap.split_map(left_layer="ROADMAP", right_layer="HYBRID")
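# Hedged variant (not from the original notebook): per the text above, the left/right layers
# can also be custom XYZ tile URLs instead of basemap names. The URL below is illustrative only.
# leafmap.split_map(
#     left_layer="https://tile.openstreetmap.org/{z}/{x}/{y}.png",
#     right_layer="HYBRID",
# )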
###Output
_____no_output_____
###Markdown
Hide the zoom control from the map.
###Code
leafmap.split_map(left_layer="Esri.WorldTopoMap", right_layer="OpenTopoMap", zoom_control=False)
###Output
_____no_output_____
###Markdown
Add labels to the map and change the default map center and zoom level.
###Code
leafmap.split_map(left_layer="NLCD 2001 CONUS Land Cover", right_layer="NLCD 2019 CONUS Land Cover",
left_label = "2001", right_label="2019", label_position="bottom", center=[36.1, -114.9], zoom=10)
###Output
_____no_output_____
###Markdown
Create heat mapSpecify the file path to the CSV. It can either be a file locally or on the Internet.
###Code
import leafmap
m = leafmap.Map(layers_control=True)
in_csv = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/world_cities.csv"
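# The CSV can also be a local file, e.g. in_csv = "data/world_cities.csv" (hypothetical path).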
m.add_heatmap(in_csv, latitude="latitude", longitude='longitude', value="pop_max", name="Heat map", radius=20)
m
###Output
_____no_output_____
###Markdown
Use the folium plotting backend.
###Code
from leafmap import foliumap
m = foliumap.Map()
in_csv = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/world_cities.csv"
m.add_heatmap(in_csv, latitude="latitude", longitude='longitude', value="pop_max", name="Heat map", radius=20)
colors = ['blue', 'lime', 'red']
m.add_colorbar(colors=colors, vmin=0, vmax=10000)
m.add_title("World Population Heat Map", font_size="20px", align="center")
m
###Output
_____no_output_____
###Markdown
Save map to HTML
###Code
import leafmap
m = leafmap.Map()
m.add_basemap("Esri.NatGeoWorldMap")
m
###Output
_____no_output_____
###Markdown
Specify the output HTML file name to save the map as a web page.
###Code
m.to_html("mymap.html")
###Output
_____no_output_____
###Markdown
If the output HTML file name is not provided, the function will return a string containing the source code of the HTML file.
###Code
html = m.to_html()
# print(html)
###Output
_____no_output_____
###Markdown
Use kepler plotting backend
###Code
import leafmap.kepler as leafmap
###Output
_____no_output_____
###Markdown
Create an interactive mapCreate an interactive map. You can specify various parameters to initialize the map, such as `center`, `zoom`, `height`, and `widescreen`.
###Code
m = leafmap.Map(center=[40, -100], zoom=2, height=600, widescreen=False)
m
###Output
_____no_output_____
###Markdown
If you encounter an error saying `Error displaying widget: model not found` when trying to display the map, you can use `m.static_map()` as a workaround until this [kepler.gl bug](https://github.com/keplergl/kepler.gl/issues/1165) has been resolved.
###Code
# m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Add CSVAdd a CSV to the map. If you have a map config file, you can directly apply config to the map.
###Code
m = leafmap.Map(center=[37.7621, -122.4143], zoom=12)
in_csv = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/hex_data.csv'
config = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/hex_config.json'
m.add_csv(in_csv, layer_name="hex_data", config=config)
m
m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Save map configSave the map configuration as a JSON file.
###Code
m.save_config("cache/config.json")
###Output
_____no_output_____
###Markdown
Save map as HTMLSave the map to an interactive HTML file.
###Code
m.to_html(outfile="cache/kepler_hex.html")
###Output
_____no_output_____
###Markdown
Add GeoJSONAdd a GeoJSON with US state boundaries to the map.
###Code
m = leafmap.Map(center=[50, -110], zoom=2)
polygons = 'https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us-states.json'
m.add_geojson(polygons, layer_name="Countries")
m
m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Add shapefileAdd a shapefile to the map.
###Code
m = leafmap.Map(center=[20, 0], zoom=1)
in_shp = "https://github.com/giswqs/leafmap/raw/master/examples/data/countries.zip"
m.add_shp(in_shp, "Countries")
m
m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Add GeoDataFrameAdd a GeoPandas GeoDataFrame to the map.
###Code
import geopandas as gpd
gdf = gpd.read_file("https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/world_cities.geojson")
gdf
m = leafmap.Map(center=[20, 0], zoom=1)
m.add_gdf(gdf, "World cities")
m
# m.static_map(width=1280, height=600)
###Output
_____no_output_____
###Markdown
Use planet imageryFirst, you need to [sign up](https://www.planet.com/login/?mode=signup) for a Planet account and get an API key. See https://developers.planet.com/quickstart/apis. Set the `PLANET_API_KEY` environment variable, or replace `'your-api-key'` in the cell below with your own key.
###Code
import os
import leafmap
if os.environ.get("PLANET_API_KEY") is None:
os.environ["PLANET_API_KEY"] = 'your-api-key'
quarterly_tiles = leafmap.planet_quarterly_tiles()
for tile in quarterly_tiles:
print(tile)
monthly_tiles = leafmap.planet_monthly_tiles()
for tile in monthly_tiles:
print(tile)
###Output
_____no_output_____
###Markdown
Add a Planet monthly mosaic by specifying year and month.
###Code
m = leafmap.Map()
m.add_planet_by_month(year=2020, month=8)
m
###Output
_____no_output_____
###Markdown
Add a Planet quarterly mosaic by specifying year and quarter.
###Code
m = leafmap.Map()
m.add_planet_by_quarter(year=2019, quarter=2)
m
###Output
_____no_output_____
###Markdown
Use timeseries inspector
###Code
import os
import leafmap
if os.environ.get("PLANET_API_KEY") is None:
os.environ["PLANET_API_KEY"] = 'your-api-key'
tiles = leafmap.planet_tiles()
leafmap.ts_inspector(tiles, center=[40, -100], zoom=4)
###Output
_____no_output_____
###Markdown
Use time slider Use the time slider to visualize Planet quarterly mosaic.
###Code
import os
import leafmap
if os.environ.get("PLANET_API_KEY") is None:
os.environ["PLANET_API_KEY"] = 'your-api-key'
###Output
_____no_output_____
###Markdown
Specify the map center and zoom level.
###Code
m = leafmap.Map(center=[38.2659, -103.2447], zoom=13)
m
###Output
_____no_output_____
###Markdown
Use the time slider to visualize Planet quarterly mosaic.
###Code
m = leafmap.Map()
layers_dict = leafmap.planet_quarterly_tiles()
m.add_time_slider(layers_dict, time_interval=1)
m
###Output
_____no_output_____
###Markdown
Use the time slider to visualize basemaps.
###Code
m = leafmap.Map()
m.clear_layers()
layers_dict = leafmap.basemap_xyz_tiles()
m.add_time_slider(layers_dict, time_interval=1)
m
###Output
_____no_output_____ |
carracing/vae_test.ipynb | ###Markdown
Test VAE model on random frame of random file in `record`
###Code
import numpy as np
import os
import json
import tensorflow as tf
import random
from vae.vae import ConvVAE, reset_graph
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, edgeitems=6, linewidth=100, suppress=True)
os.environ["CUDA_VISIBLE_DEVICES"]="-1" # disable GPU
DATA_DIR = "record"
model_path_name = "vae"
z_size=32
filelist = os.listdir(DATA_DIR)
obs = np.load(os.path.join(DATA_DIR, random.choice(filelist)))["obs"]
obs = obs.astype(np.float32)/255.0
obs.shape
frame = random.choice(obs).reshape(1, 64, 64, 3)
vae = ConvVAE(z_size=z_size,
batch_size=1,
is_training=False,
reuse=False,
gpu_mode=False)
vae.load_json(os.path.join(model_path_name, 'vae.json'))
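# Round trip below: encode the 64x64x3 frame into a z_size-dimensional latent vector,
# then decode that vector back into a reconstructed frame.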
# show recorded frame that will be fed into the input of VAE
plt.imshow(frame[0])
plt.show()
batch_z = vae.encode(frame)
print(batch_z[0]) # print out sampled z
reconstruct = vae.decode(batch_z)
# show reconstruction
plt.imshow(reconstruct[0])
plt.show()
###Output
_____no_output_____ |
src/orthologs.ipynb | ###Markdown
Retrieve HS genes
###Code
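# Imports assumed for this notebook (they are not present in the cells shown here but are
# required by the code below; the exact p-value thresholds are an assumption).
import string
import xml.etree.ElementTree as ET

import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns
from statannot import add_stat_annotation
from tqdm import tqdm

tqdm.pandas()  # enables DataFrame.progress_apply used below

# Thresholds passed to add_stat_annotation via pvalue_thresholds (assumed values)
pvalues_cutoff = [[1e-4, "****"], [1e-3, "***"], [1e-2, "**"], [0.05, "*"], [1, "ns"]]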
genes = pd.read_parquet('/gstock/GeneIso/V2/Genes.parquet')
###Output
_____no_output_____
###Markdown
Identify Orthologs in MM
###Code
tree = ET.parse('/gstock/GeneIso/data/External/ENSEMBL/mm_ortho.xml')
root = tree.getroot()
cols = [sub_child.attrib['name'] for child in root for sub_child in child if sub_child.tag == 'Attribute']
# Retrieve MM orthologs from ENSEMBL BIOMART
mm_ortho = pd.read_csv('/gstock/GeneIso/data/External/ENSEMBL/mmusculus_orthologs.txt.gz', compression='gzip', sep='\t', names=cols)
mm_ortho.columns.tolist()
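# NOTE (assumption): `tmp` is not defined in the cells shown above. Judging from the columns
# used below, it is the HS gene table joined to the mouse ortholog annotations and mouse gene
# coordinates -- the same join that is built explicitly as `tmp_merge` further down.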
tmp['MM_Gene_Length'] = tmp['end_position'] - tmp['start_position']
tmp[['Miso_siso', 'GeneID', 'Gene name', 'Gene_length', 'mmusculus_homolog_associated_gene_name', 'mmusculus_homolog_ensembl_gene', 'start_position', 'end_position', 'MM_Gene_Length']]
###Output
_____no_output_____
###Markdown
HS & MM Gene length
###Code
tmp[['Miso_siso', 'GeneID', 'mmusculus_homolog_ensembl_gene', 'Gene_length','MM_Gene_Length']].drop_duplicates().groupby('Miso_siso')[['Gene_length','MM_Gene_Length']].describe().round(0).astype(int)
###Output
_____no_output_____
###Markdown
Count transcript / gene in MM
###Code
transcript_count_mm = tmp[['mmusculus_homolog_ensembl_gene', 'ensembl_transcript_id']].drop_duplicates().groupby(['mmusculus_homolog_ensembl_gene'])['ensembl_transcript_id'].count().rename('transcript_count_mm').reset_index()
transcript_count_mm.loc[transcript_count_mm['transcript_count_mm'] > 1, 'Miso_siso_mm'] = 'Miso'
transcript_count_mm.loc[transcript_count_mm['transcript_count_mm'] == 1, 'Miso_siso_mm'] = 'Siso'
transcript_count_mm
###Output
_____no_output_____
###Markdown
MISOG / SISOG count
###Code
tmp_merge = pd.merge(tmp[['GeneID', 'Miso_siso', 'mmusculus_homolog_ensembl_gene', 'transcript_count_x']], transcript_count_mm, on='mmusculus_homolog_ensembl_gene').drop_duplicates()
tmp_merge.groupby('Miso_siso')['Miso_siso_mm'].value_counts()
tree = ET.parse('/gstock/GeneIso/data/External/ENSEMBL/mm_genes.xml')
root = tree.getroot()
# for child in root:
# for sub_child in child:
# print(sub_child.tag, sub_child.attrib)
cols = [sub_child.attrib['name'] for child in root for sub_child in child if sub_child.tag == 'Attribute']
cols
###Output
_____no_output_____
###Markdown
Load MM genes + mRNA BIOMART file
###Code
tree = ET.parse('/gstock/GeneIso/data/External/ENSEMBL/mm_genes.xml')
root = tree.getroot()
cols = [sub_child.attrib['name'] for child in root for sub_child in child if sub_child.tag == 'Attribute']
mm_genes_mrna = pd.read_csv('/gstock/GeneIso/data/External/ENSEMBL/mmusculus_genes_mrnas.txt.gz', compression='gzip', sep='\t', names=cols)
mm_genes_mrna
###Output
/home/weber/.conda/envs/ExoCarto/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3166: DtypeWarning: Columns (4) have mixed types.Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Load MM exons file from ENSEMBL BIOMART
###Code
tree = ET.parse('/gstock/GeneIso/data/External/ENSEMBL/mm_exons.xml')
root = tree.getroot()
cols_exons = [sub_child.attrib['name'] for child in root for sub_child in child if sub_child.tag == 'Attribute']
# for child in root:
# for sub_child in child:
# print(sub_child.tag, sub_child.attrib)
mm_exons = pd.read_csv('/gstock/GeneIso/data/External/ENSEMBL/mmusculus_exons.txt.gz', compression='gzip', sep='\t', names=cols_exons)
mm_exons
# cols
###Output
_____no_output_____
###Markdown
Select closest MM ortholog for each HS gene
###Code
def select_ortho(df):
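# Keep only high-confidence orthologs, then retain the mouse ortholog(s) maximizing
# %identity in both directions, the GOC score and the WGA coverage; remaining ties are
# broken by preferring the ortholog whose gene symbol matches the human gene symbol.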
# print(df[['mmusculus_homolog_associated_gene_name', 'ensembl_gene_id', 'mmusculus_homolog_ensembl_gene', 'mmusculus_homolog_orthology_type', 'mmusculus_homolog_chromosome', 'mmusculus_homolog_perc_id', 'mmusculus_homolog_perc_id_r1', 'mmusculus_homolog_goc_score', 'mmusculus_homolog_wga_coverage', 'mmusculus_homolog_orthology_confidence']])
df = df.loc[df['mmusculus_homolog_orthology_confidence'] == 1]
df = df.dropna(subset=['Gene name'])
max_percent_id = df.mmusculus_homolog_perc_id.max()
max_percent_id_r1 = df.mmusculus_homolog_perc_id_r1.max()
max_goc_score = df.mmusculus_homolog_goc_score.max()
max_wga_coverage = df.mmusculus_homolog_wga_coverage.max()
df = df.loc[
(df['mmusculus_homolog_perc_id'] == max_percent_id) &
(df['mmusculus_homolog_perc_id_r1'] == max_percent_id_r1) &
(df['mmusculus_homolog_goc_score'] == max_goc_score) &
(df['mmusculus_homolog_wga_coverage'] == max_wga_coverage)
]
df = df.drop_duplicates()
if df.shape[0] > 1:
ortho_genes = df.mmusculus_homolog_associated_gene_name.tolist()
start_gene = df['Gene name'].values[0]
target = [gene for gene in ortho_genes if gene.lower() == start_gene.lower()]
if len(target) == 0:
target = ortho_genes[0]
else:
target = target[0]
df = df.loc[df['mmusculus_homolog_associated_gene_name'] == target].reset_index(drop=True)
if df.shape[0] > 1:
df = pd.DataFrame(df.loc[0]).T
# print(df[['mmusculus_homolog_associated_gene_name', 'ensembl_gene_id', 'mmusculus_homolog_ensembl_gene', 'mmusculus_homolog_orthology_type', 'mmusculus_homolog_chromosome', 'mmusculus_homolog_perc_id', 'mmusculus_homolog_perc_id_r1', 'mmusculus_homolog_goc_score', 'mmusculus_homolog_wga_coverage', 'mmusculus_homolog_orthology_confidence']])
return df
tmp_merge = pd.merge(
pd.merge(
genes,
mm_ortho[['mmusculus_homolog_associated_gene_name', 'ensembl_gene_id', 'mmusculus_homolog_ensembl_gene', 'mmusculus_homolog_orthology_type', 'mmusculus_homolog_chromosome', 'mmusculus_homolog_perc_id', 'mmusculus_homolog_perc_id_r1', 'mmusculus_homolog_goc_score', 'mmusculus_homolog_wga_coverage', 'mmusculus_homolog_orthology_confidence']]
.drop_duplicates().rename({'ensembl_gene_id': 'GeneID'}, axis=1)
),
mm_genes_mrna.drop(['transcript_count', 'external_gene_name'], axis=1)[['ensembl_gene_id', 'chromosome_name', 'start_position', 'end_position', 'source']],
left_on='mmusculus_homolog_ensembl_gene',
right_on='ensembl_gene_id'
)
# print(tmp_merge)
tmp_merge = tmp_merge.groupby('GeneID').progress_apply(select_ortho).reset_index(drop=True)
# print(tmp_merge)
tmp_merge
###Output
5%|▌ | 659/12717 [00:06<01:59, 100.83it/s]
###Markdown
Merge previous dataframe with mRNA dataframe
###Code
mm_genes_mrna_final = pd.merge(
tmp_merge[['Miso_siso', 'GeneID', 'Gene name', 'mmusculus_homolog_associated_gene_name', 'mmusculus_homolog_ensembl_gene', 'ensembl_gene_id']],
mm_genes_mrna,
on=['ensembl_gene_id']
)
mm_genes_mrna_final = mm_genes_mrna_final.drop_duplicates(subset=['Miso_siso', 'ensembl_gene_id', 'ensembl_transcript_id']).dropna(subset=['mmusculus_homolog_associated_gene_name'])
mm_genes_mrna_final['chromosome_name'] = mm_genes_mrna_final['chromosome_name'].astype(str)
mm_genes_mrna_final.to_parquet('/gstock/GeneIso/V2/Genes_MM.parquet')
mm_genes_mrna_final
###Output
_____no_output_____
###Markdown
Merge with exons
###Code
exon_mm = pd.merge(
mm_genes_mrna_final,
mm_exons,
on=['ensembl_gene_id', 'ensembl_transcript_id'],
)
exon_mm
###Output
_____no_output_____
###Markdown
Retrieve Ordinal & length for MM exons
###Code
def exon_ord(df, ortho_specie):
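# Order exons 5'->3' along the transcript (genomic order on the + strand, reversed on the
# - strand), then assign ordinal positions from the 5' end (1, 2, ...) and from the 3' end
# (-1, -2, ...), plus each exon's length.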
if df.Strand.values[0] == 1:
df = df.sort_values(by=['Exon region start (bp)', 'Exon region end (bp)'])
elif df.Strand.values[0] == -1:
df = df.sort_values(by=['Exon region start (bp)', 'Exon region end (bp)'], ascending=False)
df['Ordinal_nb'] = list(range(1,df.shape[0]+1))
df['Length'] = df['Exon region end (bp)'] - df['Exon region start (bp)']
df['Ordinal_nb_inverted'] = np.array(list(range(1,df.shape[0] + 1))[::-1]) * -1
df['Ortho_specie'] = ortho_specie
return df[['Ortho_specie', 'Miso_siso', 'GeneID', 'Ortho_GeneID', 'Ortho_transcript_id', 'Strand', 'Chromosome/scaffold name', 'Exon region start (bp)', 'Exon region end (bp)', 'Ordinal_nb', 'Ordinal_nb_inverted', 'Length']]
exon_tmp_dev = exon_mm[['Miso_siso', 'GeneID', 'ensembl_gene_id', 'ensembl_transcript_id', 'strand', 'chromosome_name', 'exon_chrom_start', 'exon_chrom_end']]
exon_tmp_dev = exon_tmp_dev.drop_duplicates(subset=['Miso_siso', 'ensembl_gene_id', 'ensembl_transcript_id', 'strand', 'chromosome_name', 'exon_chrom_start', 'exon_chrom_end'])
exon_tmp_dev = exon_tmp_dev.rename(
{
'strand' : 'Strand',
'chromosome_name' : 'Chromosome/scaffold name',
'exon_chrom_start' : 'Exon region start (bp)',
'exon_chrom_end' : 'Exon region end (bp)',
'ensembl_gene_id' : 'Ortho_GeneID',
'ensembl_transcript_id' : 'Ortho_transcript_id'
}, axis=1
)
exon_tmp_dev = exon_tmp_dev.sort_values(by=['Chromosome/scaffold name', 'Exon region start (bp)', 'Exon region end (bp)', 'GeneID', 'Ortho_GeneID', 'Ortho_transcript_id'])
# exon_tmp_dev = exon_tmp_dev.loc[exon_tmp_dev['Ortho_GeneID'].isin(exon_tmp_dev['Ortho_GeneID'].unique().tolist()[:100])]
exon_tmp_dev = exon_tmp_dev.groupby(['Miso_siso', 'Ortho_GeneID', 'Ortho_transcript_id']).progress_apply(lambda r: exon_ord(r, 'Mus_Musculus')).reset_index(drop=True)
exon_tmp_dev['Chromosome/scaffold name'] = exon_tmp_dev['Chromosome/scaffold name'].astype(str)
# exon_tmp_dev.to_sql('Exons', engine, if_exists='replace')
exon_tmp_dev.to_parquet('/gstock/GeneIso/V2/Exons_MM.parquet')
exon_tmp_dev
###Output
100%|██████████| 27903/27903 [05:14<00:00, 88.64it/s]
###Markdown
Retrieve Ordinal & length for MM CDS
###Code
def retrieve_cds_coord(df, ortho_specie):
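# Order exons 5'->3' by strand, keep only exons with coding coordinates, number these
# coding segments from both ends, and compute each segment's length.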
if df.Strand.values[0] == 1:
df = df.sort_values(by=['Exon region start (bp)', 'Exon region end (bp)'])
df_cds = df.dropna(subset=['CDS start', 'CDS end'])
df_cds['Ordinal_nb'] = list(range(1,df_cds.shape[0]+1))
df_cds['Ordinal_nb_inverted'] = np.array(list(range(1,df_cds.shape[0] + 1))[::-1]) * -1
elif df.Strand.values[0] == -1:
df = df.sort_values(by=['Exon region start (bp)', 'Exon region end (bp)'], ascending=False)
df_cds = df.dropna(subset=['CDS start', 'CDS end'])
df_cds['Ordinal_nb'] = list(range(1,df_cds.shape[0]+1))
df_cds['Ordinal_nb_inverted'] = np.array(list(range(1,df_cds.shape[0] + 1))[::-1]) * -1
df_cds['Length'] = df_cds['CDS end'] - df_cds['CDS start']
df_cds['Ortho_specie'] = ortho_specie
return df_cds
exon_tmp_dev = exon_mm[['Miso_siso', 'GeneID', 'ensembl_gene_id', 'ensembl_transcript_id', 'strand', 'chromosome_name', 'exon_chrom_start', 'exon_chrom_end', 'genomic_coding_start', 'genomic_coding_end']]
exon_tmp_dev = exon_tmp_dev.drop_duplicates(subset=['Miso_siso', 'ensembl_gene_id', 'ensembl_transcript_id', 'strand', 'chromosome_name', 'exon_chrom_start', 'exon_chrom_end'])
exon_tmp_dev = exon_tmp_dev.rename(
{
'strand' : 'Strand',
'chromosome_name' : 'Chromosome/scaffold name',
'exon_chrom_start' : 'Exon region start (bp)',
'exon_chrom_end' : 'Exon region end (bp)',
'genomic_coding_start' : 'CDS start',
'genomic_coding_end' : 'CDS end',
'ensembl_gene_id' : 'Ortho_GeneID',
'ensembl_transcript_id' : 'Ortho_transcript_id'
}, axis=1
)
exon_tmp_dev = exon_tmp_dev.sort_values(by=['Chromosome/scaffold name', 'Exon region start (bp)', 'Exon region end (bp)', 'GeneID', 'Ortho_GeneID', 'Ortho_transcript_id'])
# exon_tmp_dev = exon_tmp_dev.loc[exon_tmp_dev['GeneID'].isin(exon_tmp_dev['GeneID'].unique().tolist()[:100])]
cds = exon_tmp_dev.groupby(['Miso_siso', 'Ortho_GeneID', 'Ortho_transcript_id']).progress_apply(lambda r: retrieve_cds_coord(r, "Mus Musculus")).reset_index(drop=True)
cds['Chromosome/scaffold name'] = cds['Chromosome/scaffold name'].astype(str)
# cds.to_sql('CDS', engine, if_exists='replace')
cds.to_parquet('/gstock/GeneIso/V2/CDS_MM.parquet')
cds
###Output
100%|██████████| 27903/27903 [06:01<00:00, 77.18it/s]
###Markdown
Define & Retrieve Ordinal & length for UTR
###Code
def retrieve_utr_ortho(df, ortho_specie):
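# Derive UTR intervals by comparing exon and coding coordinates: untranslated portions of
# exons upstream of the coding region are labelled 5' UTR and those downstream 3' UTR,
# with the genomic left/right labels swapped on the - strand.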
df = df.sort_values(by=['rank'])
df['rank_inverted'] = np.array(list(range(1,df.shape[0] + 1))[::-1]) * -1
cds_count = df.loc[df['genomic_coding_start'].isna() == False].shape[0]
if df.strand.values[0] == 1:
five_utr = df.loc[(df['exon_chrom_start'] != df['genomic_coding_start']) & (df['rank_inverted'] <= cds_count * -1)].reset_index(drop=True)
five_utr.loc[five_utr['genomic_coding_end'].isna() == True, 'UTR_start'] = five_utr.loc[five_utr['genomic_coding_end'].isna() == True, 'exon_chrom_start']
five_utr.loc[five_utr['genomic_coding_end'].isna() == True, 'UTR_end'] = five_utr.loc[five_utr['genomic_coding_end'].isna() == True, 'exon_chrom_end']
five_utr.loc[five_utr['genomic_coding_end'].isna() == False, 'UTR_start'] = five_utr.loc[five_utr['genomic_coding_end'].isna() == False, 'exon_chrom_start']
five_utr.loc[five_utr['genomic_coding_end'].isna() == False, 'UTR_end'] = five_utr.loc[five_utr['genomic_coding_end'].isna() == False, 'genomic_coding_start'] - 1
five_utr['UTR'] = "5_prime"
three_utr = df.loc[(df['exon_chrom_end'] != df['genomic_coding_end']) & (df['rank'] >= cds_count)].reset_index(drop=True)
three_utr.loc[three_utr['genomic_coding_end'].isna() == True, 'UTR_start'] = three_utr.loc[three_utr['genomic_coding_end'].isna() == True, 'exon_chrom_start']
three_utr.loc[three_utr['genomic_coding_end'].isna() == True, 'UTR_end'] = three_utr.loc[three_utr['genomic_coding_end'].isna() == True, 'exon_chrom_end']
three_utr.loc[three_utr['genomic_coding_end'].isna() == False, 'UTR_start'] = three_utr.loc[three_utr['genomic_coding_end'].isna() == False, 'genomic_coding_end'] + 1
three_utr.loc[three_utr['genomic_coding_end'].isna() == False, 'UTR_end'] = three_utr.loc[three_utr['genomic_coding_end'].isna() == False, 'exon_chrom_end']
three_utr['UTR'] = "3_prime"
if df.strand.values[0] == -1:
five_utr = df.loc[(df['exon_chrom_start'] != df['genomic_coding_start']) & (df['rank'] >= cds_count)].reset_index(drop=True)
five_utr.loc[five_utr['genomic_coding_end'].isna() == True, 'UTR_start'] = five_utr.loc[five_utr['genomic_coding_end'].isna() == True, 'exon_chrom_start']
five_utr.loc[five_utr['genomic_coding_end'].isna() == True, 'UTR_end'] = five_utr.loc[five_utr['genomic_coding_end'].isna() == True, 'exon_chrom_end']
five_utr.loc[five_utr['genomic_coding_end'].isna() == False, 'UTR_start'] = five_utr.loc[five_utr['genomic_coding_end'].isna() == False, 'exon_chrom_start']
five_utr.loc[five_utr['genomic_coding_end'].isna() == False, 'UTR_end'] = five_utr.loc[five_utr['genomic_coding_end'].isna() == False, 'genomic_coding_start'] - 1
five_utr['UTR'] = "3_prime"
three_utr = df.loc[(df['exon_chrom_end'] != df['genomic_coding_end']) & (df['rank_inverted'] <= cds_count * -1)].reset_index(drop=True)
three_utr.loc[three_utr['genomic_coding_end'].isna() == True, 'UTR_start'] = three_utr.loc[three_utr['genomic_coding_end'].isna() == True, 'exon_chrom_start']
three_utr.loc[three_utr['genomic_coding_end'].isna() == True, 'UTR_end'] = three_utr.loc[three_utr['genomic_coding_end'].isna() == True, 'exon_chrom_end']
three_utr.loc[three_utr['genomic_coding_end'].isna() == False, 'UTR_start'] = three_utr.loc[three_utr['genomic_coding_end'].isna() == False, 'genomic_coding_end'] + 1
three_utr.loc[three_utr['genomic_coding_end'].isna() == False, 'UTR_end'] = three_utr.loc[three_utr['genomic_coding_end'].isna() == False, 'exon_chrom_end']
three_utr['UTR'] = "5_prime"
final_utr = pd.concat([five_utr, three_utr])
final_utr['Length'] = final_utr['UTR_end'] - final_utr['UTR_start']
final_utr['Ortho_specie'] = ortho_specie
return final_utr
# exon_tmp_dev = exon_mm.loc[exon_mm['GeneID'].isin(exon_mm['GeneID'].unique().tolist()[:100])]
utr = exon_mm[['Miso_siso', 'GeneID', 'ensembl_gene_id', 'ensembl_transcript_id', 'strand', 'chromosome_name', 'exon_chrom_start', 'exon_chrom_end', 'genomic_coding_start', 'genomic_coding_end', 'cds_start', 'cds_end', 'rank']]
utr = utr.drop_duplicates(subset=['Miso_siso', 'ensembl_gene_id', 'ensembl_transcript_id', 'strand', 'chromosome_name', 'exon_chrom_start', 'exon_chrom_end'])
utr = utr.groupby(['Miso_siso', 'ensembl_gene_id', 'ensembl_transcript_id']).progress_apply(lambda r: retrieve_utr_ortho(r, "Mus Musculus")).reset_index(drop=True)
five_UTR = utr.loc[utr['UTR'] == "5_prime"].rename({'rank' : 'Ordinal_nb', 'rank_inverted' : 'Ordinal_nb_inverted'}, axis=1).reset_index(drop=True)
three_UTR = utr.loc[utr['UTR'] == "3_prime"].rename({'rank' : 'Ordinal_nb', 'rank_inverted' : 'Ordinal_nb_inverted'}, axis=1).reset_index(drop=True)
five_UTR = five_UTR.rename(
{
'strand' : 'Strand',
'chromosome_name' : 'Chromosome/scaffold name',
'exon_chrom_start' : 'Exon region start (bp)',
'exon_chrom_end' : 'Exon region end (bp)',
'genomic_coding_start' : 'CDS start',
'genomic_coding_end' : 'CDS end',
'ensembl_gene_id' : 'Ortho_GeneID',
'ensembl_transcript_id' : 'Ortho_transcript_id'
}, axis=1
)
three_UTR = three_UTR.rename(
{
'strand' : 'Strand',
'chromosome_name' : 'Chromosome/scaffold name',
'exon_chrom_start' : 'Exon region start (bp)',
'exon_chrom_end' : 'Exon region end (bp)',
'genomic_coding_start' : 'CDS start',
'genomic_coding_end' : 'CDS end',
'ensembl_gene_id' : 'Ortho_GeneID',
'ensembl_transcript_id' : 'Ortho_transcript_id'
}, axis=1
)
five_UTR['Chromosome/scaffold name'] = five_UTR['Chromosome/scaffold name'].astype(str)
three_UTR['Chromosome/scaffold name'] = three_UTR['Chromosome/scaffold name'].astype(str)
five_UTR.to_parquet('/gstock/GeneIso/V2/5_UTR_MM.parquet')
three_UTR.to_parquet('/gstock/GeneIso/V2/3_UTR_MM.parquet')
five_UTR
###Output
100%|██████████| 27903/27903 [14:12<00:00, 32.71it/s]
###Markdown
Define & Retrieve Ordinal & length for MM introns
###Code
def exon_ord(df):
df['Exon_nb'] = list(range(1,df.shape[0]+1))
return df
# Fct to compute intron boundaries
def get_introns(df, ortho_specie):
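# Each intron i spans (end of exon i) + 1 .. (start of exon i+1) - 1; the list below
# collects these adjusted boundaries and pairs them into intron ranges.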
# if df.Strand.values[0] == 1:
df = df.sort_values(by=['Exon region start (bp)', 'Exon region end (bp)'])
df['ranges'] = df['Exon region start (bp)'].astype(str) + '-' + df['Exon region end (bp)'].astype(str)
l = list()
exons = df['ranges'].values.tolist()
for j, e in enumerate(exons):
# Exon 1
if j == 0:
l.append(int(e.split("-")[1]) + 1)
# Exon 2 <-> Exon -2
elif j > 0 and j < len(exons) - 1:
l.append(int(e.split("-")[0]) - 1)
l.append(int(e.split("-")[1]) + 1)
# Exon -1
elif j == len(exons) - 1:
l.append(int(e.split("-")[0]) - 1)
# Final list
l = ["{}-{}".format(e, l[j + 1]) for j, e in enumerate(l) if j < len(l) - 1 if j % 2 == 0]
df['Introns'] = l + [np.nan]
if df.Strand.values[0] == -1:
df = df.sort_values(by=['Exon region start (bp)', 'Exon region end (bp)'], ascending=False)
df.loc[df['Introns'].isna() == False, 'Ordinal_nb'] = range(1,len(l)+1)
df['Ordinal_nb_inverted'] = np.array(list(range(1,df.shape[0] + 1))[::-1]) * -1
df['Ortho_specie'] = ortho_specie
# df['Introns_nb'] = range(1,len(l)+2)
# return df[["Ortho_specie", 'Miso_siso', 'GeneID', 'Ortho_GeneID', 'Ortho_transcript_id', 'Strand', 'Chromosome/scaffold name', 'Introns', 'Ordinal_nb', 'Ordinal_nb_inverted']].dropna()
return df
# return df[['GeneID', 'transcript_id', 'Strand', 'Chromosome/scaffold name', 'Introns', 'Introns_nb']]
exon_tmp_dev = exon_mm[['Miso_siso', 'GeneID', 'ensembl_gene_id', 'ensembl_transcript_id', 'strand', 'chromosome_name', 'exon_chrom_start', 'exon_chrom_end', 'genomic_coding_start', 'genomic_coding_end']]
exon_tmp_dev = exon_tmp_dev.drop_duplicates(subset=['Miso_siso', 'ensembl_gene_id', 'ensembl_transcript_id', 'strand', 'chromosome_name', 'exon_chrom_start', 'exon_chrom_end'])
exon_tmp_dev = exon_tmp_dev.rename(
{
'strand' : 'Strand',
'chromosome_name' : 'Chromosome/scaffold name',
'exon_chrom_start' : 'Exon region start (bp)',
'exon_chrom_end' : 'Exon region end (bp)',
'genomic_coding_start' : 'CDS start',
'genomic_coding_end' : 'CDS end',
'ensembl_gene_id' : 'Ortho_GeneID',
'ensembl_transcript_id' : 'Ortho_transcript_id'
}, axis=1
)
exon_tmp_dev = exon_tmp_dev.sort_values(by=['Chromosome/scaffold name', 'Exon region start (bp)', 'Exon region end (bp)', 'GeneID', 'Ortho_GeneID', 'Ortho_transcript_id'])
# exon_tmp_dev = exon_tmp_dev.loc[exon_tmp_dev['GeneID'].isin(exon_tmp_dev['GeneID'].unique().tolist()[:100])]
# exon_tmp_dev = exon_tmp_dev.loc[exon_tmp_dev['Ortho_transcript_id'] == 'ENSMUST00000076757']
introns_df = exon_tmp_dev.groupby(['Miso_siso', 'GeneID', 'Ortho_GeneID', 'Ortho_transcript_id']).progress_apply(lambda r: get_introns(r, 'Mus Musculus')).reset_index(drop=True)
introns_df['Chromosome/scaffold name'] = introns_df['Chromosome/scaffold name'].astype(str)
# introns_df["Length"] = introns_df["Introns"].apply(lambda r: int(r.split("-")[1]) - int(r.split("-")[0]))
introns_df.loc[introns_df['Introns'].isna() == False, "Length"] = introns_df.loc[introns_df['Introns'].isna() == False, "Introns"].apply(lambda r: int(r.split("-")[1]) - int(r.split("-")[0]))
# introns_df.to_sql('Introns', engine, if_exists='replace')
introns_df.to_parquet('/gstock/GeneIso/V2/Introns_MM.parquet')
introns_df
###Output
100%|██████████| 27903/27903 [07:18<00:00, 63.60it/s]
###Markdown
Part 2 - Reload files if the notebook fails
###Code
# genes = pd.read_parquet("/gstock/GeneIso/V2/Genes.parquet")
# mrna = pd.read_parquet("/gstock/GeneIso/V2/mRNA.parquet")
exons = pd.read_parquet("/gstock/GeneIso/V2/Exons_MM.parquet")
cds = pd.read_parquet("/gstock/GeneIso/V2/CDS_MM.parquet")
five_UTR = pd.read_parquet("/gstock/GeneIso/V2/5_UTR_MM.parquet")
three_UTR = pd.read_parquet("/gstock/GeneIso/V2/3_UTR_MM.parquet")
introns = pd.read_parquet("/gstock/GeneIso/V2/Introns_MM.parquet")
#TODO
# introns.loc[introns['Strand'] == 1, 'Ordinal_nb_inverted'] = introns.loc[introns['Strand'] == 1, 'Ordinal_nb_inverted'] + 1
introns = introns.dropna(subset=['Length'])
five_UTR = five_UTR.drop(['GeneID'],axis=1).rename({'Ortho_GeneID' : 'GeneID', 'Ortho_transcript_id' : 'transcript_id', "UTR_start" : "5' UTR start", "UTR_end" : "5' UTR end"}, axis=1)
three_UTR = three_UTR.drop(['GeneID'],axis=1).rename({'Ortho_GeneID' : 'GeneID', 'Ortho_transcript_id' : 'transcript_id', "UTR_start" : "3' UTR start", "UTR_end" : "3' UTR end"}, axis=1)
cds = cds.drop(['GeneID'],axis=1).rename({'Ortho_GeneID' : 'GeneID', 'Ortho_transcript_id' : 'transcript_id', 'CDS start' : 'CDS_start_abs', 'CDS end' : 'CDS_end_abs', }, axis=1)
exons = exons.drop(['GeneID'],axis=1).rename({'Ortho_GeneID' : 'GeneID', 'Ortho_transcript_id' : 'transcript_id'}, axis=1)
def show_values_on_bars(axs):
def _show_on_single_plot(ax):
for p in ax.patches:
_x = p.get_x() + p.get_width() / 2
_y = p.get_y() + p.get_height()
if np.isnan(p.get_height()) == False and p.get_height() > 0.4:
value = '{:0}'.format(int(p.get_height()))
ax.text(_x, _y, value, ha="center", fontsize=12)
if isinstance(axs, np.ndarray):
for idx, ax in np.ndenumerate(axs):
_show_on_single_plot(ax)
else:
_show_on_single_plot(axs)
palette={'Miso' : '#C43032FF', 'Siso' : '#7B9FF9FF', }
fig = plt.figure(figsize=(20, 10))
gs = matplotlib.gridspec.GridSpec(2, 2, width_ratios=[1, 1])
ax00 = plt.subplot(gs[0,0])
ax01 = plt.subplot(gs[0,1])
ax3 = plt.subplot(gs[1,0:])
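# Figure layout: top row shows the % of transcripts by number of 5' (left) and 3' (right) UTR
# exons; the bottom panel shows the 5' TSS<->START and 3' STOP<->TTS distance distributions,
# split by MISOG vs SISOG.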
data_5_prime = 100 * ( five_UTR.groupby(['Miso_siso', 'GeneID', 'transcript_id'])["5' UTR start"].count().reset_index().groupby('Miso_siso')["5' UTR start"].value_counts() / five_UTR.groupby(['Miso_siso', 'GeneID', 'transcript_id'])["5' UTR start"].count().reset_index().groupby('Miso_siso')["5' UTR start"].value_counts().groupby('Miso_siso').sum())
data_5_prime = data_5_prime.rename('count').reset_index()
data_5_prime = data_5_prime.loc[data_5_prime["5' UTR start"] <= 7]
data_5_prime = data_5_prime.round()
data_3_prime = 100 * ( three_UTR.groupby(['Miso_siso', 'GeneID', 'transcript_id'])["3' UTR start"].count().reset_index().groupby('Miso_siso')["3' UTR start"].value_counts() / three_UTR.groupby(['Miso_siso', 'GeneID', 'transcript_id'])["3' UTR start"].count().reset_index().groupby('Miso_siso')["3' UTR start"].value_counts().groupby('Miso_siso').sum())
data_3_prime = data_3_prime.rename('count').reset_index()
data_3_prime = data_3_prime.loc[data_3_prime["3' UTR start"] <= 7]
data_3_prime = data_3_prime.round()
sns.barplot(data=data_5_prime, x="5' UTR start", y='count', hue='Miso_siso', palette=palette, ax=ax00)
sns.barplot(data=data_3_prime, x="3' UTR start", y='count', hue='Miso_siso', palette=palette, ax=ax01)
show_values_on_bars(ax00)
show_values_on_bars(ax01)
ax00.set_ylim(ymax=80)
ax01.set_ylim(ymax=105)
ax00.set_ylabel("% of transcripts")
ax00.set_xlabel("Nb of 5' UTR exons")
ax01.set_ylabel("% of transcripts")
ax01.set_xlabel("Nb of 3' UTR exons")
# ax00.legend(loc='upper right', handles = [mpatches.Patch(color=palette['Miso']), mpatches.Patch(color=palette['Siso'])], labels=['M-iso', 'S-iso'])
ax01.legend(loc='upper right', handles = [mpatches.Patch(color=palette['Miso']), mpatches.Patch(color=palette['Siso'])], labels=['MISOG', 'SISOG'])
ax00.legend().remove()
l_axes = [ax00, ax01, ax3]
palette={'Miso' : '#C43032FF', 'Siso' : '#7B9FF9FF', }
distance_tss_start_strand_positive = (cds.loc[cds['Strand'] == 1].groupby(['Miso_siso', 'GeneID', 'transcript_id'])['CDS_start_abs'].min() - exons.loc[exons['Strand'] == 1].groupby(['Miso_siso', 'GeneID', 'transcript_id'])['Exon region start (bp)'].min())
distance_tss_start_strand_negative = (exons.loc[exons['Strand'] == -1].groupby(['Miso_siso', 'GeneID', 'transcript_id'])['Exon region end (bp)'].max() - cds.loc[cds['Strand'] == -1].groupby(['Miso_siso', 'GeneID', 'transcript_id'])['CDS_end_abs'].max())
tss_start = pd.concat([distance_tss_start_strand_positive, distance_tss_start_strand_negative]).rename('Length').reset_index()
tss_start['Distance_START_STOP'] = 'START'
distance_tts_stop_strand_positive = (exons.loc[exons['Strand'] == 1].groupby(['Miso_siso', 'GeneID', 'transcript_id'])['Exon region end (bp)'].max() - cds.loc[cds['Strand'] == 1].groupby(['Miso_siso', 'GeneID', 'transcript_id'])['CDS_end_abs'].max())
distance_tts_stop_strand_negative = (cds.loc[cds['Strand'] == -1].groupby(['Miso_siso', 'GeneID', 'transcript_id'])['CDS_start_abs'].min() - exons.loc[exons['Strand'] == -1].groupby(['Miso_siso', 'GeneID', 'transcript_id'])['Exon region start (bp)'].min())
tts_stop = pd.concat([distance_tts_stop_strand_positive, distance_tts_stop_strand_negative]).rename('Length').reset_index()
tts_stop['Distance_START_STOP'] = 'STOP'
tss_tts_final_df = pd.concat([tss_start, tts_stop])
t = cds[['GeneID', 'Exon region start (bp)', 'Exon region end (bp)', 'CDS_start_abs', 'CDS_end_abs']].drop_duplicates().groupby('GeneID')['CDS_start_abs'].count()
tss_tts_final_df = tss_tts_final_df.loc[~tss_tts_final_df['GeneID'].isin(t[t == 1].index.tolist())]
data = tss_tts_final_df.rename({'Distance_START_STOP' : 'variable', 'Length' : 'value'}, axis=1)
bw = 0.25
cut = 0.05
lw = 0
x, y, hue = 'variable', 'value', 'Miso_siso'
print(data.groupby(['Miso_siso', 'variable'])['value'].describe())
box =sns.violinplot(data=data, x='variable', y='value', hue='Miso_siso', showfliers=True, palette=palette, ax=ax3, bw=bw, cut=cut, linewidth=lw)
plt.setp(box.collections, alpha=.3)
box = plt.boxplot(data.loc[(data['Miso_siso'] == 'Miso') & (data['variable'] == 'START')]['value'], positions=[-0.2], showfliers=False, widths=[0.02],
medianprops={"color":'black'})
# for patch in box.artists:
# r, g, b, a = patch.get_facecolor()
# patch.set_facecolor((r, g, b, .1))
box = plt.boxplot(data.loc[(data['Miso_siso'] == 'Siso') & (data['variable'] == 'START')]['value'], positions=[+0.2], showfliers=False, widths=[0.02],
medianprops={"color":'black'})
# for patch in box.artists:
# r, g, b, a = patch.get_facecolor()
# patch.set_facecolor((r, g, b, .1))
box = plt.boxplot(data.loc[(data['Miso_siso'] == 'Miso') & (data['variable'] == 'STOP')]['value'], positions=[0.8], showfliers=False, widths=[0.02],
medianprops={"color":'black'})
# for patch in box.artists:
# r, g, b, a = patch.get_facecolor()
# patch.set_facecolor((r, g, b, .1))
box = plt.boxplot(data.loc[(data['Miso_siso'] == 'Siso') & (data['variable'] == 'STOP')]['value'], positions=[1.2], showfliers=False, widths=[0.02],
medianprops={"color":'black'})
ax3.set_xticklabels(["5' Distance TSS ↔ START", "3' STOP ↔ TTS"])
ax3.set_ylabel('Length')
ax3.set_xlabel('')
ax3.legend().remove()
box_pairs = [
(('START', 'Miso'), ('START', 'Siso')),
(('STOP', 'Miso'), ('STOP', 'Siso')),
]
ax3.set_ylim(0,0.5e5)
add_stat_annotation(ax3, data=data, x='variable', y='value', hue='Miso_siso', box_pairs=box_pairs, test='Mann-Whitney', text_format='simple', loc='outside', pvalue_thresholds=pvalues_cutoff, fontsize=12,)
# ax.text(-0.1, 1.15, string.ascii_uppercase[i], transform=ax3.transAxes, size=25, weight='bold')
for ax in l_axes:
ax.grid(axis='y')
ax.set_axisbelow(True)
plt.tight_layout()
# figure_path = base_dir + yaml['Figures']['Fig2']
# fig.savefig(figure_path, dpi=600)
exons
plt.rcParams.update({'font.size' : 18})
def custom_boxplot(data, x, y, hue, ax, ylim, title="Title",xlabel="",ylabel="", box_pairs=[()], palette=['grey'], id_string="A", padding_value_boxplot=1, padding_title=10, legend=False, x_legend=0):
data = data.sort_values(by=hue,ascending=True)
count = data.groupby([hue, x]).count()['GeneID'].reset_index().pivot(columns=[hue],values='GeneID',index=x).reset_index()
count['Miso'] = count['Miso'].apply(lambda r: str(format(int(r), ',')))
count['Siso'] = count['Siso'].apply(lambda r: str(format(int(r), ',')))
count['Miso/Siso'] = count[x].astype(str) + '\n(' + count['Miso'] + ' / ' + count['Siso'] + ')'
bw = 0.2
cut = 0
lw = 1.5
box = sns.violinplot(data=data, x=x, y=y, hue=hue, showfliers=False, ax=ax, cut=cut, linewidth=lw, bw=bw, scale='width', alpha=0.3, palette=palette)
plt.setp(box.collections, alpha=.3)
ax.set_title(title, pad=padding_title)
ax.set_ylim(ylim)
ax.spines['right'].set_linewidth(0)
ax.spines['top'].set_linewidth(0)
ax.set_xlabel(xlabel)
ax.set_ylabel('Length (bp)')
ax.get_yaxis().set_major_formatter(matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
ax.set_xticklabels(count['Miso/Siso'].values.tolist(), fontsize=14)
if not x:
ax.spines['bottom'].set_linewidth(0)
ax.axes.xaxis.set_visible(False)
ax.set_axisbelow(True)
ax.grid(axis='y')
if legend is True:
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles=
[
mpatches.Patch([0], [0], color=palette['Miso'], ),
mpatches.Patch([0], [0], color=palette['Siso'], ),
matplotlib.lines.Line2D([0], [0], marker='o', color='w', label='Circle', markerfacecolor='grey', markeredgecolor='black', markersize=15),
# matplotlib.lines.Line2D([0], [0], marker='s', color='w', label='Circle', markerfacecolor='grey', markeredgecolor='black', markersize=15)
],
labels= ['MISOG', 'SISOG', "Median"], title='', ncol=3, bbox_to_anchor=(0.5,-1), loc='center right')
else:
ax.legend().remove()
means = data.groupby([hue, x])[y].quantile(0.75).reset_index()
medians = data.groupby([hue, x])[y].quantile(0.5).reset_index()
for ms, x_shift in zip(['Miso', 'Siso'], [-0.2,0.2]):
tmp_df_medians = medians.loc[medians['Miso_siso'] == ms]
tmp_df_means = means.loc[means['Miso_siso'] == ms]
x_plot = [e + x_shift for e in range(0,5)]
if 'Introns' in title :
ax.plot(x_plot, tmp_df_means[y].values, lw=2.5, color=palette[ms], marker="s", markersize=8, markeredgecolor="black", markeredgewidth=1, ls='-', )
ax.plot(x_plot, tmp_df_medians[y].values, lw=2.5, color="white", marker="o", markersize=6, markeredgecolor="black", markeredgewidth=1, ls='', )
else:
ax.plot(x_plot, tmp_df_medians[y].values, lw=2.5, color=palette[ms], marker="o", markersize=10, markeredgecolor="black", markeredgewidth=1, ls='-', )
add_stat_annotation(ax, data=data, x=x, y=y, hue=hue, box_pairs=box_pairs,
test='Mann-Whitney', text_format='simple', pvalue_thresholds=pvalues_cutoff,
loc='outside', fontsize=9, verbose=2, line_height=0.0025, line_offset=0.012)
return ax
f, ax = plt.subplots(nrows=4, ncols=2, figsize=(22,25))
palette={'Miso' : '#C43032FF', 'Siso' : '#7B9FF9FF', }
box_pairs = [
((e,'Miso'),(e,'Siso')) for e in range(1,6)
]
box_pairs = box_pairs + [((1,'Miso'),(e,'Miso')) for e in range(2,6)] + [((1,'Siso'),(e,'Siso')) for e in range(2,6)]
print(box_pairs)
k_limit = 5
zscore_cutoff = 2
padding_title = 125
custom_boxplot(data=exons.loc[(exons['Ordinal_nb'] <= 5)], x='Ordinal_nb', y='Length', hue='Miso_siso', ax=ax[1][0], ylim=(0,750), xlabel='Ordinal position', palette=palette, title="5' Exons", box_pairs=box_pairs, padding_title=padding_title)
custom_boxplot(data=cds.loc[(cds['Ordinal_nb'] <= 5)], x='Ordinal_nb', y='Length', hue='Miso_siso', xlabel='Ordinal position', ax=ax[2][0], ylim=(0,500), palette={'Miso' : '#C43032FF', 'Siso' : '#7B9FF9FF', }, title="5' TERs", box_pairs=box_pairs, padding_title=padding_title)
custom_boxplot(data=introns.loc[(introns['Ordinal_nb'] <= 5)], x='Ordinal_nb', y='Length', hue='Miso_siso', ax=ax[0][0], xlabel='Ordinal position', ylim=(0,5e4), palette={'Miso' : '#C43032FF', 'Siso' : '#7B9FF9FF', }, title="5' Introns", box_pairs=box_pairs, padding_title=padding_title)
custom_boxplot(data=five_UTR.loc[(five_UTR['Ordinal_nb'] <= 5)], x='Ordinal_nb', y='Length', hue='Miso_siso', ax=ax[3][0], xlabel='Ordinal position', ylim=(0,500), palette={'Miso' : '#C43032FF', 'Siso' : '#7B9FF9FF', }, title="5' UTR", box_pairs=box_pairs, padding_title=padding_title)
box_pairs = [
((e,'Miso'),(e,'Siso')) for e in list(range(-5,0))
]
box_pairs = box_pairs + [((-1,'Miso'),(e,'Miso')) for e in range(-5,-1)] + [((-1,'Siso'),(e,'Siso')) for e in range(-5,-1)]
custom_boxplot(data=exons.loc[(exons['Ordinal_nb_inverted'] >= -5)], x='Ordinal_nb_inverted', y='Length', hue='Miso_siso',xlabel='Ordinal position', ax=ax[1][1], ylim=(0,6.5e3), palette=palette, title="3' Exons", box_pairs=box_pairs, padding_title=padding_title)
custom_boxplot(data=cds.loc[(cds['Ordinal_nb_inverted'] >= -5)], x='Ordinal_nb_inverted', y='Length', hue='Miso_siso', xlabel='Ordinal position',ax=ax[2][1], ylim=(0,500), palette=palette, title="3' TERs", box_pairs=box_pairs, padding_title=padding_title)
custom_boxplot(data=introns.loc[(introns['Ordinal_nb_inverted'] >= -5)], x='Ordinal_nb_inverted', y='Length', hue='Miso_siso',xlabel='Ordinal position', ax=ax[0][1], ylim=(0,50e3), palette=palette, title="3' Introns", box_pairs=box_pairs, padding_title=padding_title, )
custom_boxplot(data=three_UTR.loc[(three_UTR['Ordinal_nb_inverted'] >= -5)], x='Ordinal_nb_inverted', y='Length', hue='Miso_siso', xlabel='Ordinal position',ax=ax[3][1], ylim=(0,6e3), palette=palette, title="3' UTR", box_pairs=box_pairs, padding_title=padding_title, legend=True, x_legend=1.35,)
i = 0
for n, a in enumerate(ax):
print(a)
sub_a = a[0]
sub_a.text(-0.15, 1.15, string.ascii_uppercase[i], transform=sub_a.transAxes, size=25, weight='bold')
i += 1
for n, a in enumerate(ax):
sub_a = a[1]
sub_a.text(-0.15, 1.15, string.ascii_uppercase[i], transform=sub_a.transAxes, size=25, weight='bold')
i += 1
plt.tight_layout()
# figure_path = base_dir + yaml['Figures']['FigS1']
# f.savefig(figure_path, dpi=600)
###Output
[((1, 'Miso'), (1, 'Siso')), ((2, 'Miso'), (2, 'Siso')), ((3, 'Miso'), (3, 'Siso')), ((4, 'Miso'), (4, 'Siso')), ((5, 'Miso'), (5, 'Siso')), ((1, 'Miso'), (2, 'Miso')), ((1, 'Miso'), (3, 'Miso')), ((1, 'Miso'), (4, 'Miso')), ((1, 'Miso'), (5, 'Miso')), ((1, 'Siso'), (2, 'Siso')), ((1, 'Siso'), (3, 'Siso')), ((1, 'Siso'), (4, 'Siso')), ((1, 'Siso'), (5, 'Siso'))]
2_Miso v.s. 2_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.261e-42 U_stat=6.240e+07
1_Miso v.s. 1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.637e-45 U_stat=6.361e+07
3_Miso v.s. 3_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=5.301e-22 U_stat=5.532e+07
4_Miso v.s. 4_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.636e-13 U_stat=4.726e+07
5_Miso v.s. 5_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.694e-08 U_stat=3.857e+07
1_Miso v.s. 2_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=3.459e+08
1_Siso v.s. 2_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.919e-71 U_stat=2.115e+07
1_Miso v.s. 3_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=3.441e+08
1_Siso v.s. 3_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.641e-134 U_stat=2.006e+07
1_Miso v.s. 4_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=3.253e+08
1_Siso v.s. 4_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=5.923e-151 U_stat=1.831e+07
1_Miso v.s. 5_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=2.975e+08
1_Siso v.s. 5_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.060e-153 U_stat=1.631e+07
2_Miso v.s. 2_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=9.760e-27 U_stat=5.815e+07
1_Miso v.s. 1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.317e-52 U_stat=6.296e+07
3_Miso v.s. 3_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=7.812e-08 U_stat=5.093e+07
4_Miso v.s. 4_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.384e-07 U_stat=4.250e+07
5_Miso v.s. 5_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.594e-06 U_stat=3.415e+07
1_Miso v.s. 2_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=4.731e-131 U_stat=2.420e+08
1_Siso v.s. 2_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.338e-03 U_stat=1.598e+07
1_Miso v.s. 3_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.480e-82 U_stat=2.359e+08
1_Siso v.s. 3_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.000e+00 U_stat=1.503e+07
1_Miso v.s. 4_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.729e-71 U_stat=2.200e+08
1_Siso v.s. 4_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=4.476e-01 U_stat=1.362e+07
1_Miso v.s. 5_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.643e-59 U_stat=2.013e+08
1_Siso v.s. 5_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.485e-02 U_stat=1.219e+07
2.0_Miso v.s. 2.0_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.149e-29 U_stat=6.678e+07
1.0_Miso v.s. 1.0_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.364e-22 U_stat=7.655e+07
3.0_Miso v.s. 3.0_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.471e-18 U_stat=5.517e+07
4.0_Miso v.s. 4.0_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=7.667e-19 U_stat=4.472e+07
5.0_Miso v.s. 5.0_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.752e-09 U_stat=3.510e+07
1.0_Miso v.s. 2.0_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=7.200e-172 U_stat=3.164e+08
1.0_Siso v.s. 2.0_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=6.033e-54 U_stat=1.821e+07
1.0_Miso v.s. 3.0_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=3.200e+08
1.0_Siso v.s. 3.0_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.500e-104 U_stat=1.733e+07
1.0_Miso v.s. 4.0_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=3.041e+08
1.0_Siso v.s. 4.0_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.129e-140 U_stat=1.591e+07
1.0_Miso v.s. 5.0_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=2.858e+08
1.0_Siso v.s. 5.0_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=4.865e-150 U_stat=1.423e+07
2_Miso v.s. 2_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=4.790e-02 U_stat=1.044e+07
1_Miso v.s. 1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=8.840e-02 U_stat=6.944e+07
3_Miso v.s. 3_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.140e-02 U_stat=6.514e+05
4_Miso v.s. 4_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.000e+00 U_stat=6.017e+04
5_Miso v.s. 5_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.000e+00 U_stat=7.713e+03
1_Miso v.s. 2_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=1.947e+08
1_Siso v.s. 2_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.385e-177 U_stat=8.161e+06
1_Miso v.s. 3_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=5.000e+07
1_Siso v.s. 3_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=7.698e-48 U_stat=2.059e+06
1_Miso v.s. 4_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.335e-111 U_stat=1.382e+07
1_Siso v.s. 4_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.756e-15 U_stat=6.506e+05
1_Miso v.s. 5_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.826e-27 U_stat=4.502e+06
1_Siso v.s. 5_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.278e-04 U_stat=2.260e+05
-4_Miso v.s. -4_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.774e-13 U_stat=4.724e+07
-5_Miso v.s. -5_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=4.064e-09 U_stat=3.847e+07
-3_Miso v.s. -3_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=3.157e-09 U_stat=5.723e+07
-2_Miso v.s. -2_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=3.263e-26 U_stat=6.420e+07
-1_Miso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.658e-08 U_stat=6.863e+07
-2_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=3.890e+07
-2_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=2.831e+06
-3_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=3.475e+07
-3_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=2.026e+06
-4_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=3.206e+07
-4_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=1.799e+06
-5_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=2.880e+07
-5_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=1.494e+06
-4_Miso v.s. -4_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=7.373e-08 U_stat=4.245e+07
-5_Miso v.s. -5_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=3.662e-02 U_stat=3.496e+07
-3_Miso v.s. -3_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.642e-04 U_stat=5.169e+07
-2_Miso v.s. -2_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=5.545e-14 U_stat=5.986e+07
-1_Miso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=5.559e-35 U_stat=6.469e+07
-2_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=7.949e-94 U_stat=2.476e+08
-2_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=5.739e-58 U_stat=1.377e+07
-3_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.657e-117 U_stat=2.305e+08
-3_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=7.309e-81 U_stat=1.161e+07
-4_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.208e-137 U_stat=2.105e+08
-4_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.087e-75 U_stat=1.048e+07
-5_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=8.079e-135 U_stat=1.907e+08
-5_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=5.884e-84 U_stat=9.017e+06
-4_Miso v.s. -4_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=8.147e-04 U_stat=4.274e+07
-5_Miso v.s. -5_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=4.745e-02 U_stat=3.382e+07
-3_Miso v.s. -3_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=3.213e-01 U_stat=5.183e+07
-2_Miso v.s. -2_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=5.345e-02 U_stat=6.224e+07
-1_Miso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.000e+00 U_stat=7.128e+07
-2_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.377e-02 U_stat=2.705e+08
-2_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=3.065e-02 U_stat=1.503e+07
-3_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.000e+00 U_stat=2.576e+08
-3_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.000e+00 U_stat=1.373e+07
-4_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.000e+00 U_stat=2.367e+08
-4_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.095e-01 U_stat=1.193e+07
-5_Miso v.s. -1_Miso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=2.100e-02 U_stat=2.185e+08
-5_Siso v.s. -1_Siso: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.000e+00 U_stat=1.074e+07
|
chapter_10/code/chapter9-object-detection.ipynb | ###Markdown
I entered this competition as an opportunity to implement an object detection model, something I know how to do in theory but have not yet put into practice. As YOLO is the architecture I am most familiar with, I will be implementing a model inspired by YOLOv3 (though not exactly the same) using TensorFlow.
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
import os
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw, ImageEnhance
import albumentations as albu
from tqdm.notebook import tqdm
###Output
_____no_output_____
###Markdown
Load data
I'll begin by loading the labels data and extracting the bounding boxes (which I refer to as "bboxes" in the code) from the table.
###Code
labels = pd.read_csv('../input/global-wheat-detection/train.csv')
labels.head()
###Output
_____no_output_____
###Markdown
In the raw data each bounding box is in an array of length 4 but formatted as a string. The below block of code groups the bounding boxes by image id and places the bounding boxes as numpy arrays next to each image id. The image id can then be used to quickly retrieve all of the bounding boxes.
###Code
def group_boxes(group):
boundaries = group['bbox'].str.split(',', expand=True)
boundaries[0] = boundaries[0].str.slice(start=1)
boundaries[3] = boundaries[3].str.slice(stop=-1)
return boundaries.values.astype(float)
labels = labels.groupby('image_id').apply(group_boxes)
###Output
_____no_output_____
###Markdown
Here's a sample of five bounding boxes for one of the images.
###Code
labels['b6ab77fd7'][0:5]
###Output
_____no_output_____
###Markdown
With the labels extracted from the data I now need the images loaded as numpy arrays. At this point it is worth splitting the data into a training and validation dataset. As the dataset is small I will keep the vast majority of the images in the training dataset and only put the last ten images aside as a validation dataset. This may not be the best size to perform accurate validation but I think it is the right compromise considering the number of images available and the complexity of the task.
###Code
train_image_ids = np.unique(labels.index.values)[0:3363]
val_image_ids = np.unique(labels.index.values)[3363:3373]
###Output
_____no_output_____
###Markdown
With the ids split it's now time to load the images. To keep the training of the model relatively fast I will resize each image from (1024,1024) to (256,256). I experimented with larger images and with the model I am using I didn't see a good enough lift in training accuracy to make up for the extra time it took to train a model with larger images.
###Code
def load_image(image_id):
image = Image.open('../input/global-wheat-detection/train/' + image_id + ".jpg")
image = image.resize((256, 256))
return np.asarray(image)
train_pixels = {}
train_labels = {}
for image_id in tqdm(train_image_ids):
train_pixels[image_id] = load_image(image_id)
train_labels[image_id] = labels[image_id].copy() / 4
val_pixels = {}
val_labels = {}
for image_id in tqdm(val_image_ids):
val_pixels[image_id] = load_image(image_id)
val_labels[image_id] = labels[image_id].copy() / 4
###Output
_____no_output_____
###Markdown
Visualise images
Before going on it is worth having a look at some of the images and bounding boxes in the dataset. For that a few helper functions will be required. The functions below take an image id and the corresponding bounding boxes and return the bounding boxes drawn onto the image.
###Code
def draw_bboxes(image_id, bboxes, source='train'):
image = Image.open('../input/global-wheat-detection/' + source +'/' + image_id + ".jpg")
image = image.resize((256,256))
draw = ImageDraw.Draw(image)
for bbox in bboxes:
draw_bbox(draw, bbox)
return np.asarray(image)
def draw_bbox(draw, bbox):
x, y, width, height = bbox
draw.rectangle([x, y, x + width, y + height], width=2, outline='red')
###Output
_____no_output_____
###Markdown
I'll also add a wrapper function to call this function multiple times for multiple images.
###Code
def show_images(image_ids, bboxes, source='train'):
pixels = []
for image_id in image_ids:
pixels.append(
draw_bboxes(image_id, bboxes[image_id], source)
)
num_of_images = len(image_ids)
fig, axes = plt.subplots(
1,
num_of_images,
figsize=(5 * num_of_images, 5 * num_of_images)
)
for i, image_pixels in enumerate(pixels):
axes[i].imshow(image_pixels)
show_images(train_image_ids[0:4], train_labels)
###Output
_____no_output_____
###Markdown
Clean bounding boxes
There are a small number of bounding boxes in this dataset that do not bound a head of wheat. While the number is small enough that the model can still learn how to detect the heads of wheat, they still cause a little inaccuracy. Below I'll search for tiny bounding boxes that cannot possibly fit a head of wheat inside them and huge bounding boxes that miss the head of wheat they are aimed at.
###Code
tiny_bboxes = []
for i, image_id in enumerate(train_image_ids):
for label in train_labels[image_id]:
if label[2] * label[3] <= 10 and label[2] * label[3] != 0:
tiny_bboxes.append(i)
print(str(len(tiny_bboxes)) + ' tiny bounding boxes found')
huge_bboxes = []
for i, image_id in enumerate(train_image_ids):
for label in train_labels[image_id]:
if label[2] * label[3] > 8000:
huge_bboxes.append(i)
print(str(len(huge_bboxes)) + ' huge bounding boxes found')
###Output
_____no_output_____
###Markdown
The tiny bounding boxes are actually too small to show when visualised on an image. However, we can take a peek at one of the huge bounding boxes.
###Code
show_images(train_image_ids[562:564], train_labels)
###Output
_____no_output_____
###Markdown
I did some further manual inspection (not included in this notebook) of the bad labels flagged by this code. I found that some huge bounding boxes were actually okay as they bound a very zoomed-in image, so a few are listed to be kept (1079, 1371, 2020). Otherwise the code below throws out any bounding box whose area is larger than 8000 or whose width or height is smaller than 5.
###Code
def clean_labels(train_image_ids, train_labels):
good_labels = {}
for i, image_id in enumerate(train_image_ids):
good_labels[image_id] = []
for j, label in enumerate(train_labels[image_id]):
# remove huge bbox
if label[2] * label[3] > 8000 and i not in [1079, 1371, 2020]:
continue
# remove tiny bbox
elif label[2] < 5 or label[3] < 5:
continue
else:
good_labels[image_id].append(
train_labels[image_id][j]
)
return good_labels
train_labels = clean_labels(train_image_ids, train_labels)
###Output
_____no_output_____
###Markdown
Data pipeline
Usually I would use TensorFlow's data API or Keras data generators to build a pipeline to get data into the model. However, the pre-processing that needs to be done for this model is not trivial and it turned out to be easier to create a custom data generator. This takes the form of a class that is passed to Keras' fit generator function. It contains the following functionality:
- define the size of the dataset. Keras needs this to work out how long an epoch is.
- shuffle the dataset.
- get an image and augment it to add variety to the dataset. This includes amending the bounding boxes when a head of wheat has moved in the augmented image.
- reshape the bounding boxes to a label grid.
I'll start by initialising the class.
###Code
class DataGenerator(tf.keras.utils.Sequence):
def __init__(self, image_ids, image_pixels, labels=None, batch_size=1, shuffle=False, augment=False):
self.image_ids = image_ids
self.image_pixels = image_pixels
self.labels = labels
self.batch_size = batch_size
self.shuffle = shuffle
self.augment = augment
self.on_epoch_end()
self.image_grid = self.form_image_grid()
def form_image_grid(self):
image_grid = np.zeros((32, 32, 4))
# x, y, width, height
cell = [0, 0, 256 / 32, 256 / 32]
for i in range(0, 32):
for j in range(0, 32):
image_grid[i,j] = cell
cell[0] = cell[0] + cell[2]
cell[0] = 0
cell[1] = cell[1] + cell[3]
return image_grid
###Output
_____no_output_____
###Markdown
Next I will add some methods to the class that Keras needs to operate the data generation. __len__ tells Keras how many batches make up an epoch. on_epoch_end is called at the end of each epoch (as well as once before training starts) to get the indexes of all images in the dataset; it also shuffles the dataset each epoch if the generator was configured to do so.
###Code
def __len__(self):
return int(np.floor(len(self.image_ids) / self.batch_size))
def on_epoch_end(self):
self.indexes = np.arange(len(self.image_ids))
if self.shuffle == True:
np.random.shuffle(self.indexes)
DataGenerator.__len__ = __len__
DataGenerator.on_epoch_end = on_epoch_end
###Output
_____no_output_____
###Markdown
Regarding the augmentations, a number of transformations will be applied to each training image before it is fed into the model. This helps to add some diversity to a small dataset, effectively growing it into a much larger one:
- **random sized crop:** The model needs to be able to detect a wheat head regardless of how close or far away the head is to the camera. To produce more zoom levels in the dataset the crop method will take a portion of the image and zoom in to create a new image with larger wheat heads.
- **flip and rotate:** The wheat heads can point in any direction. To create more examples of wheat heads pointing in different directions the image will randomly be flipped both horizontally and vertically or rotated.
- **hue, saturation and brightness:** these methods alter the lighting of the image, which helps to create different lighting scenarios. This helps as the test pictures are from various countries, each with their own lighting levels.
- **noise:** Some wheat heads aren't quite in focus. Adding some noise to the images helps to catch these wheat heads while also forcing the model to learn more abstract wheat head shapes. This helps a lot with over-fitting.
- **cutout:** randomly remove small squares of pixels in the image. This prevents the model simply memorizing certain wheat heads and instead forces it to learn the patterns that represent a wheat head.
- **clahe:** this is a must have. In many images the wheat heads are a similar colour to the grass in the background, making it tricky for the model to differentiate between them. CLAHE helps to accentuate the colour difference between the two.
- **grey scale:** I found that there were a few images with a yellow/gold tint. My model was learning to detect wheat heads without a tint (as most images do not contain a tint) and was really struggling to detect anything on the yellow images. By converting all images to grey scale the model is forced to ignore these tints, making it much more effective at identifying wheat heads regardless of tint.

I also grey scale and apply CLAHE to each validation image, as the model has learnt on grey images where the wheat heads are given a lighter shade of grey.
###Code
DataGenerator.train_augmentations = albu.Compose([
albu.RandomSizedCrop(
min_max_height=(200, 200),
height=256,
width=256,
p=0.8
),
albu.OneOf([
albu.Flip(),
albu.RandomRotate90(),
], p=1),
albu.OneOf([
albu.HueSaturationValue(),
albu.RandomBrightnessContrast()
], p=1),
albu.OneOf([
albu.GaussNoise(),
albu.GlassBlur(),
albu.ISONoise(),
albu.MultiplicativeNoise(),
], p=0.5),
albu.Cutout(
num_holes=8,
max_h_size=16,
max_w_size=16,
fill_value=0,
p=0.5
),
albu.CLAHE(p=1),
albu.ToGray(p=1),
],
bbox_params={'format': 'coco', 'label_fields': ['labels']})
DataGenerator.val_augmentations = albu.Compose([
albu.CLAHE(p=1),
albu.ToGray(p=1),
])
###Output
_____no_output_____
###Markdown
The next functions load an image and its corresponding bounding boxes for randomly picked image ids. The augmentations above are applied to each image as it is loaded, and because the albumentations library is used to apply them, the bounding boxes are resized for free.
###Code
def __getitem__(self, index):
indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]
batch_ids = [self.image_ids[i] for i in indexes]
X, y = self.__data_generation(batch_ids)
return X, y
def __data_generation(self, batch_ids):
X, y = [], []
# Generate data
for i, image_id in enumerate(batch_ids):
pixels = self.image_pixels[image_id]
bboxes = self.labels[image_id]
if self.augment:
pixels, bboxes = self.augment_image(pixels, bboxes)
else:
pixels = self.contrast_image(pixels)
bboxes = self.form_label_grid(bboxes)
X.append(pixels)
y.append(bboxes)
return np.array(X), np.array(y)
def augment_image(self, pixels, bboxes):
bbox_labels = np.ones(len(bboxes))
aug_result = self.train_augmentations(image=pixels, bboxes=bboxes, labels=bbox_labels)
bboxes = self.form_label_grid(aug_result['bboxes'])
return np.array(aug_result['image']) / 255, bboxes
def contrast_image(self, pixels):
aug_result = self.val_augmentations(image=pixels)
return np.array(aug_result['image']) / 255
DataGenerator.__getitem__ = __getitem__
DataGenerator.__data_generation = __data_generation
DataGenerator.augment_image = augment_image
DataGenerator.contrast_image = contrast_image
###Output
_____no_output_____
###Markdown
The final part of the data generator class re-shapes the bounding box labels. It is worth mentioning here that there are a number of ways to represent a bounding box with four numbers. Some common ones are coco (the shape used in the raw data), voc-pascal and yolo.

*Figure 1, some common ways to represent bounding boxes*

I'll be using the yolo shape for this model. In addition to that shape, yolo detects objects by placing a grid over the image and asking whether an object (such as a wheat head) is present in each cell of the grid. I've decided to use a 32x32 grid for this challenge, which I'll refer to as a label grid. Each bounding box's x,y is offset within the cell that contains its centre, and all four variables (x, y, width and height) are scaled down to a 0-1 range.

*Figure 2, a bounding box is put in the cell containing the centre of an object. Its x and y are offset to the cell while width and height remain the same*

Any cell in the grid that has no object within it contains a bounding box with the dimensions [0, 0, 0, 0]. Each bounding box gets a confidence score, where a value of 1 tells us that an object (wheat head) is present in the cell and a value of 0 tells us that no object is present. So a cell with an object present could contain a value like this: [1, 0.5, 0.5, 0.2, 0.2], telling us that there is an object present (confidence score of 1), the centre of the bounding box is exactly in the middle of the cell, and the box is 20% of the image's total width and height. As a cell could contain two overlapping heads of wheat, I have configured the grid to hold up to two bounding boxes per cell; these are known as anchor boxes. A hand-worked example of this encoding follows, and then the code below takes the full list of bounding boxes for an image and puts them into the yolo label grid shape.
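To make the encoding concrete, here is a small hand-worked example (illustrative numbers only, added for this write-up rather than taken from the original notebook) for a single coco-format box on the 256x256 image with the 32x32 grid used here:

```python
# Each of the 32x32 cells covers 8x8 pixels of the 256x256 image.
box = [100, 52, 20, 12]                 # coco format: x, y, width, height
cx, cy = 100 + 20 / 2, 52 + 12 / 2      # box centre: (110, 58)

col, row = int(cx // 8), int(cy // 8)   # responsible cell: column 13, row 7
cell_x, cell_y = col * 8, row * 8       # that cell's top-left corner: (104, 56)

anchor = [
    1,                  # confidence: this cell contains an object
    (cx - cell_x) / 8,  # x offset within the cell  -> 0.75
    (cy - cell_y) / 8,  # y offset within the cell  -> 0.25
    20 / 256,           # width relative to the image  -> 0.078125
    12 / 256,           # height relative to the image -> 0.046875
]
print(row, col, anchor)
```

Every other cell (and any unused second anchor) keeps [0, 0, 0, 0, 0].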
###Code
def form_label_grid(self, bboxes):
label_grid = np.zeros((32, 32, 10))
for i in range(0, 32):
for j in range(0, 32):
cell = self.image_grid[i,j]
label_grid[i,j] = self.rect_intersect(cell, bboxes)
return label_grid
def rect_intersect(self, cell, bboxes):
cell_x, cell_y, cell_width, cell_height = cell
cell_x_max = cell_x + cell_width
cell_y_max = cell_y + cell_height
anchor_one = np.array([0, 0, 0, 0, 0])
anchor_two = np.array([0, 0, 0, 0, 0])
# check all boxes
for bbox in bboxes:
box_x, box_y, box_width, box_height = bbox
box_x_centre = box_x + (box_width / 2)
box_y_centre = box_y + (box_height / 2)
if(box_x_centre >= cell_x and box_x_centre < cell_x_max and box_y_centre >= cell_y and box_y_centre < cell_y_max):
if anchor_one[0] == 0:
anchor_one = self.yolo_shape(
[box_x, box_y, box_width, box_height],
[cell_x, cell_y, cell_width, cell_height]
)
elif anchor_two[0] == 0:
anchor_two = self.yolo_shape(
[box_x, box_y, box_width, box_height],
[cell_x, cell_y, cell_width, cell_height]
)
else:
break
return np.concatenate((anchor_one, anchor_two), axis=None)
def yolo_shape(self, box, cell):
box_x, box_y, box_width, box_height = box
cell_x, cell_y, cell_width, cell_height = cell
# top left x,y to centre x,y
box_x = box_x + (box_width / 2)
box_y = box_y + (box_height / 2)
# offset bbox x,y to cell x,y
box_x = (box_x - cell_x) / cell_width
box_y = (box_y - cell_y) / cell_height
# bbox width,height relative to cell width,height
box_width = box_width / 256
box_height = box_height / 256
return [1, box_x, box_y, box_width, box_height]
DataGenerator.form_label_grid = form_label_grid
DataGenerator.rect_intersect = rect_intersect
DataGenerator.yolo_shape = yolo_shape
train_generator = DataGenerator(
train_image_ids,
train_pixels,
train_labels,
batch_size=6,
shuffle=True,
augment=True
)
val_generator = DataGenerator(
val_image_ids,
val_pixels,
val_labels,
batch_size=10,
shuffle=False,
augment=False
)
image_grid = train_generator.image_grid
###Output
_____no_output_____
###Markdown
Model
With the data ready to go I'll define and train the model. As mentioned before, this model is inspired by yolo, specifically YOLOv3. This is a large and at times complex model. Below is an outline. The model begins with a convolutional layer with 32 filters, which doubles to 64 in the next layer. The filter count is then halved before doubling layer by layer up to 128 filters. The filter count is then halved again while a larger stride reduces the spatial size of the feature maps. This pattern of doubling and halving the filter count continues, with a few repeated blocks, until we reach a size of 1024. A few resnet skip connections are added in as well to stabilise the large number of layers and reduce the chance of vanishing gradients.

*figure 3, An outline of the model taken from the yolov3 [paper](https://pjreddie.com/media/files/papers/YOLOv3.pdf).*

Below is my Keras implementation of the model. The model is mostly in line with the YOLOv3 architecture, though I have removed a few layers and altered some others.
###Code
x_input = tf.keras.Input(shape=(256,256,3))
x = tf.keras.layers.Conv2D(32, (3, 3), strides=(1, 1), padding='same')(x_input)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
########## block 1 ##########
x = tf.keras.layers.Conv2D(64, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x_shortcut = x
for i in range(2):
x = tf.keras.layers.Conv2D(32, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Conv2D(64, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Add()([x_shortcut, x])
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x_shortcut = x
########## block 2 ##########
x = tf.keras.layers.Conv2D(128, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x_shortcut = x
for i in range(2):
x = tf.keras.layers.Conv2D(64, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Conv2D(128, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Add()([x_shortcut, x])
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x_shortcut = x
########## block 3 ##########
x = tf.keras.layers.Conv2D(256, (3, 3), strides=(2, 2), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x_shortcut = x
for i in range(8):
x = tf.keras.layers.Conv2D(128, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Conv2D(256, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Add()([x_shortcut, x])
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x_shortcut = x
########## block 4 ##########
x = tf.keras.layers.Conv2D(512, (3, 3), strides=(2, 2), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x_shortcut = x
for i in range(8):
x = tf.keras.layers.Conv2D(256, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Conv2D(512, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Add()([x_shortcut, x])
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x_shortcut = x
########## block 5 ##########
x = tf.keras.layers.Conv2D(1024, (3, 3), strides=(2, 2), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x_shortcut = x
for i in range(4):
x = tf.keras.layers.Conv2D(512, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Conv2D(1024, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Add()([x_shortcut, x])
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x_shortcut = x
########## output layers ##########
x = tf.keras.layers.Conv2D(512, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Conv2D(256, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
x = tf.keras.layers.Conv2D(128, (3, 3), strides=(1, 1), padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)
predictions = tf.keras.layers.Conv2D(10, (1, 1), strides=(1, 1), activation='sigmoid')(x)
model = tf.keras.Model(inputs=x_input, outputs=predictions)
###Output
_____no_output_____
###Markdown
One issue with yolo is that its label grid is likely to contain far more cells with no objects than cells that do contain objects. It is then easy for the model to focus too much on learning to push the no-object cells to zero and not enough on getting the bounding boxes into the right shape. To overcome this, the yolo [paper](https://pjreddie.com/media/files/papers/YOLOv3.pdf) suggests weighting the cells containing bounding boxes five times higher and the cells with no bounding boxes by half. I have defined a custom loss function to do just this. I have also split the loss function into three parts. The first takes care of the confidence score that works out whether a label grid cell contains a head of wheat or not; binary cross entropy is used here as that is a binary classification task. The second part looks at the x,y position of the bounding boxes while the third looks at the width,height of the bounding boxes. MSE (mean squared error) is used for the second and third parts as they are regression tasks.
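Written out, the loss implemented below is roughly the following (my own notation, applied per grid cell and anchor, with $c$ the confidence score and $m$ the weighting mask):

$$ L = \mathrm{BCE}(c, \hat{c}) + m \cdot \mathrm{MSE}\big((x, y), (\hat{x}, \hat{y})\big) + m \cdot \mathrm{MSE}\big((w, h), (\hat{w}, \hat{h})\big) $$

$$ m = \begin{cases} 5 & \text{if the anchor contains a box} \\ 0.5 & \text{otherwise} \end{cases} $$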
###Code
def custom_loss(y_true, y_pred):
binary_crossentropy = tf.keras.losses.BinaryCrossentropy(
reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE
)
prob_loss = binary_crossentropy(
tf.concat([y_true[:,:,:,0], y_true[:,:,:,5]], axis=0),
tf.concat([y_pred[:,:,:,0], y_pred[:,:,:,5]], axis=0)
)
xy_loss = tf.keras.losses.MSE(
tf.concat([y_true[:,:,:,1:3], y_true[:,:,:,6:8]], axis=0),
tf.concat([y_pred[:,:,:,1:3], y_pred[:,:,:,6:8]], axis=0)
)
wh_loss = tf.keras.losses.MSE(
tf.concat([y_true[:,:,:,3:5], y_true[:,:,:,8:10]], axis=0),
tf.concat([y_pred[:,:,:,3:5], y_pred[:,:,:,8:10]], axis=0)
)
bboxes_mask = get_mask(y_true)
xy_loss = xy_loss * bboxes_mask
wh_loss = wh_loss * bboxes_mask
return prob_loss + xy_loss + wh_loss
def get_mask(y_true):
anchor_one_mask = tf.where(
y_true[:,:,:,0] == 0,
0.5,
5.0
)
anchor_two_mask = tf.where(
y_true[:,:,:,5] == 0,
0.5,
5.0
)
bboxes_mask = tf.concat(
[anchor_one_mask,anchor_two_mask],
axis=0
)
return bboxes_mask
###Output
_____no_output_____
###Markdown
I experimented with a few optimisers, including FTRL and SGD, but in the end Adam was the fastest and most reliable. I have kept the learning rate reasonably high because it can take a number of steps to get the model moving towards convergence, and a higher rate reduces the number of steps needed to get it going.
###Code
optimiser = tf.keras.optimizers.Adam(learning_rate=0.0001)
model.compile(
optimizer=optimiser,
loss=custom_loss
)
###Output
_____no_output_____
###Markdown
While a high learning rate is great at the beginning of a training run, it can cause issues as the model approaches convergence, when smaller, more careful steps are needed. I considered using learning rate decay to handle this but decided on a callback that reduces the learning rate when the loss plateaus (or increases) over the space of two epochs. This allows the model to make the most of a higher rate until that rate becomes too high, at which point the model quickly reduces it.

In addition to this I have added an early stopping callback to stop the model training if it is no longer able to reduce the loss. This reduces wasted processing and provides faster feedback if the model just isn't training very well.
###Code
callbacks = [
tf.keras.callbacks.ReduceLROnPlateau(monitor='loss', patience=2, verbose=1),
tf.keras.callbacks.EarlyStopping(monitor='loss', patience=5, verbose=1, restore_best_weights=True),
]
###Output
_____no_output_____
###Markdown
Finally the model is ready to be trained. The data generators are passed into the fit generator method of the model alongside the callbacks and the maximum number of epochs. Be warned that with 100 or fewer images this model can train at an okay speed on CPU; any more images than that will need the GPU (and could still run for a few hours).
###Code
history = model.fit_generator(
train_generator,
validation_data=val_generator,
epochs=80,
callbacks=callbacks
)
###Output
_____no_output_____
###Markdown
Prediction post processing
The model outputs the predicted bounding boxes as a label grid. However, to visualise the bounding boxes on an image or submit them to the competition, the shape of one image's bounding boxes needs changing from (32,32,10) to (m, 4), where m represents the number of bounding boxes that have a high confidence.
This first function transforms the boxes from the yolo format back into absolute pixel coordinates (ending in the voc-pascal shape that the non-max suppression step below expects). It does this through the following:
- rescale the boxes from 0-1 back to 0-256
- change the x,y from the centre of the box to the top left corner
- change width and height to x_max, y_max, i.e. change to the voc shape
###Code
def prediction_to_bbox(bboxes, image_grid):
bboxes = bboxes.copy()
im_width = (image_grid[:,:,2] * 32)
im_height = (image_grid[:,:,3] * 32)
# descale x,y
bboxes[:,:,1] = (bboxes[:,:,1] * image_grid[:,:,2]) + image_grid[:,:,0]
bboxes[:,:,2] = (bboxes[:,:,2] * image_grid[:,:,3]) + image_grid[:,:,1]
bboxes[:,:,6] = (bboxes[:,:,6] * image_grid[:,:,2]) + image_grid[:,:,0]
bboxes[:,:,7] = (bboxes[:,:,7] * image_grid[:,:,3]) + image_grid[:,:,1]
# descale width,height
bboxes[:,:,3] = bboxes[:,:,3] * im_width
bboxes[:,:,4] = bboxes[:,:,4] * im_height
bboxes[:,:,8] = bboxes[:,:,8] * im_width
bboxes[:,:,9] = bboxes[:,:,9] * im_height
# centre x,y to top left x,y
bboxes[:,:,1] = bboxes[:,:,1] - (bboxes[:,:,3] / 2)
bboxes[:,:,2] = bboxes[:,:,2] - (bboxes[:,:,4] / 2)
bboxes[:,:,6] = bboxes[:,:,6] - (bboxes[:,:,8] / 2)
bboxes[:,:,7] = bboxes[:,:,7] - (bboxes[:,:,9] / 2)
# width,heigth to x_max,y_max
bboxes[:,:,3] = bboxes[:,:,1] + bboxes[:,:,3]
bboxes[:,:,4] = bboxes[:,:,2] + bboxes[:,:,4]
bboxes[:,:,8] = bboxes[:,:,6] + bboxes[:,:,8]
bboxes[:,:,9] = bboxes[:,:,7] + bboxes[:,:,9]
return bboxes
###Output
_____no_output_____
###Markdown
Next, the bounding boxes with low confidence need removing. I also need to remove one box whenever two boxes overlap. Luckily TensorFlow has a non-max suppression function that filters out low confidence boxes and removes one box of any overlapping pair. There is just a little reshaping needed to prepare the bounding boxes for this function, including switching the positions of the x and y dimensions.
###Code
def non_max_suppression(predictions, top_n):
probabilities = np.concatenate((predictions[:,:,0].flatten(), predictions[:,:,5].flatten()), axis=None)
first_anchors = predictions[:,:,1:5].reshape((32*32, 4))
second_anchors = predictions[:,:,6:10].reshape((32*32, 4))
bboxes = np.concatenate(
(first_anchors,second_anchors),
axis=0
)
bboxes = switch_x_y(bboxes)
bboxes, probabilities = select_top(probabilities, bboxes, top_n=top_n)
bboxes = switch_x_y(bboxes)
return bboxes
def switch_x_y(bboxes):
x1 = bboxes[:,0].copy()
y1 = bboxes[:,1].copy()
x2 = bboxes[:,2].copy()
y2 = bboxes[:,3].copy()
bboxes[:,0] = y1
bboxes[:,1] = x1
bboxes[:,2] = y2
bboxes[:,3] = x2
return bboxes
def select_top(probabilities, boxes, top_n=10):
top_indices = tf.image.non_max_suppression(
boxes = boxes,
scores = probabilities,
max_output_size = top_n,
iou_threshold = 0.3,
score_threshold = 0.3
)
top_indices = top_indices.numpy()
return boxes[top_indices], probabilities[top_indices]
###Output
_____no_output_____
###Markdown
Wrap these post-processing functions into one and output the predicted bounding boxes as a dictionary where the image id is the key.
###Code
def process_predictions(predictions, image_ids, image_grid):
bboxes = {}
for i, image_id in enumerate(image_ids):
predictions[i] = prediction_to_bbox(predictions[i], image_grid)
bboxes[image_id] = non_max_suppression(predictions[i], top_n=100)
# back to coco shape
bboxes[image_id][:,2:4] = bboxes[image_id][:,2:4] - bboxes[image_id][:,0:2]
return bboxes
###Output
_____no_output_____
###Markdown
Let's see how the model did by producing predictions for some images in the validation dataset.
###Code
val_predictions = model.predict(val_generator)
val_predictions = process_predictions(val_predictions, val_image_ids, image_grid)
show_images(val_image_ids[0:4], val_predictions)
show_images(val_image_ids[4:8], val_predictions)
###Output
_____no_output_____
###Markdown
Evaluate Model
With the model trained it's time to look at the quality of the model. Begin by plotting the loss curves.
###Code
print('Epochs: ' + str(len(history.history['loss'])))
print('Final training loss: ' + str(history.history['loss'][-1]))
print('Final validation loss: ' + str(history.history['val_loss'][-1]))
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].set_title('Training Loss')
ax[0].plot(history.history['loss'])
ax[1].set_title('Validation Loss')
ax[1].plot(history.history['val_loss'])
###Output
_____no_output_____
###Markdown
Then visualise the first few layers of the model to see how each layer influences the bounding boxes. Start by copying the model and configuring the new one to return each layer's output when a prediction is made.
###Code
layer_outputs = [layer.output for layer in model.layers]
evaluation_model = tf.keras.Model(inputs=model.input, outputs=layer_outputs)
###Output
_____no_output_____
###Markdown
Then pick an image and cycle through the layers, making a prediction and visualising the features outputted by each layer. The warm colours represent where the features lie in this image. Note that I have not visualised all the layers here due to the large number of them.
###Code
image = Image.open('../input/global-wheat-detection/train/' + train_image_ids[1] + ".jpg")
image = image.resize((256, 256))
pixels = np.asarray(image) / 255
pixels = np.expand_dims(pixels, axis=0)
num_of_layers = len(layer_outputs)
fig, axes = plt.subplots(2, 6, figsize=(20, 10))
layer = 0
for i in range(0, 2):
for j in range(0, 6):
layer_output = evaluation_model.predict(pixels)[layer]
axes[i, j].imshow(layer_output[0, :, :, 1], cmap='inferno')
layer = layer + 1
###Output
_____no_output_____
###Markdown
Test Images
Finally, predict bounding boxes for the test set. I wanted to see how easy or difficult it would be to use the model without a data generator class, so I have fed the images into the model as NumPy arrays. First, load the test image ids.
###Code
test_image_ids = os.listdir('/kaggle/input/global-wheat-detection/test/')
test_image_ids = [image_id[:-4] for image_id in test_image_ids]
###Output
_____no_output_____
###Markdown
Now loop through the images, loading each one as a numpy array, applying augmentations to it and feeding it into the model. Save the bounding box predictions as a numpy array.
###Code
test_predictions = []
for i, image_id in enumerate(test_image_ids):
image = Image.open('/kaggle/input/global-wheat-detection/test/' + image_id + ".jpg")
image = image.resize((256, 256))
pixels = np.asarray(image)
val_augmentations = albu.Compose([
albu.CLAHE(p=1),
albu.ToGray(p=1)
])
aug_result = val_augmentations(image=pixels)
pixels = np.array(aug_result['image']) / 255
pixels = np.expand_dims(pixels, axis=0)
bboxes = model.predict(pixels)
test_predictions.append(bboxes)
test_predictions = np.concatenate(test_predictions)
###Output
_____no_output_____
###Markdown
Then apply the typical post processing functions to the predictions.
###Code
test_predictions = process_predictions(test_predictions, test_image_ids, image_grid)
###Output
_____no_output_____
###Markdown
Although only a submission to the competition will provide a final score on how good the model is, I'll visualise each test image with its predicted boxes to get an idea of the model's quality, reusing the same visualisation helper pointed at the test image folder.
###Code
show_images(test_image_ids[0:4], test_predictions, source='test')
show_images(test_image_ids[4:8], test_predictions, source='test')
show_images(test_image_ids[8:10], test_predictions, source='test')
###Output
_____no_output_____
###Markdown
Finally save the weights so that the model can be used in an inference notebook.
###Code
model.save_weights('wheat_detection_model')
###Output
_____no_output_____ |
lab/02_AmazonMachineLearning/Amazon Machine Learning.ipynb | ###Markdown
Amazon Machine Learning Demonstration
https://aws.amazon.com/pt/machine-learning/
Amazon Machine Learning is a service that makes it easy for developers of all skill levels to use machine learning technology. It provides visualization tools and wizards that guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology.
With Amazon Machine Learning you can train three different types of models, using the following algorithms:
- Binary Logistic Regression
- Multinomial Logistic Regression
- Linear Regression
We will use Multinomial Logistic Regression to create a model for predicting the category of a product, given its short description.
Python Boto3 reference: http://boto3.readthedocs.io/en/latest/reference/services/machinelearning.html
Goal: to create a model to predict a given product category
Model:
- Input: product short description
- Output: category
- *predict_categoria(product_name) -> category*
###Code
%matplotlib inline
import boto3
import numpy as np
import pandas as pd
import sagemaker
import IPython.display as disp
import json
from time import gmtime, strftime
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn import preprocessing
from IPython.display import Markdown
from notebook import notebookapp
# Get the current Sagemaker session
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
s3_bucket = sagemaker_session.default_bucket()
client = boto3.client('machinelearning', region_name='us-east-1')
s3_client = boto3.client('s3')
s3 = boto3.client('s3')
base_dir='/tmp/aml'
bucket_arn = "arn:aws:s3:::%s/*" % s3_bucket
policy_statement = {
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": bucket_arn
}
current_policy = None
try:
current_policy = json.loads(s3_client.get_bucket_policy(Bucket=s3_bucket)['Policy'])
policy_found = False
for st in current_policy['Statement']:
if st["Action"] == "s3:GetObject" and st["Resource"] == bucket_arn:
policy_found = True
break
if not policy_found:
current_policy['Statement'].append( policy_statement )
except Exception as e:
print("There is no current policy. Adding one...")
s3_client.put_bucket_policy(
Bucket=s3_bucket,
Policy=json.dumps(
{
"Version": "2012-10-17",
"Statement": [policy_statement]
}
)
)
###Output
_____no_output_____
###Markdown
Data Scientist moment: preparing the dataset
###Code
!mkdir -p $base_dir
!curl -s https://spock.cloud/ai-workshop/aml_data.tar.gz | tar -xz -C $base_dir
data = pd.read_csv(base_dir + '/sample.csv', sep=',', encoding='utf-8')
print( len(data) )
data.iloc[[517, 163, 14, 826, 692]]
###Output
_____no_output_____
###Markdown
So, we need to remove accents, transform everything to lower case and remove stopwords
###Code
# tranlating table for removing accents
accents = "".maketrans("áàãâéêíóôõúüçÁÀÃÂÉÊÍÓÔÕÚÜÇ", "aaaaeeiooouucAAAAEEIOOOUUC")
# loading stopwords without accents
file = open("stopwords.txt", "r")
stopwords = list(map(lambda x:x.strip().translate(accents),file.readlines()))
file.close()
# this tokenizer will tokenize the text, remove stop words and compute bigrams (ngram(2))
word_vectorizer = TfidfVectorizer(ngram_range=(1,2), analyzer='word', stop_words=stopwords, token_pattern='[a-zA-Z]+')
tokenizer = word_vectorizer.build_tokenizer()
def remove_stop_words(text):
return " ".join( list(filter( lambda x: x not in stopwords, tokenizer(text) )) )
data['product_name_tokens'] = list(map(lambda x: remove_stop_words( x.lower().translate(accents) ), data['product_name']))
data['main_category_tokens'] = list(map(lambda x: remove_stop_words( x.lower().translate(accents) ), data['main_category']))
data['subcategory_tokens'] = list(map(lambda x: remove_stop_words( x.lower().translate(accents) ), data['sub_category']))
data.iloc[[26, 163, 14, 826, 692]]
###Output
_____no_output_____
###Markdown
Let's remove the unnecessary columns
###Code
data_final = data[ [ 'product_name_tokens', 'main_category_tokens', 'subcategory_tokens' ]]
data_final = data_final.rename(columns={
"product_name_tokens": "product_name",
"main_category_tokens": "category",
"subcategory_tokens": "sub_category",
})
data_final.head()
###Output
_____no_output_____
###Markdown
Ok. We finished our 'sample' dataset preparation. Now, let's continue with the dataset that was already cleaned. In real life, you should apply all these transformations to your final dataset.
###Code
disp.Image(base_dir + '/workflow_processo.png')
###Output
_____no_output_____
###Markdown
Now, let's execute the steps above, using Amazon Machine Learning.
###Code
# First, lets upload our dataset to S3
s3.upload_file( base_dir + '/dataset.csv', s3_bucket, 'workshop/AML/dataset.csv' )
# just take a look on that, before continue
pd.read_csv(base_dir + '/dataset.csv', sep=',', encoding='utf-8').head()
###Output
_____no_output_____
###Markdown
Now, let's create the DataSources. Before that, we need to split the data into 70% training and 30% test.
###Code
strategy_train = open( 'split_strategy_training.json', 'r').read()
strategy_test = open( 'split_strategy_test.json', 'r').read()
print( "Training: {}\nTest: {}".format( strategy_train, strategy_test ) )
###Output
_____no_output_____
###Markdown
How does AML know the file format (CSV)? By using the schema below...
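Since the printed schema is not captured in this copy of the notebook, here is a rough illustration of what an Amazon ML data schema for a CSV like this generally looks like. This is a sketch only: the attribute names are guessed from the prepared dataframe and it is not the contents of the real category_schema.json file.

```python
# Illustrative sketch of an Amazon ML data schema (not the real category_schema.json)
example_schema = {
    "version": "1.0",
    "targetAttributeName": "category",   # the label the model should predict
    "dataFormat": "CSV",
    "dataFileContainsHeader": True,
    "attributes": [
        {"attributeName": "product_name", "attributeType": "TEXT"},
        {"attributeName": "category", "attributeType": "CATEGORICAL"},
        {"attributeName": "sub_category", "attributeType": "CATEGORICAL"},
    ],
}
```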
###Code
categorias_schema = open('category_schema.json', 'r').read()
print( "Dataset data format: {}\n".format( categorias_schema) )
###Output
_____no_output_____
###Markdown
Creating the DataSources (train and test) for the Category Model
###Code
train_datasource_name = 'CategoriasTrain' + '_' + strftime("%Y%m%d_%H%M%S", gmtime())
test_datasource_name = 'CategoriasTest' + '_' + strftime("%Y%m%d_%H%M%S", gmtime())
print(train_datasource_name, test_datasource_name)
resp = client.create_data_source_from_s3(
DataSourceId=train_datasource_name,
DataSourceName=train_datasource_name,
DataSpec={
'DataLocationS3': 's3://%s/workshop/AML/dataset.csv' % s3_bucket,
'DataSchema': categorias_schema,
'DataRearrangement': strategy_train
},
ComputeStatistics=True
)
resp = client.create_data_source_from_s3(
DataSourceId=test_datasource_name,
DataSourceName=test_datasource_name,
DataSpec={
'DataLocationS3': 's3://%s/workshop/AML/dataset.csv' % s3_bucket,
'DataSchema': categorias_schema,
'DataRearrangement': strategy_test
},
ComputeStatistics=True
)
waiter = client.get_waiter('data_source_available')
waiter.wait(FilterVariable='Name', EQ=train_datasource_name)
waiter.wait(FilterVariable='Name', EQ=test_datasource_name)
print( "Datasources created successfully!" )
###Output
_____no_output_____
###Markdown
Creating/training the Category model
This is the model recipe. It contains the last transformations applied to your dataset before training starts. Please note the function ngram(product_name, 2): it will create bigrams from the input text, so the model will take as input a term frequency table extracted from the bigrams of product_name.
###Code
cat_recipe = open('category_recipe.json', 'r').read()
print(cat_recipe)
###Output
_____no_output_____
###Markdown
Reference: http://docs.aws.amazon.com/machine-learning/latest/dg/data-transformations-reference.html
The training will start as soon as you execute the command below
###Code
model_name = 'ProdutoCategorias' + '_' + strftime("%Y%m%d_%H%M%S", gmtime())
print(model_name)
resp = client.create_ml_model(
MLModelId=model_name,
MLModelName=model_name,
MLModelType='MULTICLASS',
Parameters={
'sgd.maxPasses': '30',
'sgd.shuffleType': 'auto',
'sgd.l2RegularizationAmount': '1e-6'
},
TrainingDataSourceId=train_datasource_name,
Recipe=cat_recipe
)
waiter = client.get_waiter('ml_model_available')
waiter.wait(FilterVariable='Name', EQ=model_name)
print( "Model created successfully!" )
eval_name = 'ProdutoCategoriasEval' + '_' + strftime("%Y%m%d_%H%M%S", gmtime())
# it will take around 4mins.
resp = client.create_evaluation(
EvaluationId=eval_name,
EvaluationName=eval_name,
MLModelId=model_name,
EvaluationDataSourceId=test_datasource_name
)
waiter = client.get_waiter('evaluation_available')
waiter.wait(FilterVariable='Name', EQ=eval_name)
print( "Model evaluated successfully!" )
###Output
_____no_output_____
###Markdown
It will take a few more minutes; please check the service console if you wish. Checking the model score...
###Code
score = client.get_evaluation( EvaluationId=eval_name )
print("Category model score: {}".format( score['PerformanceMetrics']['Properties']['MulticlassAvgFScore'] ) )
###Output
_____no_output_____
###Markdown
Predicting new Categories with the trained model
###Code
try:
client.create_realtime_endpoint(
MLModelId=model_name
)
print('Please, wait a few seconds while the endpoint is being created. Get some coffee...')
except Exception as e:
print(e)
def predict_category( product_name ):
response = client.predict(
MLModelId=model_name,
Record={
'product_name': product_name
},
PredictEndpoint='https://realtime.machinelearning.us-east-1.amazonaws.com'
)
return response['Prediction']['predictedLabel']
testes = pd.read_csv(base_dir + '/testes.csv', sep=',', encoding='utf-8')
testes.head()
result = None
try:
testes['predicted_category'] = testes['product_name'].apply(predict_category)
result = testes
except Exception as e:
print( "Your realtime endpoint is not ready yet... Please, wait for a few seconds more and try again.")
result
###Output
_____no_output_____
###Markdown
Cleaning Up
###Code
client.delete_realtime_endpoint(MLModelId=model_name)
print("Endpoint deleted")
client.delete_ml_model(MLModelId=model_name)
print("Model deleted")
client.delete_evaluation(EvaluationId=eval_name)
print("Evaluation deleted")
client.delete_data_source(DataSourceId=test_datasource_name)
print("Datasource deleted")
client.delete_data_source(DataSourceId=train_datasource_name)
print("Endpoint deleted")
###Output
_____no_output_____ |
docs/tutorials/cluster.ipynb | ###Markdown
Clustering with Hypertools
The cluster feature performs clustering analysis on the data (an array, dataframe, or list) and returns a list of cluster labels. The default clustering method is K-Means (argument 'KMeans'), with MiniBatchKMeans, AgglomerativeClustering, Birch, FeatureAgglomeration, SpectralClustering and HDBSCAN also supported. Note that, if a list is passed, the arrays will be stacked and clustering will be performed *across* all lists (not within each list).
Import Packages
###Code
import hypertools as hyp
from collections import Counter
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load your data
We will load one of the sample datasets. This dataset consists of 8,124 samples of mushrooms with various text features.
###Code
geo = hyp.load('mushrooms')
mushrooms = geo.get_data()
###Output
_____no_output_____
###Markdown
We can peek at the first few rows of the dataframe using the pandas function `head()`
###Code
mushrooms.head()
###Output
_____no_output_____
###Markdown
Obtain cluster labels
To obtain cluster labels, simply pass the data to `hyp.cluster`. Since we have not specified a desired number of clusters, the default of 3 clusters is used (labels 0, 1, and 2). Additionally, since we have not specified a desired clustering algorithm, K-Means is used by default.
###Code
labels = hyp.cluster(mushrooms)
set(labels)
###Output
_____no_output_____
###Markdown
We can further examine the number of datapoints assigned to each label.
###Code
Counter(labels)
###Output
_____no_output_____
###Markdown
Specify number of cluster labels
You can also specify the number of desired clusters by setting the `n_clusters` argument to an integer number of clusters, as below. We can see that when we pass the int 10 to n_clusters, 10 cluster labels are assigned. Since we have not specified a desired clustering algorithm, K-Means is used by default.
###Code
labels_10 = hyp.cluster(mushrooms, n_clusters = 10)
set(labels_10)
###Output
_____no_output_____
###Markdown
Different clustering models
You may prefer to use a clustering model other than K-Means. To do so, simply pass a string to the cluster argument specifying the desired clustering algorithm. In this case, we specify the clustering model (HDBSCAN); HDBSCAN determines the number of clusters itself, so n_clusters is not passed.
###Code
labels_HDBSCAN = hyp.cluster(mushrooms, cluster='HDBSCAN')
geo = hyp.plot(mushrooms, '.', hue=labels_10, title='K-means clustering')
geo = hyp.plot(mushrooms, '.', hue=labels_HDBSCAN, title='HCBSCAN clustering')
###Output
_____no_output_____ |
notebooks/05_final_eda/01_preprocess_full_comments.ipynb | ###Markdown
combine files
###Code
data = []
for f in Path("/mnt/data2/ptf/final_zo").glob("*.json"):
with open(f) as inp:
for line in tqdm(inp):
d = json.loads(line)
if "comments" in d:
for c in d["comments"]:
data.append({"url": d["url"], **c})
len(data)
df = pd.DataFrame(data)
###Output
_____no_output_____
###Markdown
parse relative dates
###Code
def parse(x):
# date when crawled 11th June 2019
d = datetime.datetime(2019, 6, 11, 12, 0)
idx = x.find('—')
return dateparser.parse(x[idx:], languages=['de'], settings={'RELATIVE_BASE': d})
parsed = Parallel(n_jobs=4)(delayed(parse)(i) for i in tqdm(df['date'].values))
df['date'] = parsed
df.to_pickle('parsed_data.pkl')
df = pd.read_pickle('parsed_data.pkl')
###Output
_____no_output_____
###Markdown
group into chunks
###Code
df = df.sort_values(by='date', ascending=False)
df.shape
df = df.drop_duplicates(subset=['text', 'date'])
df
groups = pd.qcut(df['date'], 10)
groups
df['year'] = df['date'].apply(lambda x: x.year)
df['year'].value_counts()
df['group'] = df['year'].apply(lambda x: x if x > 2010 else 2010)
df['group'].value_counts()
###Output
_____no_output_____
###Markdown
clean, split into sentences
###Code
# def get_sents(texts):
# tokenizer = Tokenizer(split_camel_case=True, token_classes=False, extra_info=False)
# sentence_splitter = SentenceSplitter(is_tuple=False)
# results = []
# for text in texts:
# text = clean(text, lang='de', lower=False)
# tokens = tokenizer.tokenize_paragraph(text)
# sentences = sentence_splitter.split(tokens)
# cleaned = [' '.join(s) for s in sentences]
# results.append(cleaned)
# return results
def chunks(l, n):
"""Yield successive n-sized chunks from l."""
for i in range(0, len(l), n):
yield l[i:i + n]
def combine(li):
for l in li:
for x in l:
yield x
# results = Parallel(n_jobs=4)(delayed(get_sents)(row) for row in tqdm(list(chunks(df['text'], 10000))))
# pickle.dump( results, open( "/mnt/data2/results_sentes.pkl", "wb" ) )
# results = pickle.load( open( "/mnt/data2/results_sentes.pkl", "rb" ) )
# df['sents'] = list(combine(results))
sents_data = []
# for _, row in tqdm(df[['group', 'sents']].iterrows(), total=df.shape[0]):
# for s in row['sents']:
# sents_data.append({'text': s, 'group': row['group']})
len(sents_data)
# df_sents = pd.DataFrame(sents_data)
# df_sents
df
# df_sents.to_pickle('/mnt/data2/results_sents.pkl')
# df_sents = pd.read_pickle('/mnt/data2/results_sents.pkl')
###Output
_____no_output_____
###Markdown
Lemmatize
###Code
df_sents = df
del df
df_sents = df_sents[df_sents['text'].str.len() > 10]
df_sents.shape
final = preprocess(df_sents['text'].values)
df_sents['text'] = final
df_sents['text']
df_sents.to_pickle('/mnt/data2/results_full_comments_lemma.pkl')
len(final)
df_sents['group'] = df_sents['group'].apply(lambda x: x if x % 2 == 0 else x - 1)
! rm /mnt/data2/ptf/groups/zo_bi_*_full.txt
for year, group in df_sents.groupby('group'):
print(year, group.shape)
Path(f'/mnt/data2/ptf/groups/zo_bi_{year}_full.txt').write_text('\n'.join(group['text'].values) + '\n')
###Output
2010 (761148, 5)
2012 (975706, 5)
2014 (1813791, 5)
2016 (4028289, 5)
2018 (4568232, 5)
|
T_Academy sklearn & keras/Lab_03) Regression.ipynb | ###Markdown
2. Multiple Linear Regression
This time we will practice multiple regression analysis, where there are two or more x variables.
We will use every variable in the Boston dataset as the x variables.
1) Load and define the model
###Code
mul_lr = LinearRegression()
###Output
_____no_output_____
###Markdown
2) Train the model (training data)
###Code
mul_lr.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
3) Predict the results (test data)
###Code
y_pred = mul_lr.predict(x_test)
###Output
_____no_output_____
###Markdown
4) Examine the results. R2 is generally used as the evaluation metric for linear regression. The closer the R2 value is to 1, the better the regression model describes the data.
###Code
print('Multiple linear regression, R2 : {:.4f}'.format(r2_score(y_test, y_pred)))
###Output
_____no_output_____
###Markdown
The Boston dataset has 13 x variables, and together with the y variable (the house price) that makes 14 variables. This means the data is represented in 14 dimensions, and since humans can only perceive up to 3 dimensions it is difficult to visualise.
Examining the regression model's coefficients w and intercept b
We can check how much weight is assigned to each variable and what value the intercept takes.
###Code
print('Multiple linear regression, coefficients (w) : {}, intercept (b) : {:.4f}'.format(mul_lr.coef_, mul_lr.intercept_))
###Output
_____no_output_____
###Markdown
Machine Learning Algorithm Based Regression
This time we will look at regression models based on machine learning algorithms. The machine-learning-based regression models supported by sklearn include decision trees, random forests, support vector machines, MLPs, AdaBoost, Gradient Boosting, and more. Among them we will look at the decision tree, support vector machine, and MLP regression models.
1. Decision Tree Regressor
A tree model is built by splitting the tree in the direction that minimises the impurity (entropy) of the data. The details will be explained in the classification lesson.
The decision tree regression model is in sklearn's tree package.
1) Load and define the model
###Code
from sklearn.tree import DecisionTreeRegressor
dt_regr = DecisionTreeRegressor(max_depth=5)
###Output
_____no_output_____
###Markdown
2) Train the model (training data)
###Code
dt_regr.fit(x_train['RM'].values.reshape((-1, 1)), y_train)
###Output
_____no_output_____
###Markdown
3) Predict the results (test data)
###Code
y_pred = dt_regr.predict(x_test['RM'].values.reshape((-1, 1)))
###Output
_____no_output_____
###Markdown
4) Examine the results. R2 is generally used as the evaluation metric for linear regression. The closer the R2 value is to 1, the better the regression model describes the data.
###Code
print('Simple decision tree regression, R2 : {:.4f}'.format(r2_score(y_test, y_pred)))
line_x = np.linspace(np.min(x_test['RM']), np.max(x_test['RM']), 10)
line_y = dt_regr.predict(line_x.reshape((-1, 1)))
plt.scatter(x_test['RM'].values.reshape((-1, 1)), y_test, c = 'black')
plt.plot(line_x, line_y, c = 'red')
plt.legend(['Regression line', 'Test data sample'], loc='upper left')
###Output
_____no_output_____
###Markdown
Try building a decision tree regression model using all 13 variables. (5 minutes)
###Code
# student version
dt_regre = DecisionTreeRegressor(max_depth=5)
dt_regre.fit(x_train, y_train)
y_pred = dt_regre.predict(x_test)
print('Multiple decision tree regression, R2 : {:.4f}'.format(r2_score(y_test, y_pred)))
###Output
_____no_output_____
###Markdown
2. Support Vector Machine Regressor
The basic idea of the support vector machine is to fit the model by maximising the distance (margin) between the decision boundary and the closest data samples. The details will be explained in the classification part.
The support vector machine regression model is in sklearn's svm package.
1) Load and define the model
###Code
from sklearn.svm import SVR
svm_regr = SVR()
###Output
_____no_output_____
###Markdown
2) Train the model (training data)
###Code
svm_regr.fit(x_train['RM'].values.reshape((-1, 1)), y_train)
###Output
_____no_output_____
###Markdown
3) Predict the results (test data)
###Code
y_pred = svm_regr.predict(x_test['RM'].values.reshape((-1, 1)))
###Output
_____no_output_____
###Markdown
4) Examine the results. R2 is generally used as the evaluation metric for linear regression. The closer the R2 value is to 1, the better the regression model describes the data.
###Code
print('Simple support vector machine regression, R2 : {:.4f}'.format(r2_score(y_test, y_pred)))
line_x = np.linspace(np.min(x_test['RM']), np.max(x_test['RM']), 100)
line_y = svm_regr.predict(line_x.reshape((-1, 1)))
plt.scatter(x_test['RM'], y_test, c = 'black')
plt.plot(line_x, line_y, c = 'red')
plt.legend(['Regression line', 'Test data sample'], loc='upper left')
###Output
_____no_output_____
###Markdown
Try building a support vector machine regression model using all 13 variables. (5 minutes)
###Code
# student version
svm_regr = SVR(C=20,)
svm_regr.fit(x_train, y_train)
y_pred = svm_regr.predict(x_test)
print('Multiple support vector machine regression, R2 : {:.4f}'.format(r2_score(y_test, y_pred)))
###Output
_____no_output_____
###Markdown
3. Multi Layer Perceptron Regressor
This is a regression model based on the neural network, the basic model of deep learning. In its basic form, an MLP means a neural network made up of three parts: an input layer, a hidden layer, and an output layer.
How can a neural network solve non-linear problems? Each node in the hidden layer consists of $ wx + b $, just like the basic linear regression model. A neural network, however, gathers several of these linearly separating models together to perform non-linear separation.
In the figure below you can see that four perceptrons, each performing a linear split of the vector space, together form a region that can classify a non-linear space. It may be hard to grasp intuitively, but you can think of cutting a cake along the lines drawn by the four perceptrons and keeping only the middle piece.
The MLP regression model is in sklearn's neural_network package.
1) Load and define the model
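As a small aside before the lab code (an illustrative sketch with synthetic data, not part of the original lab), the snippet below shows how a single hidden layer already lets the model fit a curve that a straight line cannot:

```python
# Illustration: linear regression vs. a one-hidden-layer MLP on a non-linear target
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = x[:, 0] ** 2                          # a simple non-linear target

lin_r2 = LinearRegression().fit(x, y).score(x, y)
mlp_r2 = MLPRegressor(hidden_layer_sizes=(50,), solver='lbfgs',
                      max_iter=5000, random_state=0).fit(x, y).score(x, y)

print(lin_r2)   # close to 0: a straight line cannot follow the curve
print(mlp_r2)   # typically close to 1
```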
###Code
from sklearn.neural_network import MLPRegressor
mlp_regr = MLPRegressor(solver='lbfgs')
###Output
_____no_output_____
###Markdown
2) Train the model (training data)
###Code
mlp_regr.fit(x_train['RM'].values.reshape((-1, 1)), y_train)
###Output
_____no_output_____
###Markdown
3) Predict the results (test data)
###Code
y_pred = mlp_regr.predict(x_test['RM'].values.reshape((-1, 1)))
###Output
_____no_output_____
###Markdown
4) Examine the results. R2 is generally used as the evaluation metric for regression; the closer R2 is to 1, the better the regression model describes the data.
###Code
print('Simple MLP regression, R2 : {:.4f}'.format(r2_score(y_test, y_pred)))
line_x = np.linspace(np.min(x_test['RM']), np.max(x_test['RM']), 10)
line_y = mlp_regr.predict(line_x.reshape((-1, 1)))
plt.scatter(x_test['RM'], y_test, c = 'black', label='Test data sample')
plt.plot(line_x, line_y, c = 'red', label='Regression line')
plt.legend(loc='upper left')
###Output
_____no_output_____
###Markdown
Exercise: try an MLP regression model using all 13 variables. (5 minutes)
###Code
mlp_regr = MLPRegressor(hidden_layer_sizes=(50, ), activation='tanh', solver ='sgd', random_state=2019)
mlp_regr.fit(x_train, y_train)
y_pred = mlp_regr.predict(x_test)
print('Multiple MLP regression, R2 : {:.4f}'.format(r2_score(y_test, y_pred)))
###Output
_____no_output_____
###Markdown
Lab_03 Regression Context Linear Regression+ Simple Linear Regression+ Multiple Linear Regression Machine Learning Algorithm Based Regression+ Decision Tree Regression+ RandomForest Regression+ MLP Regression Evaluation+ R2+ Adjusted R2
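For reference, adjusted R2 (listed above but not computed later in this lab) penalizes R2 for the number of predictors $p$ given $n$ samples: $$ R^2_{adj} = 1 - (1 - R^2)\frac{n - 1}{n - p - 1} $$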
###Code
import os
from os.path import join
import copy
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import sklearn
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
boston = load_boston()
###Output
_____no_output_____
###Markdown
For this regression lab we load the Boston housing data that ships with sklearn. The Boston dataset contains 13 variables such as the crime rate, proximity to the river and the tax rate, and uses the house price as the label.
###Code
print(boston.DESCR)
data = boston.data
label = boston.target
columns = boston.feature_names
data = pd.DataFrame(data, columns = columns)
data.head()
data.shape
data.describe()
data.info()
###Output
_____no_output_____
###Markdown
Linear Regression. Linear regression is a regression technique that models a linear relationship between a dependent variable and one or more independent variables. Since the terms dependent and independent variable are easy to mix up, we simply write them as y and x in the formulas below. $$ y = wx + b$$$$ y = w_0x_0 + w_1x_1 + w_2x_2 + .... w_nx_n + b$$$$ w : coefficient\ (weight) $$$$ b : intercept\ (bias) $$ Put simply, linear regression draws the one line that best describes the data in the space where the data points live. The cost function of linear regression can be written as $$ Cost_{lr} = \sum_i{(y_i - \hat y_i)^2}$$$$ \hat y_i = b + wx_i $$ The goal is to find the coefficients w that minimize the sum of squared residuals between the true values $y_i$ and the model outputs $ \hat y $; this is the least-squares method. Depending on whether the output y is one value or several, the model is called univariate or multivariate; in this lesson we assume a single output (univariate). Likewise, depending on whether there is one input x or several, the model is called simple or multiple; in this lab we cover both simple and multiple linear regression. The four basic assumptions of linear regression: linear regression relies on four assumptions. We do not cover the theory in this class, so please look into them later; a link to a page that explains the four assumptions well is included in the reference list at the bottom. 1. Linearity 2. Independence 3. Homoscedasticity 4. Normality 1. Simple Linear Regression. As the first case of linear regression, we practice simple regression with a single x. We use the 'RM' variable as x and the house price as y. LinearRegression is in Sklearn's linear_model package. * From the regression part onward we split the data into train and test sets, using the train_test_split function from sklearn's model_selection package.
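Before turning to Sklearn, note that the least-squares solution can also be computed in closed form. A minimal numpy sketch (assuming `data` and `label` loaded above, using only the 'RM' feature):
###Code
import numpy as np
X = np.c_[np.ones(len(data)), data['RM'].values]   # prepend a column of ones for the intercept b
w = np.linalg.lstsq(X, label, rcond=None)[0]       # minimizes the sum of squared residuals
print('b = {:.4f}, w = {:.4f}'.format(w[0], w[1]))
###Output
_____no_output_____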
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(data, label, test_size=0.2, random_state=2019)
###Output
_____no_output_____
###Markdown
1) Load and define the model
###Code
from sklearn.linear_model import LinearRegression
sim_lr = LinearRegression()
###Output
_____no_output_____
###Markdown
2) Train the model (training data)
###Code
sim_lr.fit(x_train['RM'].values.reshape((-1, 1)), y_train)
###Output
_____no_output_____
###Markdown
3) Predict on the test data
###Code
y_pred = sim_lr.predict(x_test['RM'].values.reshape((-1, 1)))
###Output
_____no_output_____
###Markdown
4) Examine the results. R2 is generally used as the evaluation metric for regression; the closer R2 is to 1, the better the regression model describes the data.
###Code
from sklearn.metrics import r2_score
print('Simple linear regression, R2 : {:.4f}'.format(r2_score(y_test, y_pred)))
line_x = np.linspace(np.min(x_test['RM']), np.max(x_test['RM']), 10)
line_y = sim_lr.predict(line_x.reshape((-1, 1)))
plt.scatter(x_test['RM'], y_test, s=10, c='black', label='Test data sample')
plt.plot(line_x, line_y, c = 'red', label='Regression line')
plt.legend(loc='upper left')
###Output
_____no_output_____
###Markdown
Examining the regression model's coefficient w and intercept b: we can see how much weight is assigned to each variable and what value the intercept takes.
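As a quick sanity check (a sketch assuming `sim_lr` and `x_test` from the cells above), a single prediction can be rebuilt by hand from the fitted coefficient and intercept:
###Code
rm_value = x_test['RM'].iloc[0]
manual_pred = sim_lr.coef_[0] * rm_value + sim_lr.intercept_
print('by hand : {:.4f}, model : {:.4f}'.format(manual_pred, sim_lr.predict([[rm_value]])[0]))
###Output
_____no_output_____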
###Code
print('Simple linear regression, coefficient (w) : {:.4f}, intercept (b) : {:.4f}'.format(sim_lr.coef_[0], sim_lr.intercept_))
###Output
_____no_output_____ |
notebooks/LDA_model-religion.ipynb | ###Markdown
Import all the required packages
###Code
## basic packages
import numpy as np
import re
import csv
import time
import pandas as pd
from itertools import product
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
##gensim
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
##spacy and nltk
import spacy
from nltk.corpus import stopwords
from spacy.lang.en.stop_words import STOP_WORDS
##vis
import pyLDAvis
import pyLDAvis.gensim_models as gensimvis
pyLDAvis.enable_notebook()
import warnings
warnings.filterwarnings("ignore",category=DeprecationWarning)
###Output
_____no_output_____
###Markdown
load the metadata of podcast transcripts
###Code
global df, show_descriptions
meta_data = []
with open("../data/metadata.tsv") as csvfile:
csvreader = csv.reader(csvfile,delimiter="\t")
for row in csvreader:
meta_data.append(row)
df = pd.DataFrame(meta_data[1:],columns=meta_data[0])
show_filename_prefixes = df.show_filename_prefix
episode_filename_prefixes = df.episode_filename_prefix
shows = df.groupby(by=['show_name'])
show_names = shows.apply(lambda x: x.show_name.unique()[0])
show_descriptions_aggregated = shows.apply(lambda x: x.show_description.unique()[0])
episode_descriptions_aggregated = shows.apply(lambda x: list(x.episode_description))
descriptions_aggregated = {}
for k,text in enumerate(episode_descriptions_aggregated):
descriptions_aggregated[show_names[k]] = [show_descriptions_aggregated[k]]+text
genres_topics = ["comedy","news","crime","science","economics","politics","education",\
"sports","lifestyle","health","wellbeing","religion","faith","music",\
"art","fashion","literature","humanities","drama","fitness","drama",\
"fantasy","scifi","gameshow","news quiz","games","game","mental",\
"humor","research","technology","society","social","culture","lifestyle",\
"songs","cooking","culinary","food","travel","films","film","movies","tv",\
"climate","space","planet","digital","artificial intelligence", "ai",\
"cars","car","nutrition","wellness","family","history","geography","physics",\
"mathematics","math","chemistry","biology","documentary","commentary","nfl",\
"mls","nba","mlb","stocks","stock","market","wall","street","wallstreet","business",\
"reality","shows","investing","social media","biography","biographies",\
"data science","medicine","media","books","book","europe","asia","canada",\
"south america","north america","america","usa","netflix","adventure","pets","dogs",\
"cats","dog","cat","nintendo","xbox","playstation","ps4","ps5","theatre","mars"\
"tennis","australia","conspiracy","war","epidemic","pandemic","climate","change"\
"astrology","novel","church","christ","romance","english","kids","astronomy"\
"design","entrepreneurship","marketing","digital","christian","christianity","boardgames",\
"boardgame","videogames","videogame","twitch","currency","cryptocurrency","federal","bank",\
"blockchain","bitcoin","nyse","nft","import","export","capital","money","exchange","boxing",\
"mma","wrestling","excercise","excercises","gym","bodybuilding","body-building","yoga",\
"stamina","strength","calories","meditation","physical","healthy","pope","bible","catholic",\
"catholicism","testament"]
formats = ["monologue","interview","storytelling","repurposed",\
"bite-sized","co-host conversation","debate","narrative",\
"scripted","improvised"]
podcasts_genres_topics = {}
for k,show in enumerate(show_names):
keywords = show.lower().split(" ")
for word in keywords:
if word in genres_topics:
if (k,show) in podcasts_genres_topics:
if word not in podcasts_genres_topics[(k,show)]:
podcasts_genres_topics[(k,show)].append(word)
else:
podcasts_genres_topics[(k,show)] = [word]
podcasts = [item[1] for item in podcasts_genres_topics.keys()]
nlp = spacy.load("en_core_web_sm")
stops_nltk = set(stopwords.words("english"))
stops_spacy = STOP_WORDS.union({'ll', 've', 'pron','okay','oh','like','know','yea','yep','yes','no',\
"like","oh","yeah","okay","wow","podcast","rating","ratings","not",\
"support","anchor","podcasts","episode","http","https","5star","reviews",\
"review","instagram","tiktok","amazon","apple","twitter","goole",\
"facebook","send","voice message","message","voice","subscribe","follow",\
"sponsor","links","easiest","way","fuck","fucking","talk","discuss",\
"world","time","want","join","learn","week","things","stuff","find",\
"enjoy","welcome","share","talk","talking","people","gmail","help","today",\
"listen","best","stories","story","hope","tips","great","journey",\
"topics","email","questions","question","going","life","good","friends",\
"friend","guys","discussing","live","work","student","students","need",\
"hear","think","change","free","better","little","fucking","fuck","shit",\
"bitch","sex","easiest","way","currently","follow","follows","needs",\
"grow","stay","tuned","walk","understand","tell","tells","ask","helps",\
"feel","feels","look","looks","meet","relate","soon","quick","dude","girl",\
"girls","guy","literally","spotify","google","totally","played","young",\
"begin","began","create","month","year","date","day","terms","lose","list",\
"bought","brings","bring","buy","percent","rate","increase","words","value",\
"search","awesome","followers","finn","jake","mark","america","american",\
"speak","funny","hours","hour","honestly","states","united","franklin",\
"patrick","john","build","dave","excited","process","processes","based",\
"focus","star","mary","chris","taylor","gotta","liked","hair","adam","chat",\
"named","died","born","country","mother","father","children","tools",\
"countries","jordan","tommy","listeners","water","jason","lauren","alex",\
"laguna","jessica","kristen","examples","example","heidi","stephen","utiful",\
"everybody","sorry","came","come","meet","whoa","whoaa","yay","whoaw",\
"somebody","anybody","cool","watch","nice","shall"})
stops = stops_nltk.union(stops_spacy)
health_category = [(key,val) for key,val in podcasts_genres_topics.items() if ("fitness" in val)\
or ("health" in val)\
or ("diet" in val)\
or ("nutrition" in val)\
or ("healthy" in val)\
or ("meditation" in val)\
or ("mental" in val)\
or ("physical" in val)\
or ("excercise" in val)\
or ("calories" in val)\
or ("gym" in val)\
or ("bodybuilding" in val)\
or ("body-building" in val)\
or ("stamina" in val)\
or ("strength" in val)\
or ("excercise" in val)\
or ("yoga" in val)]
sports_category = [(key,val) for key,val in podcasts_genres_topics.items() if ("sports" in val)\
or ("games" in val)\
or ("game" in val)\
or ("videogame" in val)\
or ("videogames" in val)\
or ("boardgame" in val)\
or ("boardgames" in val)\
or ("xbox" in val)\
or ("nintendo" in val)\
or ("twitch" in val)\
or ("ps4" in val)\
or ("ps5" in val)\
or ("playstation" in val)\
or ("basketball" in val)\
or ("football" in val)\
or ("soccer" in val)\
or ("baseball" in val)\
or ("boxing" in val)\
or ("wrestling" in val)\
or ("mma" in val)\
or ("tennis" in val)]
religion_category = [(key,val) for key,val in podcasts_genres_topics.items() if ("religion" in val)\
or ("faith" in val)\
or ("church" in val)\
or ("christ" in val)\
or ("christian" in val)\
or ("christianity" in val)\
or ("bible" in val)\
or ("testament" in val)\
or ("pope" in val)\
or ("catholic" in val)\
or ("catholicism" in val)]
invest_category = [(key,val) for key,val in podcasts_genres_topics.items() if ("market" in val)\
or ("business" in val)\
or ("invest" in val)\
or ("stocks" in val)\
or ("stock" in val)\
or ("wallstreet" in val)\
or ("investing" in val)\
or ("investment" in val)\
or ("exchange" in val)\
or ("nyse" in val)\
or ("capital" in val)\
or ("money" in val)\
or ("currency" in val)\
or ("cryptocurrency" in val)\
or ("blockchain" in val)\
or ("bitcoin" in val)\
or ("federal" in val)\
or ("bank" in val)\
or ("nft" in val)]
number_of_topics = [5,6,7,8,9,10,15]
df_parameters = list(product([2,3,4,5,6,7,8,9,10],[0.3,0.4,0.5,0.6,0.7,0.8,0.9]))
hyperparams = list(product(number_of_topics,df_parameters))
sports_cs = []
with open('/home1/sgmark/capstone-project/model/coherence_scores_religion_category.csv','r') as f:
reader = csv.reader(f)
for row in reader:
sports_cs.append([float(x) for x in row])
best_hp_setting = hyperparams[np.argmax([x[5] for x in sports_cs])]
best_hp_setting
###Output
_____no_output_____
###Markdown
The individual transcript location
###Code
def file_location(show,episode):
    # assumes a `local_path` variable pointing at the dataset root directory
    search_string = local_path + "/spotify-podcasts-2020" + "/podcasts-transcripts" \
+ "/" + show[0] \
+ "/" + show[1] \
+ "/" + "show_" + show \
+ "/"
return search_string
###Output
_____no_output_____
###Markdown
load the transcripts
###Code
transcripts = {}
for podcast,genre in religion_category:
for i in shows.get_group(podcast[1])[['show_filename_prefix','episode_filename_prefix']].index:
show,episode = shows.get_group(podcast[1])[['show_filename_prefix','episode_filename_prefix']].loc[i]
s = show.split("_")[1]
try:
with open('../podcast_transcripts/'+s[0]+'/'+s[1]+'/'+show+'/'+episode+'.txt','r') as f:
transcripts[(show,episode)] = f.readlines()
f.close()
except Exception as e:
pass
keys = list(transcripts.keys())
# Cleaning & remove urls and links
def remove_stops(text,stops):
final = []
for word in text:
if (word not in stops) and (len(word)>3) and (not word.endswith('ing')) and (not word.endswith('ly')):
final.append(word)
return final
def clean_text(docs):
final = []
for doc in docs:
clean_doc = remove_stops(doc, stops)
final.extend(clean_doc)
return final
def lemmatization(text_data):
nlp = spacy.load("en_core_web_sm")
texts = []
for text in text_data:
doc = nlp(text)
lem_text = []
for token in doc:
if (token.pos_=="VERB") or (token.pos_=="ADV"):
pass
else:
lem_text.append(token.lemma_)
texts.append(lem_text)
return texts
# # lemmatization -- do lemmatization for just the verbs
# def get_lemmatized(text):
# lemmatized = []
# for phrase in text:
# sentence=''
# for word in nlp(phrase):
# if word.pos_ == "VERB":
# #or word.pos_ == "ADJ" or word.pos_ == "ADV":
# sentence += ' ' + str(word.lemma_)
# elif str(word)!='':
# sentence += ' ' + str(word)
# lemmatized.append(sentence.strip())
# return lemmatized
# def get_named_entities(text):
# return nlp(text.lower()).ents
# def get_noun_chunks(text):
# non_stop_noun_chunks = []
# stops = stopwords.words("english")
# for word in nlp(text.lower()).noun_chunks:
# if str(word) not in stops:
# non_stop_noun_chunks.append(word)
# return non_stop_noun_chunks
###Output
_____no_output_____
###Markdown
tokenize/convert text into words
###Code
def normalize_docs(text_data):
final_texts = []
for text in text_data:
new_text = gensim.utils.simple_preprocess(text,deacc=True)
final_texts.append(new_text)
return final_texts
docs = []
for text in transcripts.values():
docs.append(' '.join(clean_text(normalize_docs(text))))
texts = lemmatization(docs)
texts = [remove_stops(text,stops) for text in texts]
###Output
_____no_output_____
###Markdown
Using bigrams
###Code
from gensim.models.phrases import Phrases
bigram = Phrases(texts, min_count=5)
for i in range(len(texts)):
for token in bigram[texts[i]]:
if '_' in token:
texts[i].append(token)
###Output
_____no_output_____
###Markdown
Construct a corpus of words as a bag of words
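For reference, each entry of the corpus built below is a sparse bag-of-words: a list of (token_id, count) pairs. A toy illustration (hypothetical tokens, not the podcast data):
###Code
toy_dict = corpora.Dictionary([['faith', 'church', 'faith', 'hope']])
print(toy_dict.token2id)                               # token -> integer id
print(toy_dict.doc2bow(['faith', 'church', 'faith']))  # (id, count) pairs for one document
###Output
_____no_output_____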
###Code
dictionary = corpora.Dictionary(texts)
dictionary.filter_extremes(no_below=best_hp_setting[1][0],no_above=best_hp_setting[1][1])
corpus = [dictionary.doc2bow(text) for text in texts]
# from itertools import product
# number_of_topics = [5,6,7,8,9,10,15]
# df_parameters = list(product([2,3,4,5,6,7,8,9,10],[0.3,0.4,0.5,0.6,0.7,0.8,0.9]))
# coherence_scores_umass = np.zeros((len(number_of_topics),len(df_parameters)))
# coherence_scores_uci = np.zeros((len(number_of_topics),len(df_parameters)))
# coherence_scores_npmi = np.zeros((len(number_of_topics),len(df_parameters)))
# j = 0
# for num in number_of_topics:
# i = 0
# for n,m in df_parameters:
# dictionary = corpora.Dictionary(texts)
# dictionary.filter_extremes(no_below=n,no_above=m)
# corpus = [dictionary.doc2bow(text) for text in texts]
# num_topics = num
# chunksize = 200
# passes = 20
# iterations = 500
# eval_every = None
# lda_model = gensim.models.ldamodel.LdaModel(corpus,
# id2word=dictionary,
# num_topics=num_topics,
# chunksize=chunksize,
# passes=passes,
# iterations=iterations,
# alpha='auto',
# eta='auto',
# random_state = 123,
# eval_every=eval_every)
# cm = CoherenceModel(lda_model, texts=texts,corpus=corpus, coherence= 'c_uci')
# coherence_scores_uci[j,i] = cm.get_coherence()
# cm = CoherenceModel(lda_model, texts=texts,corpus=corpus, coherence= 'c_npmi')
# coherence_scores_npmi[j,i] = cm.get_coherence()
# cm = CoherenceModel(lda_model, corpus=corpus, coherence= 'u_mass')
# coherence_scores_umass[j,i] = cm.get_coherence()
# with open("coherence_scores_religion_category.csv",'a') as f:
# writer = csv.writer(f)
# writer.writerow([num,n,m,coherence_scores_uci[j,i],coherence_scores_npmi[j,i],\
# coherence_scores_umass[j,i]])
# i += 1
# print(i)
# j += 1
# print(j)
%%time
import logging
logging.basicConfig(filename='religion_topics.log', encoding='utf-8',format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
num_topics = best_hp_setting[0]
chunksize = 200
passes = 50
iterations = 500
eval_every = None
lda_model = gensim.models.ldamodel.LdaModel(corpus,
id2word=dictionary,
num_topics=num_topics,
chunksize=chunksize,
passes=passes,
iterations=iterations,
alpha='auto',
eta='auto',
random_state=123,
eval_every=eval_every)
top_topics = lda_model.top_topics(corpus) #, num_words=20)
# Average topic coherence is the sum of topic coherences of all topics, divided by the number of topics.
avg_topic_coherence = sum([t[1] for t in top_topics])/num_topics
print('Average topic coherence: %.4f.' % avg_topic_coherence)
cm = CoherenceModel(lda_model, texts = texts, corpus=corpus, coherence='c_npmi')
coherence = cm.get_coherence()
print(coherence)
for x in cm.get_coherence_per_topic(): print(x)
###Output
0.02942897361518566
0.010920267325592833
0.017413806749319347
0.030864923889938962
0.017170672600274106
0.07077519751080304
###Markdown
Visualizing data
###Code
vis = pyLDAvis.gensim_models.prepare(lda_model,corpus,dictionary,mds="mmds",R=20)
pyLDAvis.save_html(vis,'religion_umass.html')
vis
# from pprint import pprint
# pprint(top_topics)
import pickle
pickle.dump(lda_model,open('../model/religion_episodes_lda_model_umass.pkl','wb'))
pickle.dump(dictionary,open('../model/religion_episodes_dictionary_umass.pkl','wb'))
pickle.dump(corpus,open('../model/religion_episodes_corpus_umass.pkl','wb'))
# pickle.dump(texts,open('../model/religion_episodes_texts.pkl','wb'))
# import pickle
# file = open('../model/religion_episodes_lda_model.pkl','rb')
# lda_model = pickle.load(file)
# file.close()
# file = open('../model/religion_episodes_corpus.pkl','rb')
# corpus = pickle.load(file)
# file.close()
# file = open('../model/religion_episodes_dictionary.pkl','rb')
# dictionary = pickle.load(file)
# file.close()
def get_main_topic_df(model, bow, texts):
topic_list = []
percent_list = []
keyword_list = []
podcast_list = []
episode_list = []
duration_list = []
publisher_list = []
show_prefix_list = []
episode_prefix_list = []
descriptions_list = []
for key,wc in zip(keys,bow):
show_prefix_list.append(key[0])
episode_prefix_list.append(key[1])
podcast_list.append(df[(df['show_filename_prefix'] == key[0])&(df['episode_filename_prefix'] == key[1])].show_name.iloc[0])
episode_list.append(df[(df['show_filename_prefix'] == key[0])&(df['episode_filename_prefix'] == key[1])].episode_name.iloc[0])
duration_list.append(df[(df['show_filename_prefix'] == key[0])&(df['episode_filename_prefix'] == key[1])].duration.iloc[0])
publisher_list.append(df[(df['show_filename_prefix'] == key[0])&(df['episode_filename_prefix'] == key[1])].publisher.iloc[0])
descriptions_list.append(df[(df['show_filename_prefix'] == key[0])&(df['episode_filename_prefix'] == key[1])].episode_description.iloc[0])
topic, percent = sorted(model.get_document_topics(wc), key=lambda x: x[1], reverse=True)[0]
topic_list.append(topic)
percent_list.append(round(percent, 3))
keyword_list.append(' '.join(sorted([x[0] for x in model.show_topic(topic)])))
result_df = pd.concat([pd.Series(show_prefix_list, name='show_filename_prefix'),
pd.Series(episode_prefix_list, name='episode_filename_prefix'),
pd.Series(podcast_list, name='Podcast_name'),
pd.Series(episode_list, name='Episode_name'),
pd.Series(topic_list, name='Dominant_topic'),
pd.Series(percent_list, name='Percent'),
pd.Series(texts, name='Processed_text'),
pd.Series(keyword_list, name='Keywords'),
pd.Series(duration_list, name='Duration of the episode'),
pd.Series(publisher_list, name='Publisher of the show'),
pd.Series(descriptions_list, name='Description of the episode')], axis=1)
return result_df
main_topic_df = get_main_topic_df(lda_model,corpus,texts)
main_topic_df.to_pickle('religion_topics_main_df_umass.pkl')
main_topic_df.head(5)
plt.figure(figsize=(10,8))
grouped_topics = main_topic_df.groupby('Dominant_topic')
grouped_topics.count()['Podcast_name'].\
plot.bar(rot=0).\
set(title='Dominant Topic Frequency in the {} podcast episodes'.format(len(texts)),
ylabel='Topic frequency');
representatives = pd.DataFrame()
for k in grouped_topics.groups.keys():
representatives = pd.concat([representatives,
grouped_topics.get_group(k).sort_values(['Percent'], ascending=False).head(1)])
representatives
# print('Document: {} Dominant topic: {}\n'.format(representatives.index[2],
# representatives.loc[representatives.index[2]]['Dominant_topic']))
# print([sentence.strip() for sentence in transcripts[keys[representatives.index[2]]]])
num_topics = best_hp_setting[0]
def word_count_by_topic(topic=0):
d_lens = [len(d) for d in grouped_topics.get_group(topic)['Processed_text']]
plt.figure(figsize=(10,8))
plt.hist(d_lens)
large = plt.gca().get_ylim()[1]
d_mean = round(np.mean(d_lens), 1)
d_median = np.median(d_lens)
plt.plot([d_mean, d_mean], [0,large], label='Mean = {}'.format(d_mean))
plt.plot([d_median, d_median], [0,large], label='Median = {}'.format(d_median))
plt.legend()
plt.xlabel('Document word count',fontsize=16)
plt.ylabel('Number of documents',fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
from ipywidgets import interact, IntSlider
slider = IntSlider(min=0, max=num_topics-1, step=1, value=0, description='Topic')
interact(word_count_by_topic, topic=slider);
lda_top_words_index = set()
for i in range(lda_model.num_topics):
lda_top_words_index = lda_top_words_index.union([k for (k,v) in lda_model.get_topic_terms(i)])
print('Indices of top words: \n{}\n'.format(lda_top_words_index))
words_we_care_about = [{dictionary[tup[0]]: tup[1] for tup in lst if tup[0] in list(lda_top_words_index)}
for lst in corpus]
lda_top_words_df = pd.DataFrame(words_we_care_about).fillna(0).astype(int).sort_index(axis=1)
lda_top_words_df['Cluster'] = main_topic_df['Dominant_topic']
clusterwise_words_dist = lda_top_words_df.groupby('Cluster').get_group(0)
clusterwise_words_dist.sum()[:-1].transpose().\
plot.bar(figsize=(30, 8), width=0.7).\
set(ylabel='Word frequency',
title='Word Frequencies by Topic, Combining the Top {} Words in Each Topic'.format(len(lda_top_words_index)));
word_totals = {k:{y[1]:y[0] for y in x[0]} for k,x in enumerate(top_topics)}
import matplotlib.pyplot as plt
from ipywidgets import interact, IntSlider
from wordcloud import WordCloud
def show_wordcloud(topic=0):
cloud = WordCloud(background_color='white', colormap='viridis')
cloud.generate_from_frequencies(word_totals[topic])
plt.gca().imshow(cloud)
plt.axis('off')
plt.tight_layout()
slider = IntSlider(min=0, max=best_hp_setting[0]-1, step=1, value=0, description='Topic')
interact(show_wordcloud, topic=slider);
###Output
_____no_output_____ |
Jewelry Appraisal Project.ipynb | ###Markdown
Ring Appraisal Project. Data sources: cartier_catalog.csv: https://www.kaggle.com/marcelopesse/cartier-jewelry-catalog (data set created April 2020); jewelry.csv: https://www.kaggle.com/victormegir/jewelry-from-haritidisgr (data set created March 2021)
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import random as rand
import datetime as dt
import re
df_jewelry = pd.read_csv('jewelry.csv')
df_cartier = pd.read_csv('cartier_catalog.csv', usecols = ['categorie', 'title', 'price', 'tags', 'description'])
df_jewelry = df_jewelry.drop_duplicates().reset_index(drop=True)
df_cartier = df_cartier.drop_duplicates().reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Converting jewelry.csv price column to USD using 3/15/2021 exchange rate because of the scrape date. Can be connected to an exchange rate API for live data.
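If live conversion is wanted instead of the hard-coded rate, something along these lines could be used; the endpoint URL and the response shape below are placeholders/assumptions, not a specific provider's documented API.
###Code
import requests
def get_eur_usd_rate(url='https://api.exchangerate.host/latest'):  # placeholder endpoint: swap in your provider
    try:
        payload = requests.get(url, params={'base': 'EUR', 'symbols': 'USD'}, timeout=5).json()
        return payload['rates']['USD']  # assumed response shape
    except Exception:
        return 1.19                     # fall back to the 3/15/2021 rate used below
###Output
_____no_output_____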
###Code
df_jewelry['price_usd'] = round(df_jewelry['price'] * 1.19,2)
df_jewelry.head()
###Output
_____no_output_____
###Markdown
Creating brand column in Cartier dataset
###Code
df_cartier['brand'] = 'Cartier'
df_cartier.head()
###Output
_____no_output_____
###Markdown
How df_jewelry columns map to df_cartier: name = title, price_usd = price, jewel_type = categorie (kind of), other description columns = description (kind of)
###Code
df_jewelry['color'].unique()
df_cartier['tags'].unique()
df_jewelry['color'] = df_jewelry['color'].str.replace('Gold','Yellow')
df_cartier['tags'] = df_cartier['tags'].str.replace('yellow gold','Yellow')
df_cartier['tags'] = df_cartier['tags'].str.replace('white gold','White')
df_cartier['tags'] = df_cartier['tags'].str.replace('pink gold','Rose')
df_jewelry.fillna(np.nan,inplace=True)
df_cartier.fillna(np.nan,inplace=True)
df_jewelry.replace('nan',np.nan,inplace=True)
df_cartier.replace('nan',np.nan,inplace=True)
###Output
_____no_output_____
###Markdown
Determining price inflation with 10 random rings from the Cartier dataset
###Code
index_set = set()
while True:
ind = rand.choice(df_cartier[df_cartier['categorie'] == 'rings'].index)
index_set.add(ind)
if len(index_set) == 10:
break
index_list = list(index_set)
# Prices of 10 random cartier items from the dataset
start = [1110,20700,6000,2020,13600,1040,8800,1130,1380,171000]
# start = df_cartier.loc[index_list]['price'].values
# Prices on Cartier website. Taken in June 2021
end = [1220,22800,6600,2220,12800,1140,9650,1210,1520,192000]
dif = list(map(lambda l: (l[1] - l[0]) / l[0], zip(start,end)))
average_price_increase = sum(dif)/len(dif)
average_price_increase
d_after = dt.datetime.strptime('062021', "%m%Y").date() # date of my Cartier webscrape
d_start = dt.datetime.strptime('042020', "%m%Y").date() # date of dataset Cartier webscrape
num_of_months = (d_after.year - d_start.year) * 12 + d_after.month - d_start.month
f'The number of months between my webscrape and the original webscrape: {num_of_months}'
# Determining the monthly price inflation rate using the compound interest formula
P = 1
n = 12 # months
A = average_price_increase + 1
t = num_of_months/12
r = round(n*((A/P)**(1/(t*n)) - 1),4)
print(f'The rate r = {r*100:.2f} % compounded monthly')
# This number will be used in the API for determining the current price
###Output
The rate r = 6.84 % compounded monthly
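###Markdown
The rate above comes from solving the compound-interest relation $A = P(1 + r/n)^{nt}$ for $r$, i.e. $r = n((A/P)^{1/(nt)} - 1)$, with $P = 1$, $A$ the average price ratio measured above, $n = 12$ compounding periods per year and $t$ the elapsed time in years.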
###Markdown
Checking jewelry types and keeping only rings:
###Code
df_jewelry['jewel_type'].unique()
df_cartier['categorie'].unique()
df_ring_j = df_jewelry[df_jewelry['jewel_type'].isin(['Wedding rings','Ring'])]
df_ring_j.head()
df_ring_c = df_cartier[df_cartier['categorie'] == 'rings']
df_ring_c.head()
df_ring_c.loc[df_ring_c['description'].str.contains('wedding',case=False,na=False) |
df_ring_c['title'].str.contains('wedding',case=False,na=False)]['description'].unique()
df_ring_c.loc[df_ring_c['description'].str.contains('wedding',case=False,na=False) |
df_ring_c['title'].str.contains('wedding',case=False,na=False), 'jewel_type'] = 'Wedding rings'
df_ring_c.loc[df_ring_c['jewel_type'].isna(), 'jewel_type'] = 'Ring'
df_ring_c.head()
###Output
_____no_output_____
###Markdown
Feature Creation and Homogenization Cartier Dataset
###Code
df_ring_c.loc[(df_ring_c['tags'].str.contains('White',na=False)),'color'] = 'White'
df_ring_c.loc[(df_ring_c['tags'].str.contains('Rose',na=False)),'color'] = 'Rose'
df_ring_c.loc[(df_ring_c['tags'].str.contains('Yellow',na=False)),'color'] = 'Yellow'
df_ring_c.loc[(df_ring_c['tags'].str.contains('White',na=False))
& (df_ring_c['tags'].str.contains('Rose',na=False)),'color'] = 'White & Rose'
df_ring_c.loc[(df_ring_c['tags'].str.contains('White',na=False))
& (df_ring_c['tags'].str.contains('Yellow',na=False)),'color'] = 'White & Yellow'
df_ring_c.loc[(df_ring_c['tags'].str.contains('Rose',na=False))
& (df_ring_c['tags'].str.contains('Yellow',na=False)),'color'] = 'Yellow & Rose'
df_ring_c.loc[(df_ring_c['tags'].str.contains('White',na=False))
& (df_ring_c['tags'].str.contains('Rose',na=False))
& (df_ring_c['tags'].str.contains('Yellow',na=False)),'color'] = 'White & Yellow & Rose'
df_ring_c_gold = df_ring_c[df_ring_c['color'].isin(['Yellow', 'Rose', 'White', 'Yellow & Rose', 'White & Yellow',
'White & Rose', 'White & Yellow & Rose'])
& ~df_ring_c['description'].str.contains('‰ platinum')]
c_diamond_carats = df_ring_c_gold['description'].str\
.extractall(r"diamonds totaling (\d+\.\d+) carat"
,flags=re.IGNORECASE)\
.astype(float).droplevel('match')
total_c_diamond_carats = c_diamond_carats.groupby(c_diamond_carats.index).agg({0: sum})
df_ring_c_gold['total diamond carats'] = total_c_diamond_carats
gold_carats_df = df_ring_c_gold['description'].str\
.extractall(r"(\d+)K",
flags=re.IGNORECASE)\
.astype(int).droplevel('match')
final_gold_carats_df = gold_carats_df.groupby(gold_carats_df.index).agg({0: 'first'})
df_ring_c_gold['gold carats'] = final_gold_carats_df
df_ring_c_gold = df_ring_c_gold[~df_ring_c_gold['gold carats'].isna()]
c_jewel_weight = df_ring_c_gold['description'].str\
.extractall(r"totaling (\d+\.\d+) carat"
,flags=re.IGNORECASE)\
.astype(float).droplevel('match')
total_c_jewel_weight = c_jewel_weight.groupby(c_jewel_weight.index).agg({0: sum})
df_ring_c_gold['jewel_weight'] = total_c_jewel_weight
df_ring_c_gold.head()
first = df_ring_c_gold['description'].str.split(',', n=1, expand=True)[1]
second = first.str.split('gold.', n=1, expand=True)[1]
df_ring_c_gold['rock_details'] = second.str.split('Width', n=1, expand=True)[0]
bad_set_c = ['wide from size']
bad_dict_c = {
'brilliant-diamonds wide also available as midi ring':'brilliant-diamonds',
'brilliant-diamonds all-gold ring paved ring':'brilliant-diamonds',
'black-ceramic tto ring ceramic ring':'black-ceramic',
'chrysoprases lapis lazulis brilliant-diamonds': 'chrysoprases lapis-lazulis brilliant-diamonds'
}
def replace_func_c(x):
val = x
for num in list(map(str,list(range(10)))):
val = val.replace(num,'')
val = val.lower()
val = val.replace('.','')
val = val.replace(',','')
val = val.replace('totaling','')
val = val.replace('carats','')
val = val.replace('carat','')
val = val.replace('set with','')
val = val.replace('-cut','')
val = val.replace('mm','')
val = val.replace('width','')
val = val.replace(':','')
val = val.replace(' a ',' ')
val = val.replace(' to ',' ')
val = val.replace(' and ',' ')
val = val.replace('and ',' ')
val = val.replace(' an ',' ')
val = val.replace('following','')
val = val.replace('metrics','')
val = val.replace('k white gold','')
val = val.replace('k rose gold','')
val = val.replace('k pink gold','')
val = val.replace('k yellow gold','')
val = val.replace('white gold','')
val = val.replace('rose gold','')
val = val.replace('sizes','')
val = val.replace('all-gold ring paved ring','')
val = val.replace('beads','')
val = val.replace('eyes','')
val = val.replace('nose','')
val = val.replace('with','')
val = val.replace(' of','')
val = val.replace(' center stone ',' ')
val = val.replace(' tto ring ceramic ring','')
val = val.replace('black ','black-')
val = val.replace('rose ','rose-')
val = val.replace('blue ','blue-')
val = val.replace('yellow ','yellow-')
val = val.replace('green ','green-')
val = val.replace('orange ','orange-')
val = val.replace('brilliant ','brilliant-')
val = val.replace('tsavorite ','tsavorite-')
val = val.replace('pear-shaped ','pear-shaped-')
val = val.replace('baguette ','baguette-')
val = val.replace('princess ','princess-')
val = val.replace('troidia ','troidia-')
val = val.replace('brilliant-pavé ','brilliant-pavé-')
val = val.replace('mother-of pearl','mother-of-pearl')
val = val.replace('gray mother-of-pearl','gray-mother-of-pearl')
val = val.replace('(','')
val = val.replace(')','')
val = val.replace(' ',' ')
val = val.replace(' ',' ')
val = val.replace(' ',' ')
val = val.strip()
# if val in ('oval',):
# print(val,x)
if val in bad_set_c:
return ''
elif val in bad_dict_c:
return bad_dict_c[val]
# elif val == 'nan':
# return np.nan
# else:
# return val
return val
df_ring_c_gold['abbrev description'] = df_ring_c_gold['rock_details'].apply(lambda x: replace_func_c(str(x)))
df_ring_c_gold['rock_details'].apply(lambda x: replace_func_c(str(x))).unique()
###Output
_____no_output_____
###Markdown
Putting a value of 1 in every column that corresponds to a term in the ring description.
###Code
for idx in df_ring_c_gold['abbrev description'].index:
description = df_ring_c_gold['abbrev description'].loc[idx]
if description is np.nan:
continue
for term in description.split():
df_ring_c_gold.loc[idx,term] = 1
###Output
_____no_output_____
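###Markdown
An equivalent vectorized form of the loop above (a sketch of the same idea, assuming the `abbrev description` column built earlier) uses pandas' string dummies:
###Code
term_dummies = df_ring_c_gold['abbrev description'].fillna('').str.get_dummies(sep=' ')
print(term_dummies.shape)
# df_ring_c_gold.join(term_dummies) would attach the same kind of 0/1 term columns
###Output
_____no_output_____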
###Markdown
Jewelry Dataset
###Code
# Only Gold rings have sufficient data
df_ring_j_gold = df_ring_j[df_ring_j['material'].isin(['Gold 18ct.', 'Gold 14ct.', 'Gold 9ct.'])]
j_diamond_carats = df_ring_j_gold['rock_details'].str\
.extractall(r"diamond.? (?:triangle |brilliant |baguette |princess )?(\d+\.\d+)(?!\.|mm| mm|-)"
,flags=re.IGNORECASE)\
.astype(float).droplevel('match')
total_j_diamond_carats = j_diamond_carats.groupby(j_diamond_carats.index).agg({0: sum})
df_ring_j_gold['total diamond carats'] = total_j_diamond_carats
df_ring_j_gold['gold carats'] = df_ring_j_gold['material'].str\
.extractall(r"(\d+)(?:ct)?",
flags=re.IGNORECASE)\
.astype(float).droplevel('match')
df_ring_j_gold.head()
bad_dict_j = {
'diamond diamonds perimeter': 'diamond perimeter-diamonds',
'file krishzafori sapphires':'sapphires', 'brilliant diamonds': 'brilliant-diamonds',
'diamonds rest': 'diamonds', 'dimensions stone': 'delete', 'topaz violac': 'violac-topaz',
'diamond diamonds princess': 'diamond princess-diamonds', 'sinusite': 'delete',
'baguette brilliant diamonds': 'baguette-brilliant-diamonds',
'brilliant diamonds marquise': 'marquise-brilliant-diamonds',
'baguette brilliant sapphires': 'baguette-brilliant-sapphires',
'baguette brilliant emerald':'baguette-brilliant-emerald',
'diamond diamonds oval':'oval-diamond diamonds',
'baguette brilliant diamond diamonds': 'baguette-diamond brilliant-diamonds'
}
def replace_func_j(x):
val = x
# for num in list(map(str,list(range(10)))):
# val = val.replace(num,'')
val = re.sub('\d+\.?\d*\.?(ct|mm|t|pm)','',val)
val = re.sub('\d+.\d+','',val)
val = val.lower()
val = re.sub('\d+\.?\d*\.?(ct|mm|t|pm)','',val)
val = re.sub('(?<!vs)\d','',val)
val = val.replace('-','')
val = val.replace('centrally ','')
val = val.replace('central ','')
val = val.replace('f.w.p','fresh-water-pearl')
val = val.replace('kent','')
val = val.replace(' x','')
val = val.replace('diam.','diamond')
val = val.replace('diamonda','diamonds')
val = val.replace('&','')
val = val.replace('and','')
val = val.replace('kashmir','kashmir-')
val = val.replace('spinell','spinel')
val = val.replace('topozy','topaz')
val = val.replace('zisite','zoisite')
val = val.replace('sitrin','citrine')
val = val.replace('moronganic','morganite')
val = val.replace('mop','mother-of-pearl')
val = val.replace('mofpearl','mother-of-pearl')
val = val.replace('akoumara','aquamarine')
val = val.replace('coffee ','coffee-')
val = val.replace('black ','black-')
val = val.replace('baby ','baby-')
val = val.replace('white ','white-')
val = val.replace('whites ','white-')
val = val.replace('green ','green-')
val = val.replace('black ','black-')
val = val.replace('pearl akoya','akoya-pearl')
val = val.replace('aqua ','aqua-')
val = val.replace('blue ','blue-')
val = val.replace('red ','red-')
val = val.replace('pearl edison','edison-pearl')
val = val.replace('blazing ','blazing-')
val = val.replace('paraiba ','paraiba-')
val = val.replace('london ','london-')
val = val.replace('triangle ','triangle-')
val = val.replace('pink ','pink-')
val = val.replace('rainforest ','rainforest-')
val = val.replace('pearl south sea','south-sea-pearl')
val = val.replace('recrystallized ','recrystallized-')
val = val.replace('fresh water pearl','fresh-water-pearl')
val = val.replace('dress','')
val = val.replace('diamonds white-','white-diamonds')
val = val.replace('diamonds coffee-','coffee-diamonds')
val = val.replace('gronada','')
val = val.replace('zer','')
val = val.replace('day','')
val = val.replace('(','')
val = val.replace(')','')
val = val.replace('.mm','')
val = val.replace(' mm','')
val = val.replace('kent.dress','')
val = val.replace('.ct','')
val = val.replace(' ct','')
val = val.replace(' other ',' ')
val = val.replace(' rest',' ')
val = val.replace(' .. ',' ')
val = val.replace(' . ',' ')
val = val.replace('.','')
val = val.replace(' ',' ')
val = val.replace(' ',' ')
val = val.strip()
val = val.replace('pearl pm','pearl')
val = val.replace('pearl fresh-water-pearl','fresh-water-pearl')
val = val.replace('- ','-')
val = val.replace(' -','-')
val = val.replace('/',' ')
val = ' '.join(sorted(list(set(val.split()))))
if val in bad_dict_j:
return bad_dict_j[val]
elif val == 'nan':
return np.nan
else:
return val
df_ring_j_gold['abbrev description'] = df_ring_j_gold['rock_details'].apply(lambda x: replace_func_j(str(x)))
df_ring_j_gold = df_ring_j_gold[df_ring_j_gold['abbrev description'] != 'delete']
df_ring_j_gold['abbrev description'][~df_ring_j_gold['abbrev description'].isna()].unique()
###Output
_____no_output_____
###Markdown
Putting a value of 1 in every column that corresponds to a term in the ring description.
###Code
for idx in df_ring_j_gold['abbrev description'].index:
description = df_ring_j_gold['abbrev description'].loc[idx]
if description is np.nan:
continue
for term in description.split():
df_ring_j_gold.loc[idx,term] = 1
###Output
_____no_output_____
###Markdown
Combining datasets
###Code
print(list(df_ring_c_gold.columns))
c_cols = df_ring_c_gold.drop(['categorie', 'title', 'tags', 'description', 'rock_details', 'abbrev description',
'abbrev description', 'jewel_type'],axis=1).columns
cartier_final = df_ring_c_gold[c_cols]
print(list(df_ring_j_gold.columns))
j_cols = df_ring_j_gold.drop(['price','name','sex', 'material','rocks', 'rock_details', 'dimensions', 'chain_carat',
'chain_length','diameter','abbrev description', 'jewel_type'],axis=1).columns
jewelry_final = df_ring_j_gold[j_cols]
c_cols
j_cols
jewelry_final.rename(columns={'price_usd':'price'}, inplace=True)
jewelry_final['brand'] = jewelry_final['brand'].fillna('')
final_df = pd.concat([cartier_final,jewelry_final], ignore_index=True)
final_df.fillna(0, inplace=True)
print(final_df.isna().sum().sum())
print(list(final_df.columns))
final_df.head()
final_df = final_df[final_df['brand'].isin(['Haritidis', 'Cartier'])]
final_df[['price', 'brand', 'color', 'total diamond carats', 'gold carats', 'jewel_weight']].describe()
final_df['gold carats'].unique()
final_df.groupby('brand')['price'].mean()
final_df['brand'].value_counts()
###Output
_____no_output_____
###Markdown
Dummy Variables
###Code
final_df_w_dummies = pd.get_dummies(final_df)
final_df_w_dummies.head()
final_df_w_dummies.loc[final_df_w_dummies['color_White & Rose'] == 1, ['color_White','color_Rose']] = 1
final_df_w_dummies.loc[final_df_w_dummies['color_White & Yellow'] == 1, ['color_White','color_Yellow']] = 1
final_df_w_dummies.loc[final_df_w_dummies['color_White, Yellow & Rose'] == 1, ['color_White','color_Rose','color_Yellow']] = 1
final_df_w_dummies.loc[final_df_w_dummies['color_White & Yellow & Rose'] == 1, ['color_White','color_Rose','color_Yellow']] = 1
final_df_w_dummies.drop(['color_White & Rose', 'color_White & Yellow', 'color_White, Yellow & Rose',
'color_White & Yellow & Rose', 'brand_Cartier'],axis=1,inplace=True)
list(final_df_w_dummies.columns)
###Output
_____no_output_____
###Markdown
Model Creation
###Code
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
def scorers(target,pred):
print('MSE:', mean_squared_error(target,pred))
print('MAE:', mean_absolute_error(target,pred))
print('R^2:', r2_score(target,pred))
###Output
_____no_output_____
###Markdown
Oversampling for models
###Code
y = final_df_w_dummies['price']
X = final_df_w_dummies.drop('price',axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state = 0)
oversampling_count = X_train.groupby('brand_Haritidis')['total diamond carats'].count()[1]
underrep_idxs = X_train[X_train['brand_Haritidis'] == 0].index
idx_list = []
for i in range(oversampling_count):
idx_list.append(rand.choice(underrep_idxs))
cartier_rows_X = X_train.loc[idx_list]
haritidis_rows_X = X_train[X_train['brand_Haritidis'] == 1]
cartier_rows_y = y_train.loc[idx_list]
haritidis_rows_y = y_train[X_train['brand_Haritidis'] == 1]
oversampled_df_X = pd.concat([cartier_rows_X,haritidis_rows_X], ignore_index=True)
oversampled_df_y = pd.concat([cartier_rows_y,haritidis_rows_y], ignore_index=True)
oversampled_df_X['brand_Haritidis'].value_counts()
X_test['brand_Haritidis'].value_counts()
###Output
_____no_output_____
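###Markdown
The index loop above can also be written with pandas sampling; a sketch of the same idea (the `alt_` names are new here and kept separate so the data actually used below is unchanged):
###Code
alt_cartier_X = X_train[X_train['brand_Haritidis'] == 0].sample(n=oversampling_count, replace=True, random_state=0)
alt_cartier_y = y_train.loc[alt_cartier_X.index]
print(alt_cartier_X.shape, alt_cartier_y.shape)
###Output
_____no_output_____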
###Markdown
Previously trained models with cross validation: Same hyperparameters are used in final model.
###Code
# clf_ridge = Ridge(random_state=0)
# clf_rfr = RandomForestRegressor(random_state=0)
# from sklearn.model_selection import GridSearchCV
# alphas = np.array([1,0.1,0.01,0.001,0.0001,0])
# grid_ridge = GridSearchCV(estimator=clf_ridge, param_grid=dict(alpha=alphas),
# verbose=2, n_jobs=-1)
# params = {'bootstrap': [True, False],
# 'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, None],
# 'max_features': ['auto', 'sqrt'],
# 'min_samples_leaf': [1, 2, 4],
# 'min_samples_split': [2, 5, 10],
# 'n_estimators': [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]}
# grid_rfr = GridSearchCV(estimator=clf_rfr, param_grid=params,
# verbose=2, n_jobs=-1)
# grid_ridge.fit(oversampled_df_X, oversampled_df_y);
# print(grid_ridge.best_score_)
# print(grid_ridge.best_estimator_)
# grid_rfr.fit(oversampled_df_X, oversampled_df_y);
# print(grid_rfr.best_score_)
# print(grid_rfr.best_estimator_)
# y_pred_ridge = grid_ridge.predict(X_test)
# y_pred_rfr = grid_rfr.predict(X_test)
###Output
_____no_output_____
###Markdown
Results from previous training. Ridge: R^2 = 0.8565809888502182, best parameters: Ridge(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=0, solver='auto', tol=0.001). Random Forest: R^2 = 0.9889098573710595, best parameters: RandomForestRegressor(bootstrap=False, criterion='mse', max_depth=30, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=5, min_weight_fraction_leaf=0.0, n_estimators=200, n_jobs=None, oob_score=False, random_state=0, verbose=0, warm_start=False)
###Code
clf_ridge = Ridge(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=0,
solver='auto', tol=0.001)
clf_rfr = RandomForestRegressor(bootstrap=False, criterion='mse', max_depth=30, max_features='sqrt', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_samples_leaf=1, min_samples_split=5,
min_weight_fraction_leaf=0.0, n_estimators=200, n_jobs=None, oob_score=False, random_state=0,
verbose=0, warm_start=False)
clf_ridge.fit(oversampled_df_X, oversampled_df_y);
clf_rfr.fit(oversampled_df_X, oversampled_df_y);
y_pred_ridge = clf_ridge.predict(X_test)
y_pred_rfr = clf_rfr.predict(X_test)
###Output
_____no_output_____
###Markdown
Results
###Code
scorers(y_test, y_pred_ridge)
scorers(y_test, y_pred_rfr)
###Output
MSE: 7778246.469094671
MAE: 736.6735262542561
R^2: 0.8420380073004872
###Markdown
Exporting Model
###Code
import joblib
joblib.dump(clf_ridge, 'ridge_model.pkl', compress=9)
joblib.dump(clf_rfr, 'rfr_model.pkl', compress=9);
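# sanity-check sketch: the models just saved can be loaded back with joblib
loaded_rfr = joblib.load('rfr_model.pkl')
print(loaded_rfr.predict(X_test[:5]))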
###Output
_____no_output_____ |
ColabNotebooks/version1.2_avgPressure.ipynb | ###Markdown
**Keystroke Dynamics on Mobile Devices Varying with Time** Importing required libraries
###Code
#import required libraries
import os
import pandas as pd
import numpy as np
import datetime
import pickle
import matplotlib.pyplot as plt
from google.colab import drive
drive.mount("/content/drive", force_remount=True)
# mounting a specific directory on my google drive for data storage and retrieval
os.chdir("/content/drive/My Drive/Practicum/")
!ls
#read specific columns from CSV file
col_list = ["user_uuid", "timestamp","type","pressure"]
df_event = pd.read_csv("CSV/event.csv", usecols=col_list)
df_event.head(10)
#convert epoc time to normal time
df_event['timestamp']=pd.to_datetime(df_event['timestamp'],unit='ms')
#df_event.describe
df_event
#df_event.timestamp.dtype
#Add a new column 'event_date' by extracting date from timstamp
df_event['event_date']=pd.to_datetime(df_event['timestamp']).dt.strftime('%Y-%m-%d')
df_event.head(10)
# pickle functions
#for reading pickle file
def read_pickle(filename, path='/content/drive/My Drive/Practicum/Pickle/'):
sct = datetime.datetime.now()
print("Start Pickle Load time: {0}".format(sct))
with open(path + filename, 'rb') as file:
unpickler = pickle.Unpickler(file)
df = pickle.load(file)
ct = datetime.datetime.now()
print("End Pickle Load time: {0} Duration:{1}".format(ct, ct-sct))
return df
#to write into a pickle file
def write_pickle(df,filename, path='/content/drive/My Drive/Practicum/Pickle/'):
sct = datetime.datetime.now()
print("Start Pickle Load time: {0}".format(sct))
with open(path + filename, 'wb') as file:
pickle.dump(pd.DataFrame(df), file)
ct = datetime.datetime.now()
print("End Pickle Load time: {0} Duration:{1}".format(ct, ct-sct))
#rename column name 'type' to 'event_type'
df_event.rename(columns = {'type' :'event_type'}, inplace = True)
df_event.head(10)
#remove the rows with pressure=0
df_event=df_event.query('pressure>0')
df_event
# resetting the DatFrame index
df_event = df_event.reset_index()
df_event
#to check for NaN under a single DataFrame column:
df_event['pressure'].isnull().values.any()
#to check for NaN under a entire DataFrame columns:
df_event.isnull().values.any()
#total no of touch_down events
touch_down_cnt=df_event.event_type.value_counts().TOUCH_DOWN
touch_down_cnt
#Add event_id to each group based on touch_down events.Because touch_down means event begins
df_event['event_id']=0
touch_down_cnt=0;
for i in range(len(df_event)) :
if (df_event.loc[i,'event_type']=="TOUCH_DOWN"):
touch_down_cnt+= 1
#print(touch_down_cnt)
df_event.loc[i,'event_id']=touch_down_cnt
df_event.head(10)
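# (vectorized alternative, shown as a sketch) a cumulative sum of the TOUCH_DOWN flags
# tags every row with the id of the most recent TOUCH_DOWN in a single step:
# df_event['event_id'] = (df_event['event_type'] == 'TOUCH_DOWN').cumsum()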
#to find unique event type
df_event.event_type.unique()
#to find unique event id
df_event.event_id.unique()
df_event.event_id.dtype
#write the dataframe 'df_event' as pickle file
write_pickle(df_event,'df_event.p')
#retrieve the pickle file 'df_event.p'
df_event=read_pickle('df_event.p')
df_event
###Output
Start Pickle Load time: 2021-06-05 08:52:01.252284
End Pickle Load time: 2021-06-05 08:52:02.418426 Duration:0:00:01.166142
###Markdown
**create new df**
###Code
#Aggregate different functions over the columns and rename the index of the resulting DataFrame.
df_event_new=df_event.groupby(['event_id','user_uuid']).agg(start_timestamp=('timestamp',min),end_timestamp=('timestamp',max),avg_pressure=('pressure',np.mean),event_date=('event_date',min)).reset_index()
df_event_new
#here we add an extra column 'user_name' by assigning a username to each user_uuid
#create series of unique user ids
user_ids=df_event_new['user_uuid'].drop_duplicates().sort_values().reset_index(drop=True)
uname=['user1','user2','user3','user4','user5','user6','user7','user8' ]
se=pd.DataFrame(uname[0:len(user_ids)])#create series with user name
se.columns=['user_name']#set index as user_name
#create df with user_uuid and user_name
df_uname=pd.concat([user_ids,se],axis=1)
#join dataframes such as df_uname and df_event_new
df_event_new = df_event_new.merge(df_uname,on='user_uuid',how='inner').sort_values('start_timestamp').reset_index(drop=True)
df_event_new
df_uname
df_uname.to_csv('/content/drive/My Drive/Practicum/CSV/user_list.csv')
###Output
_____no_output_____
###Markdown
Write newly created df to pickle file
###Code
#write the new dataframe in pickle
write_pickle(df_event_new,'df_event_new.p')
#retrieve the pickle file 'df_event_new.p'
df_event_new=read_pickle('df_event_new.p')
df_event_new
df_event_new.to_csv('/content/drive/My Drive/Practicum/CSV/df_event_new.csv')
df_event_day = df_event_new.set_index(['event_date'])
#create a dataframe based on date
df_event_day=df_event_new.query('event_date=="2021-02-15"')
#df_event_day=df_event_new.query('(event_date=="2021-05-24")&(user_uuid=="46952d51-25ad-405a-ac11-a22a624ae6b5")')
#to find unique user_name
df_event_day.user_name.unique()
#to find unique user_id
df_event_day.user_uuid.unique()
###Output
_____no_output_____
###Markdown
**Visualisation**
###Code
import pandas as pd
import matplotlib.pyplot as plt
# template snippet: assumes a DataFrame `df` with a datetime index (not defined in this notebook)
ax = df.plot(figsize=(9, 6), ylabel='Pressure')
# add horizontal line
ax.hlines(y=3450, xmin='2020-09-10', xmax='2020-09-17', color='purple', label='test')
ax.legend()
plt.show()
# designate variables
from datetime import datetime, timedelta  # needed for the custom x-axis ticks below
x1 = df_event_day['start_timestamp']
x2 = df_event_day['end_timestamp']
#y = df_event_day.index.astype(np.int)
y=df_event_day['avg_pressure'].drop_duplicates().sort_values().reset_index(drop=True)
names = df_event_day['user_name'].values
labs, tickloc, col = [], [], []
# create color iterator for multi-color lines in gantt chart
color=iter(plt.cm.Dark2(np.linspace(0,1,len(y))))
plt.figure(figsize=[8,10])
fig, ax = plt.subplots()
# generate a line and line properties for each user
for i in range(len(y)):
c=next(color)
plt.hlines(i+1, x1[i], x2[i], label=y[i], color=c, linewidth=2)
labs.append(names[i].title()+" ("+str(y[i])+")")
tickloc.append(i+1)
col.append(c)
plt.ylim(0,len(y)+1)
plt.yticks(tickloc, labs)
# create custom x labels
plt.xticks(np.arange(datetime(np.min(x1).year,1,1),np.max(x2)+timedelta(days=365.25),timedelta(days=365.25*5)),rotation=45)
plt.xlim(datetime(np.min(x1).year,1,1),np.max(x2)+timedelta(days=365.25))
plt.xlabel('Date')
plt.ylabel('Pressure')
plt.grid()
plt.title('Pressure Varaition during the day')
# color y labels to match lines
gytl = plt.gca().get_yticklabels()
for i in range(len(gytl)):
gytl[i].set_color(col[i])
plt.tight_layout()
plt.savefig('gantt.pdf');
import plotly.express as px
import plotly.figure_factory as ff
import plotly.graph_objs as go
import chart_studio
import chart_studio.plotly as py
import chart_studio.tools as tls
colors = { 'user1' : 'rgb(102, 153, 0)'
, 'user2' : 'rgb(0, 204, 255)'
, 'user3' : 'rgb(0, 51, 153)'
, 'user4' : 'rgb(102, 0, 204)'
, 'user5' : 'rgb(204, 0, 255)'
, 'user6' : 'rgb(255, 0, 204)'
, 'user7' : 'rgb(255, 0, 102)'
, 'user8' : 'rgb(51, 255, 255)'}
fig = px.timeline(df_event_day
, x_start="start_timestamp"
, x_end="end_timestamp"
, y="Resource"
, hover_name="avg_pressure"
, color_discrete_sequence=px.colors.qualitative.Prism
, category_orders={'Task':['Results of Task','Establish and agree upon Agile standards/framework','Communicate Agile structure to business groups','Select proper tooling','Results of Task','Refine analytic data use operating model','Define analytic persona\x92s and use cases','Define operational data use operating model','Define operational persona\x92s and use cases ','Define standard ','Results of Task','ID appropriate use case requirements for applying methodology (i.e. what are expectations from the business, what are deliverable expectations etc.)',
'Create template for data use stories (i.e. data sets, data presentation, data analysis, Data validation, advanced analytic stories)','Results of Task','Identify product advisory council attendees','Create governance model/methodology','Results of Task','ID all candidate roles and process to keep bench as opposed to OJT','ID incentive and disincentive components to manage contract','ID candidate partners for various work components','Create methodology to manage bid process and selection \x96 partner with procurement','Results of task','Define scope of project','Results of task','ID tools for self-service ties to tools/tech','Determine extent of centrally managed policy engines','What multi-tenancy restrictions should be established','Access control \x96 Tie to IT IAM strategy','Evaluate security, PHI, access restrictions','Results of task','Publish product and service catalog outlining initial build/implementation of the product or service as well as the ongoing costs','Results of task','ID standard tools for self-service','Create governance and methodology use cases','Create training protocol for tool consumption',
'Database/System design for persistent data to be stored on remote, network attached storage','Identify systems requirements','Technology/vendor selection','Migration effort to be considered; layout strategy','Downstream efforts - existing processes to migrate; how to enable a seamless transition; which approach to use - coexist or turn off/turn on','Enable S3 as primary storage, move away from other storage solutions and determine config','Establish rules/processes to ensure S3 is the primary storage solution','Recompilation and redeployment of jobs is required','Stand up non-prod two node cluster job/script execution','Decommission version 5 by May 21 2021','Results of task','Identify self-service needs/requirements for each group ','Identify the right tool for the right purpose','Learning and creating guidelines on how to use new tools','Estimate adoption rate','Gauge time to ramp-up','Evaluate if migration from existing tools to new tools is required',
'Results of task','Identify compatible tooling','Determine requirements specific for each business group','Identify VDI-based security issues for additional R/Python libraries','Document and communicate procedure to add new ML features/libraries to remote environments','Estimate value of paid tools vs free libraries','Determine if automated ML tools are necessary (Datarobot vs Azure ML)','Gauge level of adoption','Document FAQs and consider questions specific to each business group','Results of task','Determine scope \x96 DR, HA, Extent of insight and notification needs','Establish continuous monitoring needs aligned with DevOps','Resiliency testing approach','Monitoring tools','Incident response communication protocol (ties to governance and operating model)','Identify compatible tooling/technology','Consider location/region-based connection practices',
'Consider multiple physical data center connection locations for resiliency to connectivity failures due to fiber cut, device failure or complete location failure','Determine if dynamically routed and active connections vs. statically routed for load balancing and failover','Results of task','Identify compatible tooling','Determine instance requirements specific for each business group','Launch instances and document keys/passwords','Communicate instance configurations and setup notes to each group','Estimate level of adoption','Predict and document FAQs','Document how to use Microsoft Remote Desktop Apps for Mac users','Install supporting software (R, Python, Firefox etc.)','Identify an owner/manager of compute elasticity/infrastructure operations',
'Identify rules of engagement and governance for when the process is live','Results of task','Prioritize data pipelines (tie to Data COE)','Pilot analytics in data pipeline (tie to NextBlue AA COE)','Define event streaming hub approach (i.e. people process tech)','Determine appropriate/compatible event streaming tools (Apache Kafka, Hazelcast Jet etc.)','Determine streaming rate requirements specific to each business group','Map out the data flow/data architecture and associated patterns','Identify who will do the publishing framework','Deploy and document','There are dependencies on infrastructure prior to putting this in place','Results of task','Define tools/tech and associated dependencies','Align with cloud COE','Establish development guidelines','Pilot?','Tie to governance and operating model \x96 when to use how to use etc.','Determine which tasks will be microservices','Determine tools to run/containerize microservices',
'Select a microservice deployment strategy (Service Instance per Host, Multiple Service Instances per Host, Service Instance per Virtual Machine etc.)','Determine if you need a cluster manager (Kubernetes, Marathon etc.)','Identify enablement approach','Results of task','Define tools/tech and associated dependencies','Align with cloud COE','Establish development guidelines','Pilot?','Tie to governance and operating model \x96 when to use how to use etc.','Determine whether to containerize existing legacy application without refactoring into microservices','Identify barriers between security and devops','Determine drawbacks: such as container infrastructure is not as mature or secure as the infrastructure for VMs (containers share the kernel of the host OS with one another)','Results of task','Identify a back-end physical infrastructure to enable digital tools','Evaluate a hybrid cloud strategy (with an eye for the worker UX)',
'Evaluate existing support services for the workplace','Implement technology for a real-time, 360 view of the UX','Results of task','Orchestration and management of services, applications and data flow','Automation of business processes','Establishment of federated security and Single Sign-On','System logging and monitoring','Data analysis, reporting and predictions','Monitoring of mobile and IoT devices','Implementation of Big Data solutions','Centralized (mobile ready) web solutions for clients and employees','Scalability and high-load support','SaaS and multi-tenancy support','Centralized control of infrastructure management etc.','Results of task','Determine compatible tools','Identify downfalls of low code platforms','Estimate adoption rate',
'Results of task','Determine the FHIR server architecture','Determine compatibility of servers and FHIR versions to ensure interoperability','Understand the APIs as well as the level of completion per API','Identify where to put/install the FHIR server','Estimate adoption rate','Results of task','ID priority governance model evaluation needs \x96 self-service, machine learning model productionalization, operational area use and interaction']}
, opacity=.7
, range_x=None
, range_y=None
, template='plotly_white'
, height=800
, width=1500
, color='Dimension'
, title ="<b>IE 3.0 Gantt Chart 2021</b>"
)
fig.update_layout(
bargap=0.5
,bargroupgap=0.1
,xaxis_range=[df.Start.min(), df.Finish.max()]
,xaxis = dict(
showgrid=True
,rangeslider_visible=True
,side ="top"
,tickmode = 'array'
,dtick="M1"
,tickformat="Q%q %Y \n"
,ticklabelmode="period"
,ticks="outside"
,tickson="boundaries"
,tickwidth=.1
,layer='below traces'
,ticklen=20
,tickfont=dict(
family='Old Standard TT, serif',size=24,color='gray')
,rangeselector=dict(
buttons=list([
dict(count=1, label="1m", step="month", stepmode="backward"),
dict(count=6, label="6m", step="month", stepmode="backward"),
dict(count=1, label="YTD", step="year", stepmode="todate"),
dict(count=1, label="1y", step="year", stepmode="backward"),
dict(step="all")
])
,x=.37
,y=-.05
,font=dict(
family="Arial",
size=14,
color="darkgray"
)))
,yaxis = dict(
title= ""
,autorange="reversed"
,automargin=True
# ,anchor="free"
,ticklen=10
,showgrid=True
,showticklabels=True
,tickfont=dict(
family='Old Standard TT, serif', size=16, color='gray'))
,legend=dict(
orientation="h"
,yanchor="bottom"
,y=1.1
,title=""
,xanchor="right"
,x=1
,font=dict(
family="Arial"
,size=14
,color="darkgray"))
)
fig.update_traces( #marker_color='rgb(158,202,225)'
marker_line_color='rgb(8,48,107)'
, marker_line_width=1.5, opacity=0.95)
fig.update_layout(
title="<b>IE 3.0 Gantt Chart 2021</b>",
xaxis_title="",
# margin_l=400,
yaxis_title="Initiatives",
# legend_title="Dimension: ",
font=dict(
family="Arial",
size=24,
color="darkgray"
)
)
# fig.show()
fig.write_html("C:/Users/maxwell.bade/Downloads/ie_3_gantt.html")
go.FigureWidget(fig)
###Output
_____no_output_____ |
Exo2.ipynb | ###Markdown
Exercise on searching for the word "philosophie" across Wikipedia pages **Imports**
###Code
import requests
from bs4 import BeautifulSoup
import urllib
###Output
_____no_output_____
###Markdown
Implementation
###Code
from urllib.parse import urljoin

# Maximal distance allowed, e.g. 100 pages
max_distance = 100

# Initialization
url = 'https://fr.wikipedia.org/wiki/Disposition_des_touches_d%27un_clavier_de_saisie'
distance = 0
status = 200

while (distance < max_distance) and (status != 404):
    response = requests.get(url)
    status = response.status_code
    soup = BeautifulSoup(response.text, 'html.parser')
    # i) look for the first hyperlink in the body text
    # search only among the links contained in the paragraphs (<p> tags),
    # otherwise hyperlinks attached to pictures would be picked up as well
    L = []  # list of all hyperlinks found in the paragraphs
    for paragraphe in soup.find_all('p'):
        for link in paragraphe.find_all('a'):
            href = link.get('href')
            if href:
                L.append(href)
    if not L:
        break
    # address of the first link found in the body text, made absolute
    url = urljoin("https://fr.wikipedia.org/", L[0])
    if url != 'https://fr.wikipedia.org/wiki/Portail:Philosophie':
        distance = distance + 1  # follow the found link, the target has not been reached yet
    else:
        break

# Display
if url == 'https://fr.wikipedia.org/wiki/Portail:Philosophie':
    print("The web page", url, "has been found after a distance of", distance, "web pages")
elif distance >= max_distance:
    print("Maximum allowed distance has been reached")
elif status != 200:
    print("Error", status)
else:
    print("Not found")
###Output
_____no_output_____ |
Notebooks/.ipynb_checkpoints/ToastMeDF-checkpoint.ipynb | ###Markdown
SageMaker Notebook
###Code
# SELECT posts.title, comments.body ,comments.score
# FROM `fh-bigquery.reddit_comments.2019_*` AS comments
# JOIN `fh-bigquery.reddit_posts.2019_*` AS posts
# ON posts.id = SUBSTR(comments.link_id, 4)
# WHERE posts.subreddit = 'toastme' AND comments.score >= 1 AND posts.author != comments.author
# AND comments.body != '[removed]' AND comments.body != '[deleted]' AND parent_id = link_id AND NOT REGEXP_CONTAINS(comments.body, r"\[[a-zA-Z0-9-]\]")
!pip3 install --upgrade six>=1.13.0
!pip install google-cloud-bigquery
import os
import json
import boto3
from google.cloud.bigquery import magics
from google.oauth2 import service_account
%load_ext google.cloud.bigquery
import os
import boto3
BUCKET = 'redcreds'
KEY = 'My First Project-01f0de489aed.json'
s3= boto3.resource('s3')
s3.Bucket(BUCKET).download_file(KEY,'My First Project-01f0de489aed.json')
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'My First Project-01f0de489aed.json'
%load_ext google.cloud.bigquery
%%bigquery df
SELECT *
FROM `toastme123.redditclean`
df.head()
# with open(writePath, 'a') as f:
# f.write(df.to_string(header = False, index = False))
df = df.drop(['score','created_utc'], axis=1)
df['body'] = df['body'].apply(lambda s: s + 'XXX')  # append an 'XXX' end-of-text marker to each comment body
###Output
_____no_output_____ |
Bayesian_Belief_Network/2175052_Safir_ML_Lab_Ass5.ipynb | ###Markdown
ML Assignment 5 Name : Safir Motiwala (2175052) Question : Write a python program to construct a Bayesian Belief Network considering medical data. Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the pgmpy (Probabilistic Graphical Models) library
###Code
from pgmpy.inference import VariableElimination
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator
columns = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang', 'oldpeak', 'slope', 'ca',
'thal', 'heartdisease']
heartDiseaseDataset = pd.read_csv('heart.csv', names = columns)
heartDiseaseDataset = heartDiseaseDataset.drop([0])
heartDiseaseDataset
heartDiseaseDataset = heartDiseaseDataset.replace('?', np.nan)
heartDiseaseDataset
###Output
_____no_output_____
###Markdown
Building the Bayesian Model
###Code
bayesian_model = BayesianModel([('age', 'trestbps'), ('age', 'fbs'), ('sex', 'trestbps'), ('exang', 'trestbps'), ('trestbps','heartdisease'), ('fbs','heartdisease'), ('heartdisease','restecg'), ('heartdisease','thalach'), ('heartdisease','chol')])
###Output
_____no_output_____
###Markdown
Fitting the dataset to Bayesian Model
###Code
bayesian_model.fit(heartDiseaseDataset, estimator=MaximumLikelihoodEstimator)
###Output
_____no_output_____
###Markdown
Variable Elimination to eliminate variables one by one which are irrelevant for the query
###Code
from pgmpy.inference import VariableElimination
HeartDisease_infer = VariableElimination(bayesian_model)
q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'age': 37, 'sex' :0})
print(q)
###Output
+-----------------+---------------------+
| heartdisease | phi(heartdisease) |
+=================+=====================+
| heartdisease(0) | 0.4404 |
+-----------------+---------------------+
| heartdisease(1) | 0.5596 |
+-----------------+---------------------+
|
groking-leetcode/2-two-pointers.ipynb | ###Markdown
Colliding pointersThe two-pointer technique is an important problem-solving pattern on Leetcode, and the [sliding window](https://www.jianshu.com/p/485ce4e0f8a5) is itself a kind of two-pointer technique: the window slides because the lo and hi pointers both move to the right. If instead the two pointers start at the low end and the high end and move toward each other, we call them colliding pointers. Colliding pointers have many uses: they are very handy for squeezing from both sides, for monotonic (sorted) arrays, and for partitioning. Below we introduce colliding pointers through a few simple problems. Squeezing from both sidesColliding, as the name suggests, means the pointers move toward each other until they meet. Leetcode problem [125 Valid Palindrome](https://leetcode-cn.com/problems/valid-palindrome/) asks us to validate a palindrome.> Given a string, determine if it is a palindrome, considering only alphanumeric characters and ignoring cases.> > Note: for the purpose of this problem, we define the empty string as a valid palindrome.> > Example 1:> > Input: "A man, a plan, a canal: Panama"> > Output: true> > Example 2:> > Input: "race a car"> > Output: false A palindrome is a string whose characters read the same scanned from left to right and from right to left. Palindromes are also widely used in poetry as a display of skill; the palindromes in the famous [Xuanji Tu](https://zh.wikipedia.org/wiki/%E7%92%87%E7%8E%91%E5%9B%BE) are quite wonderful. Back to the problem: since a palindrome must read the same from left to right and from right to left, we can set up two pointers, one scanning from the left and the other from the right, and compare the characters they point to as they move, for example as in the figure: one scan from left to right and one from right to left
###Code
class Solution:
def isPalindrome(self, s: str) -> bool:
l = 0
r = len(s) - 1
while l < len(s):
if s[l] != s[r]:
return False
l += 1
r -= 1
return True
###Output
_____no_output_____
###Markdown
In fact a palindrome is symmetric by itself, so the comparison does not need to go all the way from left to right: we can move from the left and from the right at the same time and stop as soon as the two pointers collide.
###Code
class Solution:
def isPalindrome(self, s: str) -> bool:
l = 0
r = len(s) - 1
while l < r:
if s[l] != s[r]:
return False
l += 1
r -= 1
return True
###Output
_____no_output_____
###Markdown
The code barely changes; only the loop condition is different. Looping while l < r means that when the loop stops at most one character is left in the middle and the two sides are completely symmetric, which matches the definition of a palindrome. Looping while l <= r would let the two pointers overlap; a character compared with itself also counts as a palindrome, so that would work too, it is just unnecessary here. With a sliding window we usually define the window as the half-open interval [lo, hi) so that its size is simply hi - lo, whereas with colliding pointers the elements at both pointers are used, i.e. the interval is closed, and we let [l, r] shrink toward the middle. For the squeeze-from-both-sides style of colliding pointers, the core is the logical check on the elements at the two pointers, which decides how the pointers move. In this problem the two pointers move synchronously; the next problem is similar. [344 Reverse String](https://leetcode-cn.com/problems/reverse-string/)> Write a function that reverses a string. The input string is given as an array of characters char[].> > Do not allocate extra space for another array; you must modify the input array in place with O(1) extra memory.> > You may assume all the characters are printable ASCII characters.> > Example 1:> > Input: ["h","e","l","l","o"]> > Output: ["o","l","l","e","h"]> > Example 2:> > Input: ["H","a","n","n","a","h"]> > Output: ["h","a","n","n","a","H"] The idea is similar: initialize two pointers l and r pointing at the first and the last element of the array, then move them toward the middle synchronously, swapping the elements at the two pointers along the way.
###Code
from typing import *
class Solution:
def reverseString(self, s: List[str]) -> None:
l = 0
r = len(s) - 1
while l < r:
s[l], s[r] = s[r], s[l]
l += 1
r -= 1
###Output
_____no_output_____
###Markdown
The two problems above give a first feel for the colliding-pointer pattern: the left and right pointers move toward each other, and in both problems they move synchronously. In other problems the pointers do not necessarily move in lockstep; some condition has to be checked first, and depending on it we move the left or the right pointer until the pointers finally collide and the answer is returned. For example [11. Container With Most Water](https://leetcode-cn.com/problems/container-with-most-water/)> Given n non-negative integers a1, a2, ..., an, where each represents a point (i, ai) in the plane, n vertical lines are drawn such that the two endpoints of line i are (i, ai) and (i, 0). Find two lines that together with the x-axis form a container that holds the most water.> > Note: you may not slant the container, and n is at least 2.> > Example:> > Input: [1,8,6,2,5,4,8,3,7]>> Output: 49 From the statement, the amount of water depends on a pair of numbers in the array, and the smaller number (the shorter height) determines the final capacity, just like the barrel theory. Any two numbers form a container; a single number cannot hold water. As with a sliding window, we could enumerate every window formed by two numbers, compute and store its value, and return the maximum once all windows have been computed. The difference from a sliding window is that the window does not slide from left to right; instead colliding pointers shrink it from both sides. The initial window is the full width of the array, and then the left and right pointers move toward each other. When do we move which pointer? The water held by the window [l, r] is min(height[l], height[r]) * (r - l). Since r - l can only get smaller, the only way to make this expression larger is to find a taller height. So to hold more water we have to try to raise the shorter side, i.e. move the pointer with the smaller height. If we moved the taller pointer instead, the distance r - l would shrink while min(height[l], height[r]) could not grow, only shrink. The final code is as follows:
###Code
class Solution:
def maxArea(self, height: List[int]) -> int:
l = 0
r = len(height) - 1
max_area = float('-inf')
while l < r:
min_height = min(height[l], height[r])
max_area = max(min_height * (r - l), max_area)
if height[l] < height[r]:
l += 1
else:
r -= 1
return max_area if max_area != float('-inf') else 0
###Output
_____no_output_____
###Markdown
The examples above show the general template of colliding pointers: set up [l, r], then shrink the window while the condition l < r holds, until the pointers collide and the loop exits. The two pointers move toward the middle and together move at most the length of the array, so the time complexity is O(n). Similar to the problem above there is also the trapping-rain-water problem. [42. Trapping Rain Water](https://leetcode-cn.com/problems/trapping-rain-water/)> Given n non-negative integers representing an elevation map where the width of each bar is 1, compute how much water it can trap after raining.> >The elevation map above is represented by the array [0,1,0,2,1,0,1,3,2,1,2,1]; in this case 6 units of rain water (the blue section) are trapped. Thanks to Marcos for contributing this figure.> > Example:> > Input: [0,1,0,2,1,0,1,3,2,1,2,1]> > Output: 6 The requirement differs from the previous problem: here the height of each element itself matters for the amount of water, whereas the previous problem was about the space between two elements and the elements themselves did not matter. The water the current element can hold is determined by the shorter of the tallest bar to its left and the tallest bar to its right, minus its own height, as in the figure. So suppose a pointer moves from left to right: the water it traps at the current position comes from the tallest bar on its left and the tallest bar on its right. How do we find those two maxima? A simple way is colliding pointers: l and r move toward the middle, and along the way we track l_max, the maximum over [0, l], and r_max, the maximum over [r, len]; the current pointer can take the min(l_max, r_max) part. As in the figure, while l and r move, the water l can hold is height[l_max] - height[l], which is 0 at first; after l moves right it is again height[l_max] - height[l], now one unit, and so on. As long as l_max < r_max we move the l pointer, until l meets r. When r_max is the smaller one the process is just the symmetric mirror image, so we will not repeat it. See the code for the details:
###Code
class Solution:
def trap(self, height: List[int]) -> int:
if len(height) <= 1:
return 0
l = 0
r = len(height) - 1
l_max = height[l]
r_max = height[r]
ret = 0
while l <= r:
            l_max = max(l_max, height[l]) # tallest bar seen so far on the left
            r_max = max(r_max, height[r]) # tallest bar seen so far on the right
            if l_max < r_max:
                ret += l_max - height[l] # water trapped at l; the left pointer moves next
l += 1
else:
ret += r_max - height[r]
r -= 1
return ret
###Output
_____no_output_____
###Markdown
The problem above requires some understanding of colliding pointers, but the principle never changes: find the condition under which the window shrinks, then shrink it step by step by moving the left and right pointers toward the middle. The time complexity is O(n), i.e. linear time. twoSumAnother scenario for colliding pointers is colliding over an already sorted array. Leetcode's first problem, [1. Two Sum](https://leetcode-cn.com/problems/two-sum/), is a real classic and is solved quickly with a hash map. Problem [167. Two Sum II - Input array is sorted](https://leetcode-cn.com/problems/two-sum-ii-input-array-is-sorted/) instead guarantees that the input array is sorted.> Given an array of integers that is already sorted in ascending order, find two numbers such that they add up to a specific target number.> > The function should return the two indices index1 and index2, where index1 must be less than index2.> > Note:> > The returned indices (index1 and index2) are not zero-based.> > You may assume that each input has exactly one solution and that you may not use the same element twice.> > Example:> > Input: numbers = [2, 7, 11, 15], target = 9> > Output: [1,2]> > Explanation: the sum of 2 and 7 equals the target 9, so index1 = 1, index2 = 2. A sorted array makes things easy: if the sum of the two numbers is greater than target we need to decrease the sum, which naturally means moving the r pointer to the left; if it is smaller than target we need to increase the sum, which means moving the l pointer to the right. A generic sketch of this shrinking pattern is shown right below, followed by the solution code:
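This is a minimal, generic sketch of the colliding-pointer template described in this notebook; it is not tied to any specific Leetcode problem, and the `move_left` rule passed in is only an illustrative placeholder for the problem-specific decision:

```python
def collide(nums, move_left):
    """Generic shape of the colliding-pointer loop: shrink the closed interval [l, r]."""
    l, r = 0, len(nums) - 1
    while l < r:
        if move_left(nums, l, r):  # problem-specific decision: which side to shrink
            l += 1
        else:
            r -= 1
    return l, r  # the pointers have collided


# Example usage with a made-up rule: always move the side holding the smaller value.
print(collide([1, 8, 6, 2, 5], lambda a, l, r: a[l] <= a[r]))
```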
###Code
class Solution:
def twoSum(self, numbers: List[int], target: int) -> List[int]:
if len(numbers) < 2:
return [-1, -1]
l = 0
r = len(numbers) - 1
while l < r:
if numbers[l] + numbers[r] < target:
l += 1
elif target < numbers[l] + numbers[r]:
r -= 1
else:
return [l+1, r+1]
return [-1, -1]
###Output
_____no_output_____
###Markdown
The natural follow-up to two sum is three sum. The key point of three sum is to sort the array first and then iterate over it; during the iteration we treat the part from the current element i to the end as the subarray [i, len] and reduce the problem to two sum, i.e. we look for pairs inside that subarray. [15. 3Sum](https://leetcode-cn.com/problems/3sum/)> Given an array nums of n integers, are there elements a, b, c in nums such that a + b + c = 0? Find all the triplets that satisfy the condition, without duplicates.> > Note: the solution set must not contain duplicate triplets.> > For example, given the array nums = [-1, 0, 1, 2, -1, -4],> > the set of triplets that meet the requirement is:> > [[-1, 0, 1], [-1, -1, 2]] Because duplicate triplets are not allowed, two consecutive equal numbers would lead to the same subarray, so we skip duplicates first. Also, once a valid pair has been found, l and r cannot stop yet, because there may be several valid pairs inside [l, r], and while searching for the remaining pairs we again have to take care to skip duplicates.
###Code
class Solution:
def threeSum(self, nums: List[int]) -> List[List[int]]:
nums.sort()
ret = []
for i in range(len(nums)-2):
if i > 0 and nums[i-1] == nums[i]:
continue
l = i + 1
r = len(nums) - 1
while l < r:
if nums[i] + nums[l] + nums[r] < 0:
l += 1
elif 0 < nums[i] + nums[l] + nums[r]:
r -= 1
else:
ret.append([nums[i], nums[l], nums[r]])
l += 1
r -= 1
while l < r and nums[l-1] == nums[l]:
l += 1
while l < r and nums[r] == nums[r+1]:
r -= 1
continue
return ret
###Output
_____no_output_____
###Markdown
A variant of 3Sum is [16. 3Sum Closest](https://leetcode-cn.com/problems/3sum-closest/). At first glance it looks more complex than 3Sum, but it is actually simpler. "Closest" means that the absolute value of the difference between the sum and target is the distance: initialize a distance, update it whenever a shorter one appears, and shrink the colliding pointers at the same time. The problem is as follows> Given an array nums of n integers and an integer target, find three integers in nums such that the sum is closest to target. Return the sum of the three integers. You may assume that each input has exactly one solution.> > For example, given the array nums = [-1,2,1,-4] and target = 1,> > the sum of the three numbers closest to target is 2 (-1 + 2 + 1 = 2). The code is as follows
###Code
class Solution:
def threeSumClosest(self, nums: List[int], target: int) -> int:
if len(nums) < 3:
return None
nums.sort()
min_distance = float('inf')
ret = None
for i in range(len(nums) - 2):
l = i + 1
r = len(nums) - 1
while l < r:
three_sum = nums[i] + nums[l] + nums[r]
distance = three_sum - target
                if abs(distance) < min_distance: # a shorter distance means a sum closer to target, so update the answer
min_distance = abs(distance)
ret = three_sum
                if 0 < distance: # distance > 0 means the sum is too large and can be decreased, i.e. move the r pointer left
r -= 1
                elif distance < 0 : # distance < 0 means the sum is too small and can be increased, i.e. move the l pointer right
l += 1
else:
return three_sum
return ret
###Output
_____no_output_____
###Markdown
The pointer-moving trick is simple: the distance between the current sum and target can be positive or negative. If the value (sum minus target) is greater than 0 we can decrease the sum, i.e. move the r pointer to the left; otherwise the sum can be increased, i.e. move the l pointer to the right. If it is exactly 0, since the problem guarantees a unique solution we can return immediately. Problems of this kind, two sum, three sum and even [4Sum](https://leetcode-cn.com/problems/4sum/), are all ways of applying colliding pointers to a monotonic array: thanks to the sorting we always know whether the sum (or difference) at l and r should grow or shrink. PartitionColliding pointers can be used on sorted arrays, and they are also an important technique for sorting itself. The idea of quicksort is to pick a partition element and separate the elements smaller than it from those larger than it, and colliding pointers make it easy to implement something similar. The following Leetcode problem applies colliding pointers to a partition problem. [75. Sort Colors](https://leetcode-cn.com/problems/sort-colors/)> Given an array with n objects colored red, white or blue, sort them in place so that objects of the same color are adjacent, with the colors in the order red, white and blue.> > Here we use the integers 0, 1 and 2 to represent red, white and blue respectively.> > Example:> > Input: [2,0,2,1,1,0]> > Output: [0,0,1,1,2,2] This problem is also known as the [Dutch national flag problem](https://en.wikipedia.org/wiki/Dutch_national_flag_problem): the three numbers stand for the three colors of the Dutch flag 🇳🇱. It is called Dutch because it was first posed by the famous Dutch computer scientist Edsger Dijkstra. As shown in the figure above, initialize the l and r pointers: everything up to l is 0, everything from r onward is 2, the part in between is 1, plus the part where i currently sits. Examine the element at i. If it is 1, that simply extends the 1 section, so i++ is enough. If it is 0, swap it with the element at l + 1; since l + 1 holds a 1, the 1 section is still valid after the swap, so i++ is enough (and of course l also moves one step to the right). If it is 2, swap it with the element at r - 1; but after the swap we do not know what the element now at i is, so i must not be incremented, and we repeat the logic above for the current i. The final code is as follows
###Code
class Solution:
def sortColors(self, nums: List[int]) -> None:
l = -1
r = len(nums)
i = 0
while i < r:
if nums[i] == 0:
                nums[l+1], nums[i] = nums[i], nums[l+1] # scanning left to right: the element swapped in from l+1 has already been scanned, so we can do i += 1
l += 1
i += 1
elif nums[i] == 1:
i += 1
else:
                nums[r-1], nums[i] = nums[i], nums[r-1] # we do not yet know what the swapped-in element is, so the current i must be processed again
r -= 1
###Output
_____no_output_____
###Markdown
From the figure above, l is initialized to -1 and r to the length of the array, i.e. the 0 section and the 2 section both start with length 0, and the pointers are then extended step by step. The pointers that finally collide are i and r: since r marks the 2 section, we keep processing as long as i is less than r. This also makes it clear why i does not need to be decremented when r is decremented. Sliding windowColliding pointers and sliding windows are both two-pointer techniques. Recall that a sliding window usually passes through several candidate solutions and the question asks for the optimal one. If we need to enumerate all the solutions instead, a sliding window needs special care, for example to avoid counting duplicate solutions. Consider the following problem: [713. Subarray Product Less Than K](https://leetcode-cn.com/problems/subarray-product-less-than-k/)> Given an array of positive integers nums,> > count the number of contiguous subarrays whose product is less than k.> > Example 1:> > Input: nums = [10,5,2,6], k = 100> > Output: 8> > Explanation: the 8 subarrays with product less than 100 are: [10], [5], [2], [6], [10,5], [5,2], [2,6], [5,2,6].> > Note that [10,5,2] is not a subarray with product less than 100.> > Constraints:> > 0 < nums.length <= 50000> > 0 < nums[i] < 1000> > 0 <= k < 10^6 From the statement this looks very much like a sliding-window problem. A sliding window usually asks for the smallest contiguous subarray that satisfies a requirement; if we apply a sliding window directly here we miss some solutions, and if we run a sliding window once per element, duplicate solutions appear (leaving the complexity aside). For this problem we combine the sliding window with a sensible use of the two pointers: the window fixes a candidate range, and since every subarray counted at this step must contain the rightmost element of the window, counting the subarrays of the window that include that rightmost element gives exactly the solutions contributed by this window. The code is as follows:
###Code
class Solution:
def numSubarrayProductLessThanK(self, nums: List[int], k: int) -> int:
ret = 0
lo = hi = 0
product = 1
while hi < len(nums):
product *= nums[hi]
hi += 1
while product >= k and lo < hi:
product /= nums[lo]
lo += 1
ret += hi - lo
return ret
###Output
_____no_output_____ |
src/CatBoost/.ipynb_checkpoints/CatBoost-checkpoint.ipynb | ###Markdown
CAT BOOST
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style
import matplotlib.ticker as ticker
import seaborn as sns
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import ParameterGrid
from sklearn.inspection import permutation_importance
import multiprocessing
labels = pd.read_csv('../../csv/train_labels.csv')
labels.head()
values = pd.read_csv('../../csv/train_values.csv')
values.T
values["building_id"].count() == values["building_id"].drop_duplicates().count()
to_be_categorized = ["land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for row in to_be_categorized:
values[row] = values[row].astype("category")
datatypes = dict(values.dtypes)
for row in values.columns:
if datatypes[row] != "int64" and datatypes[row] != "int32" and \
datatypes[row] != "int16" and datatypes[row] != "int8":
continue
if values[row].nlargest(1).item() > 32767 and values[row].nlargest(1).item() < 2**31:
values[row] = values[row].astype(np.int32)
elif values[row].nlargest(1).item() > 127:
values[row] = values[row].astype(np.int16)
else:
values[row] = values[row].astype(np.int8)
labels["building_id"] = labels["building_id"].astype(np.int32)
labels["damage_grade"] = labels["damage_grade"].astype(np.int8)
important_values = values\
.merge(labels, on="building_id")
important_values.drop(columns=["building_id"], inplace = True)
important_values["geo_level_1_id"] = important_values["geo_level_1_id"].astype("category")
important_values
X_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'),
important_values['damage_grade'], test_size = 0.2, random_state = 123)
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for feature in features_to_encode:
X_train = encode_and_bind(X_train, feature)
X_test = encode_and_bind(X_test, feature)
!pip install catboost
from catboost import CatBoostClassifier
model = CatBoostClassifier( n_estimators = 1700,
max_depth = None,
learning_rate = 0.15,
boost_from_average = False,
verbose=True,
iterations = 5000)
model.fit(X_train, y_train)
model_pred = model.predict(X_test)
f1_score(y_test, model_pred, average='micro')
test_values = pd.read_csv('../../csv/test_values.csv', index_col = "building_id")
test_values_subset = test_values
test_values_subset["geo_level_1_id"] = test_values_subset["geo_level_1_id"].astype("category")
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for feature in features_to_encode:
test_values_subset = encode_and_bind(test_values_subset, feature)
preds = model.predict(test_values_subset)
submission_format = pd.read_csv('../../csv/submission_format.csv', index_col = "building_id")
my_submission = pd.DataFrame(data=preds,
columns=submission_format.columns,
index=submission_format.index)
my_submission.to_csv('../../csv/predictions/CatBoostLaucha.csv')
###Output
_____no_output_____ |
fiona_temp_rnaseq/pipeline/notebook_processed/AT1G01060_gene_tracks_ref.py.ipynb | ###Markdown
Gene tracks
###Code
import os
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib import patches, gridspec
import seaborn as sns
from IPython.display import display_markdown, Markdown
from genetrack_utils import plot_gene_track
%matplotlib inline
sns.set(font='Arial')
plt.rcParams['svg.fonttype'] = 'none'
style = sns.axes_style('white')
style.update(sns.axes_style('ticks'))
style['xtick.major.size'] = 1
style['ytick.major.size'] = 1
sns.set(font_scale=1.2, style=style)
pal = sns.color_palette(['#0072b2', '#d55e00', '#009e73', '#f0e442', '#cc79a7', '#56b4e9', '#e69f00'])
cmap = ListedColormap(pal.as_hex())
sns.set_palette(pal)
sns.palplot(pal)
plt.show()
OUTPUT_PATH = snakemake.output.gene_tracks
if not os.path.exists(OUTPUT_PATH):
os.makedirs(OUTPUT_PATH)
GENETRACK_TEMP = 20
FWD_BWS = sorted([fn for fn in snakemake.input.bws if re.search(f'{GENETRACK_TEMP}c.fwd.bw', fn)])
REV_BWS = sorted([fn for fn in snakemake.input.bws if re.search(f'{GENETRACK_TEMP}c.rev.bw', fn)])
LABELS = ['Col-0', 'fio1-3']
def convert_coords(coords):
if not coords:
return None
else:
return np.fromstring(coords.strip('[] '), sep=' ', dtype=int)
psi = pd.read_csv(
snakemake.input.psi,
header=[0, 1],
skiprows=[2,],
index_col=0,
converters={2: str,
3: convert_coords,
4: convert_coords}
)
gene_psi = psi[psi.metadata.gene_id.str.contains(snakemake.wildcards.gene_id)].psi
gene_psi_melt = pd.melt(
gene_psi.reset_index(),
id_vars='index',
value_vars=gene_psi.columns,
var_name='sample_id',
value_name='psi'
)
gene_psi_melt[['geno', 'temp']] = gene_psi_melt.sample_id.str.split('_', expand=True)[[0, 1]]
gene_psi_melt['temp'] = gene_psi_melt.temp.str.extract('(\d+)c', expand=True).astype(int)
psi_fit = pd.read_csv(
snakemake.input.psi_fit,
converters={'chrom': str,
'alt1': convert_coords,
'alt2': convert_coords},
index_col=0
)
gene_psi_fit = psi_fit[psi_fit.gene_id.str.contains(snakemake.wildcards.gene_id)]
gene_psi_fit_sig = gene_psi_fit.query('(geno_fdr < 0.05 | gxt_fdr < 0.05) & abs(dpsi) > 0.05')
gene_psi_fit_sig
display_markdown(Markdown(f'## {snakemake.wildcards.gene_id} gene tracks and boxplots'))
for event_id, record in gene_psi_fit_sig.iterrows():
try:
plot_gene_track(
record, gene_psi_melt.query(f'index == "{event_id}"'),
FWD_BWS if record.strand == '+' else REV_BWS,
LABELS,
snakemake.input.fasta,
title=f'{snakemake.wildcards.gene_id} {event_id}'
)
plt.savefig(os.path.join(OUTPUT_PATH, f'{snakemake.wildcards.gene_id}_{event_id}_gene_track.svg'))
plt.show()
except NotImplementedError:
continue
###Output
_____no_output_____ |
01-SimpleAnalysis.ipynb | ###Markdown
Simple analysis====In this notebook we start with a simple analysis to understand how PyTorch and TorchText work. We will use the IMDB dataset as a reference; we are not interested in the final result, only in understanding how things work. First of all we set the seed so that the experiment is reproducible.
###Code
import torch
from torchtext import data
SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
Preparing the data----The main concept in TorchText is the Field, which defines how the text is processed. In our dataset the input is a string with the review to analyse and the label that classifies the review. The parameters of the Field object tell us how the object must be processed. For convenience we define two kinds of field: + TEXT, which processes the review + LABEL, which processes the labelTEXT has the property tokenize='spacy'; if nothing is passed for this property the string is split on whitespace. LABEL is a LabelField, a subclass of Field used to handle labels. For more info see the [link](https://github.com/pytorch/text/blob/master/torchtext/data/field.py)
###Code
TEXT = data.Field(tokenize = 'spacy')
LABEL = data.LabelField(dtype = torch.float)
###Output
_____no_output_____
###Markdown
Let's download the dataset
###Code
from torchtext import datasets
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
###Output
_____no_output_____
###Markdown
Let's check how much data it contains and the split per class
###Code
print(f'Number of training examples: {len(train_data)}')
print(f'Number of testing examples: {len(test_data)}')
pos = 0
neg = 0
for row in train_data[:]:
label = row.label
if(label == 'pos'):
pos += 1
else:
neg +=1
print(f'Number of training positive: {pos} negative {neg}')
pos = 0
neg = 0
for row in test_data[:]:
label = row.label
if(label == 'pos'):
pos += 1
else:
neg +=1
print(f'Number of testing positive: {pos} negative {neg}')
###Output
Number of training examples: 25000
Number of testing examples: 25000
Number of training positive: 12500 negative 12500
Number of testing positive: 12500 negative 12500
###Markdown
Let's see what an example looks like
###Code
print(vars(train_data.examples[0]))
###Output
{'text': ['Bromwell', 'High', 'is', 'a', 'cartoon', 'comedy', '.', 'It', 'ran', 'at', 'the', 'same', 'time', 'as', 'some', 'other', 'programs', 'about', 'school', 'life', ',', 'such', 'as', '"', 'Teachers', '"', '.', 'My', '35', 'years', 'in', 'the', 'teaching', 'profession', 'lead', 'me', 'to', 'believe', 'that', 'Bromwell', 'High', "'s", 'satire', 'is', 'much', 'closer', 'to', 'reality', 'than', 'is', '"', 'Teachers', '"', '.', 'The', 'scramble', 'to', 'survive', 'financially', ',', 'the', 'insightful', 'students', 'who', 'can', 'see', 'right', 'through', 'their', 'pathetic', 'teachers', "'", 'pomp', ',', 'the', 'pettiness', 'of', 'the', 'whole', 'situation', ',', 'all', 'remind', 'me', 'of', 'the', 'schools', 'I', 'knew', 'and', 'their', 'students', '.', 'When', 'I', 'saw', 'the', 'episode', 'in', 'which', 'a', 'student', 'repeatedly', 'tried', 'to', 'burn', 'down', 'the', 'school', ',', 'I', 'immediately', 'recalled', '.........', 'at', '..........', 'High', '.', 'A', 'classic', 'line', ':', 'INSPECTOR', ':', 'I', "'m", 'here', 'to', 'sack', 'one', 'of', 'your', 'teachers', '.', 'STUDENT', ':', 'Welcome', 'to', 'Bromwell', 'High', '.', 'I', 'expect', 'that', 'many', 'adults', 'of', 'my', 'age', 'think', 'that', 'Bromwell', 'High', 'is', 'far', 'fetched', '.', 'What', 'a', 'pity', 'that', 'it', 'is', "n't", '!'], 'label': 'pos'}
###Markdown
If the dataset only has train and test splits we can use the ```.split()``` function to divide it. By default a 70/30 ratio is used, which can be changed with the ```split_ratio``` parameter; we can also pass ```random_state``` to make sure we always get the same result.
###Code
import random
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
print(f'Number of training examples: {len(train_data)}')
print(f'Number of validation examples: {len(valid_data)}')
###Output
Number of training examples: 17500
Number of validation examples: 7500
###Markdown
We need to build a vocabulary, where each word has a corresponding number, its index. This is because almost all machine learning models operate on numbers. A set of _one-hot_ vectors will be built, one per word. A _one-hot_ vector has all elements set to 0 except for one position set to 1 (the index); its dimensionality is the total number of unique words in the dictionary, commonly denoted $V$. Now, the number of words in the dataset exceeds 100,000, which means the dimensionality of $V$ exceeds that figure. This can cause problems with GPU memory, so we set a maximum dictionary size of 25,000 words. What do we do with the words that are dropped from the dictionary? We replace them with a special token *UNK*, which stands for unknown.
###Code
MAX_VOCAB_SIZE = 25000
TEXT.build_vocab(train_data, max_size = MAX_VOCAB_SIZE)
LABEL.build_vocab(train_data)
###Output
_____no_output_____
###Markdown
Let's check the size of the dictionary
###Code
print(f"Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}")
print(f"Unique tokens in LABEL vocabulary: {len(LABEL.vocab)}")
###Output
Unique tokens in TEXT vocabulary: 25002
Unique tokens in LABEL vocabulary: 2
###Markdown
Why is the dictionary size 25002? Because we have two extra tokens: ```<unk>``` to handle the discarded words, and ```<pad>```. What is ```<pad>``` for? When we feed a model we sometimes need all sentences to have the same length, so shorter sentences are filled with ```<pad>``` until they all reach the same length. Let's look at the most frequent words in the training set
###Code
print(TEXT.vocab.freqs.most_common(20))
###Output
[('the', 203562), (',', 192482), ('.', 165200), ('and', 109442), ('a', 109116), ('of', 100702), ('to', 93766), ('is', 76327), ('in', 61254), ('I', 54000), ('it', 53502), ('that', 49185), ('"', 44277), ("'s", 43315), ('this', 42438), ('-', 36691), ('/><br', 35752), ('was', 35033), ('as', 30384), ('with', 29774)]
###Markdown
We can access the vocabulary directly using the itos (**i**nteger **to** **s**tring) and stoi (**s**tring **to** **i**nteger) methods
###Code
TEXT.vocab.itos[:10]
TEXT.vocab.stoi['hello']
print(LABEL.vocab.stoi)
###Output
defaultdict(None, {'neg': 0, 'pos': 1})
###Markdown
The final step of data preparation is creating the iterators. We will use them to walk through the data in the training/evaluation loop. The iterators return a batch of examples (already converted to tensors) at each iteration. We will use a ```BucketIterator```, a special iterator that returns batches of examples with similar lengths, minimizing the amount of padding per iteration. We also want to place the returned tensors on the GPU if possible; this can be done by passing a ```torch.device``` to the iterator.
###Code
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
###Output
_____no_output_____
###Markdown
Building the model----Let's build a toy model for our experiments. First of all we need to create a class that extends ```nn.Module```. Inside ```__init__``` we call the ```nn.Module``` constructor via super and define the model's layers. We start by creating three layers:**Embedding Layer** used to transform our one-hot vector into a dense vector. This layer is "simply" a fully connected layer. As an added benefit, words with similar meaning are mapped close to each other in this space.**RNN** The RNN takes as input the dense vector and the previous memory $h_{(t-1)}$ and uses them to compute $h_t$.**Linear Layer** Finally, the linear layer takes the final hidden state and uses it to produce the output $f(h_t)$ with the desired dimension. The ```forward``` method is called to produce the output from the examples. In each batch, text is a tensor of size **[sentence length, batch size]**, which will be transformed into a one-hot vector. The one-hot transformation is done on the fly inside the embedding module to save space. The output is then passed to the embedding layer, which returns the vectorized version of the sentence; embedded is a tensor of size **[sentence length, batch size, embedding dim]**. embedded then feeds the RNN; if no $h_0$ vector is passed, PyTorch automatically initializes it with all zeros. The RNN returns two tensors: ```output``` of size **[sentence length, batch size, hidden dim]** and ```hidden```, which is the last hidden state. output is the concatenation of all the hidden states; we verify this with the assert, and ```squeeze``` removes the dimension of size 1. Finally we pass ```hidden``` to the ```fc``` layer to obtain the result.
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):
super().__init__()
self.embedding = nn.Embedding(input_dim, embedding_dim)
self.rnn = nn.RNN(embedding_dim, hidden_dim)
self.fc = nn.Linear(hidden_dim, output_dim)
def forward(self, text):
        # text has shape [sent len, batch size]
embedded = self.embedding(text)
        # embedded has shape [sent len, batch size, emb dim]
output, hidden = self.rnn(embedded)
        # output contains all the hidden states concatenated, shape [sent len, batch size, hid dim]
        # hidden has shape [1, batch size, hid dim] and contains the last state of the RNN
hidden_squeeze = hidden.squeeze(0)
assert torch.equal(output[-1,:,:], hidden_squeeze)
return self.fc(hidden_squeeze)
###Output
_____no_output_____
###Markdown
Let's instantiate the network, setting the input to the dimensionality of the *one-hot* dictionary and an output of dimension 1 (binary problem)
###Code
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
HIDDEN_DIM = 256
OUTPUT_DIM = 1
model = RNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM)
model
###Output
_____no_output_____
###Markdown
Let's also create a function that tells us the number of trainable parameters
###Code
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
###Output
The model has 2,592,105 trainable parameters
###Markdown
Training the model---- First we need to choose an optimizer, the algorithm that optimizes the network weights. We pick a simple **SGD, stochastic gradient descent**. The first argument is the weights that will be optimized, the second is the **learning rate**, which sets how fast the optimizer tries to approach a possible solution.
###Code
import torch.optim as optim
optimizer = optim.SGD(model.parameters(), lr=1e-3)
###Output
_____no_output_____
###Markdown
Now let's define a loss function, which tells us how far we are from the solution. In PyTorch it is often called ```criterion```. We will use the *binary cross entropy with logits* function. Our model returns a real number (not restricted to the range 0-1) while our labels are the two integers 0 and 1, so we must squash the output using a *sigmoid* (logistic) function. Once the output is squashed we compute the loss (how far we are from the solution) using the [binary cross entropy](https://machinelearningmastery.com/cross-entropy-for-machine-learning/) formula. The ```BCEWithLogitsLoss``` function performs both of these steps; a quick numeric check of this equivalence is sketched below.
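As a small sanity check (a sketch with made-up values, not part of the original experiment), ```BCEWithLogitsLoss``` gives the same result as applying a sigmoid followed by ```BCELoss```:

```python
import torch
import torch.nn as nn

logits = torch.tensor([0.8, -1.2, 2.5])   # made-up raw model outputs (real numbers)
targets = torch.tensor([1.0, 0.0, 1.0])   # made-up binary labels, as floats

loss_with_logits = nn.BCEWithLogitsLoss()(logits, targets)
loss_manual = nn.BCELoss()(torch.sigmoid(logits), targets)

print(loss_with_logits.item(), loss_manual.item())
assert torch.allclose(loss_with_logits, loss_manual)
```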
###Code
criterion = nn.BCEWithLogitsLoss()
###Output
_____no_output_____
###Markdown
Now, if a GPU is available, we move the model and the criterion onto it with ```.to()```
###Code
model = model.to(device)
criterion = criterion.to(device)
###Output
_____no_output_____
###Markdown
We also create a function to visually assess whether our model is doing well (a metric); we use accuracy. Note that this metric is not necessarily appropriate for every problem. First we feed the network, then we squash the result between 0 and 1 and round it to the nearest integer: the round means that if the number is greater than 0.5 the result will be 1, otherwise 0.
###Code
def binary_accuracy(preds, y):
"""
Ritorna l'accuratezza per ogni batch ad esempio se abbiamo indovinato 8 esempi su 10 avremo un risultato di 0.8
"""
#arrotondo il risultato
rounded_preds = torch.round(torch.sigmoid(preds))
#cambio il tipo in float in quanto altrimento avrei una divisione intera
correct = (rounded_preds == y).float()
acc = correct.sum() / len(correct)
return acc
###Output
_____no_output_____
###Markdown
We are ready to create the train function, which goes through all the records in the dataset in batches of size ```BATCH_SIZE```. First we put the model into training mode with ```model.train()``` to enable *dropout* and *batch normalization*; even though this model has neither, it is good practice. For each batch we zero the gradients: every model parameter has a ```grad``` attribute computed via ```criterion```, and this attribute is not cleared automatically by PyTorch, so we have to do it ourselves. We then feed the model with the text ```batch.text``` (note that this attribute changes for each dataset); the ```forward``` function is called automatically. Next we remove one dimension from the result to align it with the ```batch.label``` field. We then compute the loss and accuracy for the batch, which are accumulated to compute the average at the end of the iterations. The most important steps are ```loss.backward()```, which computes the gradient for each weight, and ```optimizer.step()```, which updates the model. Notice that we explicitly declared ```LABEL``` as a float field: if we had not, TorchText would automatically map the field to ```LongTensors```, whereas the criterion expects a float.
###Code
from tqdm import tqdm
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in tqdm(iterator):
optimizer.zero_grad()
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
###Output
_____no_output_____
###Markdown
```evaluate``` is similar to train, except that we do not update the weights and we do not use *dropout* or *batch normalization*. We also do not need to compute gradients, which is done with ```no_grad()```; this uses less memory and makes the computation faster. The rest of the function is the same as train.
###Code
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in tqdm(iterator):
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
###Output
_____no_output_____
###Markdown
Now it is time to put all the pieces together and see the result: at each epoch we train on all the batches and measure how well the model performs on the validation set. Moreover, if the model achieves the best validation loss so far, we save its weights.
###Code
N_EPOCHS = 5
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut1-model.pt')
print(f'Epoch: {epoch+1:02}')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
###Output
100%|████████████████████████████████████████████████████████████████████████████████| 274/274 [00:20<00:00, 13.18it/s]
100%|████████████████████████████████████████████████████████████████████████████████| 118/118 [00:01<00:00, 75.81it/s]
1%|▌ | 2/274 [00:00<00:21, 12.53it/s]
###Markdown
We have seen that our model did not perform very well; no matter, that was not the point of the exercise. Let's see how the model behaves on the test set.
###Code
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
###Output
100%|████████████████████████████████████████████████████████████████████████████████| 391/391 [00:05<00:00, 77.91it/s]
###Markdown
Let's also try the model on a sentence of our choice
###Code
import spacy
nlp = spacy.load('en')
def predict_sentiment(model, sentence):
model.eval()
tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
length = [len(indexed)]
tensor = torch.LongTensor(indexed).to(device)
tensor = tensor.unsqueeze(1)
length_tensor = torch.LongTensor(length)
prediction = torch.sigmoid(model(tensor))
return prediction.item()
###Output
_____no_output_____
###Markdown
An example of a negative review
###Code
predict_sentiment(model, "This film is terrible")
###Output
_____no_output_____
###Markdown
An example of a positive review
###Code
predict_sentiment(model, "This film is great")
###Output
_____no_output_____ |
docs/matplotlib/graph-tool.ipynb | ###Markdown
graph-tool[graph-tool](https://graph-tool.skewed.de/) provides a Graph class and several algorithms that operate on it. The internals of this class and of most algorithms are written in C++ for performance reasons and use the [Boost Graph Library](http://www.boost.org/). Installationgraph-tool is a C++ library wrapped in Python with many C++ dependencies such as [Boost](http://www.boost.org/), [CGAL](http://www.cgal.org/) and [expat](http://expat.sourceforge.net/). The easiest way to install graph-tool is therefore via a package manager on Linux distributions and macOS: LinuxFor Debian or Ubuntu you can add the following line to your `/etc/apt/sources.list`:```deb [arch=amd64] https://downloads.skewed.de/apt DISTRIBUTION main```where `DISTRIBUTION` can be one of the following values:```bullseye, buster, sid, bionic, eoan, focal, groovy```You should then download the public key [612DEFB798507F25](https://keys.openpgp.org/search?q=612DEFB798507F25) in order to verify the packages with the following command:``` bash$ apt-key adv --keyserver keys.openpgp.org --recv-key 612DEFB798507F25```After running `apt update` the package can be installed with``` bash$ apt install python3-graph-tool``` MacOSWith [Homebrew](http://brew.sh/) the installation is also straightforward:``` bash$ brew install graph-tool```Afterwards we still need to tell the Python interpreter where it can find `graph-tool`:``` bash$ export PYTHONPATH="$PYTHONPATH:/usr/local/Cellar/graph-tool/2.43/lib/python3.9/site-packages"``` Testing the installation
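If the import below succeeds, it can also be helpful to print the version that was actually picked up, for example to confirm that the `PYTHONPATH` step above points at the intended installation. This is only a small sketch; the exact version string will depend on your system:

```python
import graph_tool
print(graph_tool.__version__)
```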
###Code
from graph_tool.all import *
###Output
_____no_output_____
###Markdown
Creating and manipulating graphsAn empty graph can be created by instantiating a Graph class:
###Code
g = Graph()
###Output
_____no_output_____
###Markdown
A graph can be switched from directed to undirected at any time with the [set_directed()](https://graph-tool.skewed.de/static/doc/graph_tool.htmlgraph_tool.Graph.set_directed) method, and the directedness of the graph can be queried with the [is_directed()](https://graph-tool.skewed.de/static/doc/graph_tool.htmlgraph_tool.Graph.is_directed) method:
###Code
ug = Graph()
ug.set_directed(False)
assert ug.is_directed() == False
###Output
_____no_output_____
###Markdown
Once a graph has been created, it can be populated with vertices and edges. A vertex can be added with the [add_vertex()](https://graph-tool.skewed.de/static/doc/graph_tool.htmlgraph_tool.Graph.add_vertex) method, which returns an instance of a [Vertex](https://graph-tool.skewed.de/static/doc/graph_tool.htmlgraph_tool.Vertex) class, also called a *vertex descriptor*. For example, the following code creates two vertices and returns *vertex descriptors* that are stored in the variables `v1` and `v2`:
###Code
v1 = g.add_vertex()
v2 = g.add_vertex()
e = g.add_edge(v1, v2)
graph_draw(g, vertex_text=g.vertex_index, output="two-nodes.svg")
from IPython.display import SVG
SVG('two-nodes.svg')
###Output
_____no_output_____ |
01_Getting_&_Knowing_Your_Data/Occupation/Solutions.ipynb | ###Markdown
Ex3 - Getting and Knowing your Data This time we are going to pull data directly from the internet.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users and use the 'user_id' as index Step 4. See the first 25 entries Step 5. See the last 10 entries Step 6. What is the number of observations in the dataset? Step 7. What is the number of columns in the dataset? Step 8. Print the name of all the columns. Step 9. How is the dataset indexed?
###Code
# "the index" (aka "the labels")
###Output
_____no_output_____
###Markdown
Ex3 - Getting and Knowing your DataCheck out [Occupation Exercises Video Tutorial](https://www.youtube.com/watch?v=W8AB5s-L3Rw&list=PLgJhDSE2ZLxaY_DigHeiIDC1cD09rXgJv&index=4) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users and use the 'user_id' as index Step 4. See the first 25 entries Step 5. See the last 10 entries Step 6. What is the number of observations in the dataset? Step 7. What is the number of columns in the dataset? Step 8. Print the name of all the columns. Step 9. How is the dataset indexed?
###Code
# "the index" (aka "the labels")
###Output
_____no_output_____
###Markdown
Ex3 - Getting and Knowing your Data This time we are going to pull data directly from the internet.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users and use the 'user_id' as index Step 4. See the first 25 entries Step 5. See the last 10 entries Step 6. What is the number of observations in the dataset? Step 7. What is the number of columns in the dataset? Step 8. Print the name of all the columns. Step 9. How is the dataset indexed? Step 10. What is the data type of each column? Step 11. Print only the occupation column
###Code
#OR
###Output
_____no_output_____
###Markdown
Ex3 - Getting and Knowing your DataCheck out [Occupation Exercises Video Tutorial](https://www.youtube.com/watch?v=W8AB5s-L3Rw&list=PLgJhDSE2ZLxaY_DigHeiIDC1cD09rXgJv&index=4) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users and use the 'user_id' as index
###Code
users = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user',
sep ='|' ,index_col ='user_id')
###Output
_____no_output_____
###Markdown
Step 4. See the first 25 entries
###Code
users.head(25)
###Output
_____no_output_____
###Markdown
Step 5. See the last 10 entries
###Code
users.tail(10)
###Output
_____no_output_____
###Markdown
Step 6. What is the number of observations in the dataset?
###Code
users.shape[0]
###Output
_____no_output_____
###Markdown
Step 7. What is the number of columns in the dataset?
###Code
users.shape[1]
###Output
_____no_output_____
###Markdown
Step 8. Print the name of all the columns.
###Code
users.columns
###Output
_____no_output_____
###Markdown
Step 9. How is the dataset indexed?
###Code
# "the index" (aka "the labels")
users.index
###Output
_____no_output_____
###Markdown
Step 10. What is the data type of each column?
###Code
users.dtypes
###Output
_____no_output_____
###Markdown
Step 11. Print only the occupation column
###Code
users.occupation
###Output
_____no_output_____
###Markdown
Step 12. How many different occupations are in this dataset?
###Code
#users.occupation.value_counts().count()
users.occupation.nunique()
###Output
_____no_output_____
###Markdown
Step 13. What is the most frequent occupation?
###Code
users.occupation.value_counts().head(1).index[0]
###Output
_____no_output_____
###Markdown
Step 14. Summarize the DataFrame.
###Code
users.describe()
###Output
_____no_output_____
###Markdown
Step 15. Summarize all the columns
###Code
users.describe(include ='all')
###Output
_____no_output_____
###Markdown
Step 16. Summarize only the occupation column
###Code
users.occupation.describe()
###Output
_____no_output_____
###Markdown
Step 17. What is the mean age of users?
###Code
round(users.age.mean())
###Output
_____no_output_____
###Markdown
Step 18. What is the age with least occurrence?
###Code
users.age.value_counts().tail()
###Output
_____no_output_____
###Markdown
Ex3 - Getting and Knowing your DataCheck out [Occupation Exercises Video Tutorial](https://www.youtube.com/watch?v=W8AB5s-L3Rw&list=PLgJhDSE2ZLxaY_DigHeiIDC1cD09rXgJv&index=4) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users and use the 'user_id' as index
###Code
users = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user', sep='|') # wrong
users = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user', sep='|', index_col="user_id") # right
users_orig = users.copy()
# users = users.set_index('user_id') # no need for this
###Output
_____no_output_____
###Markdown
Step 4. See the first 25 entries
###Code
users.head(25)
###Output
_____no_output_____
###Markdown
Step 5. See the last 10 entries
###Code
users.tail(10)
###Output
_____no_output_____
###Markdown
Step 6. What is the number of observations in the dataset?
###Code
users.shape[0]
###Output
_____no_output_____
###Markdown
Step 7. What is the number of columns in the dataset?
###Code
users.shape[1]
###Output
_____no_output_____
###Markdown
Step 8. Print the name of all the columns.
###Code
users.columns
###Output
_____no_output_____
###Markdown
Step 9. How is the dataset indexed?
###Code
# "the index" (aka "the labels")
users.index
###Output
_____no_output_____
###Markdown
Step 10. What is the data type of each column?
###Code
users.info() # wrong
users.dtypes # right
###Output
_____no_output_____
###Markdown
Step 11. Print only the occupation column
###Code
users.occupation
###Output
_____no_output_____
###Markdown
Step 12. How many different occupations are in this dataset?
###Code
users.occupation.value_counts().count() # wrong: not idiomatic
users.occupation.nunique() # idiomatic
###Output
_____no_output_____
###Markdown
Step 13. What is the most frequent occupation?
###Code
users.groupby("occupation").count().age.sort_values()[-1] # wrong
users.occupation.value_counts().head(1).index[0] # right
###Output
_____no_output_____
###Markdown
Step 14. Summarize the DataFrame.
###Code
users.describe()
###Output
_____no_output_____
###Markdown
Step 15. Summarize all the columns
###Code
users.describe(include="all")
###Output
_____no_output_____
###Markdown
Step 16. Summarize only the occupation column
###Code
users.occupation.describe()
###Output
_____no_output_____
###Markdown
Step 17. What is the mean age of users?
###Code
import numpy as np
int(np.round(users.age.mean()))
###Output
_____no_output_____
###Markdown
Step 18. What is the age with least occurrence?
###Code
users.groupby("age").count().sort_values("age").head(1).index[0] # wrong
users.age.value_counts().tail() # right
###Output
_____no_output_____
###Markdown
Ex3 - Getting and Knowing your DataCheck out [Occupation Exercises Video Tutorial](https://www.youtube.com/watch?v=W8AB5s-L3Rw&list=PLgJhDSE2ZLxaY_DigHeiIDC1cD09rXgJv&index=4) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users and use the 'user_id' as index Step 4. See the first 25 entries Step 5. See the last 10 entries Step 6. What is the number of observations in the dataset? Step 7. What is the number of columns in the dataset? Step 8. Print the name of all the columns. Step 9. How is the dataset indexed?
###Code
# "the index" (aka "the labels")
###Output
_____no_output_____
###Markdown
Ex3 - Getting and Knowing your Data This time we are going to pull data directly from the internet.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users and use the 'user_id' as index Step 4. See the first 25 entries Step 5. See the last 10 entries Step 6. What is the number of observations in the dataset? Step 7. What is the number of columns in the dataset? Step 8. Print the name of all the columns. Step 9. How is the dataset indexed? Step 10. What is the data type of each column? Step 11. Print only the occupation column
###Code
#OR
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset directly from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users and use 'user_id' as the index. Step 4. See the first 25 entries. Step 5. See the last 10 entries. Step 6. What is the total number of observations in the dataset? Step 7. What is the total number of columns in the dataset? Step 8. Print the names of all the columns. Step 9. How is the dataset indexed?
###Code
# "the index" (aka "the labels")
###Output
_____no_output_____
###Markdown
Ex3 - Getting and Knowing your Data This time we are going to pull data directly from the internet.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users and use the 'user_id' as index Step 4. See the first 25 entries Step 5. See the last 10 entries Step 6. What is the number of observations in the dataset? Step 7. What is the number of columns in the dataset? Step 8. Print the name of all the columns. Step 9. How is the dataset indexed?
###Code
# "the index" (aka "the labels")
###Output
_____no_output_____
research/object_detection/colab_tutorials/inference_from_saved_model_tf2_colab.ipynb | ###Markdown
Intro to Object Detection ColabWelcome to the object detection colab! This demo will take you through the steps of running an "out-of-the-box" detection model in SavedModel format on a collection of images. Imports
###Code
!pip install -U --pre tensorflow=="2.2.0"
import os
import pathlib
# Clone the tensorflow models repository if it doesn't already exist
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
# Install the Object Detection API
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
import io
import os
import scipy.misc
import numpy as np
import six
import time
from six import BytesIO
import matplotlib
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw, ImageFont
import tensorflow as tf
from object_detection.utils import visualization_utils as viz_utils
%matplotlib inline
def load_image_into_numpy_array(path):
"""Load an image from file into a numpy array.
Puts image into numpy array to feed into tensorflow graph.
Note that by convention we put it into a numpy array with shape
(height, width, channels), where channels=3 for RGB.
Args:
path: a file path (this can be local or on colossus)
Returns:
uint8 numpy array with shape (img_height, img_width, 3)
"""
img_data = tf.io.gfile.GFile(path, 'rb').read()
image = Image.open(BytesIO(img_data))
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
# Load the COCO Label Map
category_index = {
1: {'id': 1, 'name': 'person'},
2: {'id': 2, 'name': 'bicycle'},
3: {'id': 3, 'name': 'car'},
4: {'id': 4, 'name': 'motorcycle'},
5: {'id': 5, 'name': 'airplane'},
6: {'id': 6, 'name': 'bus'},
7: {'id': 7, 'name': 'train'},
8: {'id': 8, 'name': 'truck'},
9: {'id': 9, 'name': 'boat'},
10: {'id': 10, 'name': 'traffic light'},
11: {'id': 11, 'name': 'fire hydrant'},
13: {'id': 13, 'name': 'stop sign'},
14: {'id': 14, 'name': 'parking meter'},
15: {'id': 15, 'name': 'bench'},
16: {'id': 16, 'name': 'bird'},
17: {'id': 17, 'name': 'cat'},
18: {'id': 18, 'name': 'dog'},
19: {'id': 19, 'name': 'horse'},
20: {'id': 20, 'name': 'sheep'},
21: {'id': 21, 'name': 'cow'},
22: {'id': 22, 'name': 'elephant'},
23: {'id': 23, 'name': 'bear'},
24: {'id': 24, 'name': 'zebra'},
25: {'id': 25, 'name': 'giraffe'},
27: {'id': 27, 'name': 'backpack'},
28: {'id': 28, 'name': 'umbrella'},
31: {'id': 31, 'name': 'handbag'},
32: {'id': 32, 'name': 'tie'},
33: {'id': 33, 'name': 'suitcase'},
34: {'id': 34, 'name': 'frisbee'},
35: {'id': 35, 'name': 'skis'},
36: {'id': 36, 'name': 'snowboard'},
37: {'id': 37, 'name': 'sports ball'},
38: {'id': 38, 'name': 'kite'},
39: {'id': 39, 'name': 'baseball bat'},
40: {'id': 40, 'name': 'baseball glove'},
41: {'id': 41, 'name': 'skateboard'},
42: {'id': 42, 'name': 'surfboard'},
43: {'id': 43, 'name': 'tennis racket'},
44: {'id': 44, 'name': 'bottle'},
46: {'id': 46, 'name': 'wine glass'},
47: {'id': 47, 'name': 'cup'},
48: {'id': 48, 'name': 'fork'},
49: {'id': 49, 'name': 'knife'},
50: {'id': 50, 'name': 'spoon'},
51: {'id': 51, 'name': 'bowl'},
52: {'id': 52, 'name': 'banana'},
53: {'id': 53, 'name': 'apple'},
54: {'id': 54, 'name': 'sandwich'},
55: {'id': 55, 'name': 'orange'},
56: {'id': 56, 'name': 'broccoli'},
57: {'id': 57, 'name': 'carrot'},
58: {'id': 58, 'name': 'hot dog'},
59: {'id': 59, 'name': 'pizza'},
60: {'id': 60, 'name': 'donut'},
61: {'id': 61, 'name': 'cake'},
62: {'id': 62, 'name': 'chair'},
63: {'id': 63, 'name': 'couch'},
64: {'id': 64, 'name': 'potted plant'},
65: {'id': 65, 'name': 'bed'},
67: {'id': 67, 'name': 'dining table'},
70: {'id': 70, 'name': 'toilet'},
72: {'id': 72, 'name': 'tv'},
73: {'id': 73, 'name': 'laptop'},
74: {'id': 74, 'name': 'mouse'},
75: {'id': 75, 'name': 'remote'},
76: {'id': 76, 'name': 'keyboard'},
77: {'id': 77, 'name': 'cell phone'},
78: {'id': 78, 'name': 'microwave'},
79: {'id': 79, 'name': 'oven'},
80: {'id': 80, 'name': 'toaster'},
81: {'id': 81, 'name': 'sink'},
82: {'id': 82, 'name': 'refrigerator'},
84: {'id': 84, 'name': 'book'},
85: {'id': 85, 'name': 'clock'},
86: {'id': 86, 'name': 'vase'},
87: {'id': 87, 'name': 'scissors'},
88: {'id': 88, 'name': 'teddy bear'},
89: {'id': 89, 'name': 'hair drier'},
90: {'id': 90, 'name': 'toothbrush'},
}
# Download the saved model and put it into models/research/object_detection/test_data/
!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d5_coco17_tpu-32.tar.gz
!tar -xf efficientdet_d5_coco17_tpu-32.tar.gz
!mv efficientdet_d5_coco17_tpu-32/ models/research/object_detection/test_data/
start_time = time.time()
tf.keras.backend.clear_session()
detect_fn = tf.saved_model.load('models/research/object_detection/test_data/efficientdet_d5_coco17_tpu-32/saved_model/')
end_time = time.time()
elapsed_time = end_time - start_time
print('Elapsed time: ' + str(elapsed_time) + 's')
import time
image_dir = 'models/research/object_detection/test_images'
elapsed = []
for i in range(2):
image_path = os.path.join(image_dir, 'image' + str(i + 1) + '.jpg')
image_np = load_image_into_numpy_array(image_path)
input_tensor = np.expand_dims(image_np, 0)
start_time = time.time()
detections = detect_fn(input_tensor)
end_time = time.time()
elapsed.append(end_time - start_time)
plt.rcParams['figure.figsize'] = [42, 21]
label_id_offset = 1
image_np_with_detections = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections,
detections['detection_boxes'][0].numpy(),
detections['detection_classes'][0].numpy().astype(np.int32),
detections['detection_scores'][0].numpy(),
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.40,
agnostic_mode=False)
plt.subplot(2, 1, i+1)
plt.imshow(image_np_with_detections)
mean_elapsed = sum(elapsed) / float(len(elapsed))
print('Elapsed time: ' + str(mean_elapsed) + ' second per image')
###Output
_____no_output_____
###Markdown
Intro to Object Detection ColabWelcome to the object detection colab! This demo will take you through the steps of running an "out-of-the-box" detection model in SavedModel format on a collection of images. Imports
###Code
!pip install -U --pre tensorflow=="2.2.0"
import os
import pathlib
# Clone the tensorflow models repository if it doesn't already exist
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
# Install the Object Detection API
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
import io
import os
import scipy.misc
import numpy as np
import six
import time
from six import BytesIO
import matplotlib
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw, ImageFont
import tensorflow as tf
from object_detection.utils import visualization_utils as viz_utils
%matplotlib inline
def load_image_into_numpy_array(path):
"""Load an image from file into a numpy array.
Puts image into numpy array to feed into tensorflow graph.
Note that by convention we put it into a numpy array with shape
(height, width, channels), where channels=3 for RGB.
Args:
path: a file path (this can be local or on colossus)
Returns:
uint8 numpy array with shape (img_height, img_width, 3)
"""
img_data = tf.io.gfile.GFile(path, 'rb').read()
image = Image.open(BytesIO(img_data))
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
# Load the COCO Label Map
category_index = {
1: {'id': 1, 'name': 'Faizan'},
2: {'id': 2, 'name': 'Ayan'},
3: {'id': 3, 'name': 'Rehan'},
4: {'id': 4, 'name': 'Seema'},
5: {'id': 5, 'name': 'Suffyan'}
}
# Download the saved model and put it into models/research/object_detection/test_data/
!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d0_coco17_tpu-32.tar.gz
!tar -xf efficientdet_d0_coco17_tpu-32.tar.gz
!mv efficientdet_d0_coco17_tpu-32/ models/research/object_detection/test_data/
start_time = time.time()
tf.keras.backend.clear_session()
detect_fn = tf.saved_model.load('models/research/object_detection/inference_graph/saved_model')
end_time = time.time()
elapsed_time = end_time - start_time
print('Elapsed time: ' + str(elapsed_time) + 's')
import time
image_dir = 'path/to/image/dir'
image = '*.jpg'  # placeholder: os.path.join below does not expand this glob pattern, so point this at an actual image file name
elapsed = []
for i in range(2):
image_path = os.path.join(image_dir, image)
image_np = load_image_into_numpy_array(image_path)
input_tensor = np.expand_dims(image_np, 0)
start_time = time.time()
detections = detect_fn(input_tensor)
end_time = time.time()
elapsed.append(end_time - start_time)
plt.rcParams['figure.figsize'] = [42, 21]
label_id_offset = 1
image_np_with_detections = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections,
detections['detection_boxes'][0].numpy(),
detections['detection_classes'][0].numpy().astype(np.int32),
detections['detection_scores'][0].numpy(),
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.40,
agnostic_mode=False)
plt.subplot(2, 1, i+1)
plt.imshow(image_np_with_detections)
mean_elapsed = sum(elapsed) / float(len(elapsed))
print('Elapsed time: ' + str(mean_elapsed) + ' second per image')
import io
import os
import scipy.misc
import numpy as np
import six
import time
import glob
from IPython.display import display
from six import BytesIO
import matplotlib
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw, ImageFont
import tensorflow as tf
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
%matplotlib inline
def load_image_into_numpy_array(path):
"""Load an image from file into a numpy array.
Puts image into numpy array to feed into tensorflow graph.
Note that by convention we put it into a numpy array with shape
(height, width, channels), where channels=3 for RGB.
Args:
path: a file path (this can be local or on colossus)
Returns:
uint8 numpy array with shape (img_height, img_width, 3)
"""
img_data = tf.io.gfile.GFile(path, 'rb').read()
image = Image.open(BytesIO(img_data))
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
labelmap_path = 'models/research/object_detection/training/label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(labelmap_path, use_display_name=True)
category_index
tf.keras.backend.clear_session()
model = tf.saved_model.load('models/research/object_detection/inference_graph/saved_model')
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
model_fn = model.signatures['serving_default']
output_dict = model_fn(input_tensor)
# All outputs are batches tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
for image_path in glob.glob('images/test/*.jpg'):
image_np = load_image_into_numpy_array(image_path)
output_dict = run_inference_for_single_image(model, image_np)
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
Image._show(Image.fromarray(image_np))
###Output
_____no_output_____
notebooks/Emotion Intensity in Tweets dataset.ipynb | ###Markdown
Link: http://saifmohammad.com/WebPages/EmotionIntensity-SharedTask.htmlTutorial: https://realpython.com/python-keras-text-classification/
###Code
from __future__ import print_function, division
import numpy as np
import pandas as pd
import re
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib  # deprecated in newer scikit-learn versions; use "import joblib" there instead
# train data
col_names = ['id', 'expression', 'emotion', 'score']
anger_train = pd.read_csv('http://saifmohammad.com/WebDocs/EmoInt%20Train%20Data/anger-ratings-0to1.train.txt',
sep='\t', header = None, names = col_names)
fear_train = pd.read_csv('http://saifmohammad.com/WebDocs/EmoInt%20Train%20Data/fear-ratings-0to1.train.txt',
sep='\t', header = None, names = col_names)
joy_train = pd.read_csv('http://saifmohammad.com/WebDocs/EmoInt%20Train%20Data/joy-ratings-0to1.train.txt',
sep='\t', header = None, names = col_names)
sadness_train = pd.read_csv('http://saifmohammad.com/WebDocs/EmoInt%20Train%20Data/sadness-ratings-0to1.train.txt',
sep='\t', header = None, names = col_names)
train = pd.concat([anger_train, fear_train, joy_train, sadness_train])
# test data
anger_test = pd.read_csv('http://saifmohammad.com/WebDocs/EmoInt%20Test%20Gold%20Data/anger-ratings-0to1.test.gold.txt',
sep='\t', header = None, names = col_names)
fear_test = pd.read_csv('http://saifmohammad.com/WebDocs/EmoInt%20Test%20Gold%20Data/fear-ratings-0to1.test.gold.txt',
sep='\t', header = None, names = col_names)
joy_test = pd.read_csv('http://saifmohammad.com/WebDocs/EmoInt%20Test%20Gold%20Data/joy-ratings-0to1.test.gold.txt',
sep='\t', header = None, names = col_names)
sadness_test = pd.read_csv('http://saifmohammad.com/WebDocs/EmoInt%20Test%20Gold%20Data/sadness-ratings-0to1.test.gold.txt',
sep='\t', header = None, names = col_names)
test = pd.concat([anger_test, fear_test, joy_test, sadness_test])
def cleaning(expression):
#removing @, #
expression=re.sub(r'@\w+', '<subject>', expression)
expression=re.sub(r'#', '', expression)
#lower case
expression=expression.lower()
return expression
train.expression = train.expression.apply(cleaning)
test.expression = test.expression.apply(cleaning)
train.tail()
test.head()
train.to_csv('../data/external/tweeter_4classes_text_train.csv')
test.to_csv('../data/external/tweeter_4classes_text_test.csv')
len(train), len(test)
###Output
_____no_output_____
###Markdown
Simple classification Logistic regression
###Code
sentences = list(train.expression) + list(test.expression)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df=0, lowercase=True)
vectorizer.fit(sentences)
vectorizer.vocabulary_
X_train = vectorizer.transform(list(train.expression)).toarray()
X_test = vectorizer.transform(list(test.expression)).toarray()
X_train.shape, X_test.shape
# encoding
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
emotion_encoder = LabelEncoder()
y_train = emotion_encoder.fit_transform(train.emotion)
y_test = emotion_encoder.transform(test.emotion)  # reuse the encoder already fitted on the training labels
emotion_one_hot_encoder = OneHotEncoder()
# y_train = emotion_one_hot_encoder.fit_transform(y_train.reshape((-1, 1)))
# y_test = emotion_one_hot_encoder.fit_transform(y_test.reshape((-1, 1)))
emotion_encoder.classes_
joblib.dump(vectorizer, '../models/logistic_regression/vectorizer.vec')
joblib.dump(emotion_encoder, '../models/logistic_regression/emotion_encoder.enc')
y_train
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
score = classifier.score(X_test, y_test)
print("Accuracy:", score)
joblib.dump(classifier, f'../models/log_regression_{score:.2f}.model')
emotion_encoder.inverse_transform(classifier.predict(vectorizer.transform(["bitter terrorism optimism sober"])))
import eli5
eli5.show_weights(classifier, target_names=emotion_encoder.classes_, vec=vectorizer)
###Output
_____no_output_____
###Markdown
Neural network
###Code
from keras.models import Sequential
from keras import layers
input_dim = X_train.shape[1] # Number of features
output_dim = 4
model = Sequential()
model.add(layers.Dense(4, input_dim=input_dim, activation='relu'))
model.add(layers.Dense(output_dim, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',  # labels are integer-encoded (the one-hot encoding above is commented out)
optimizer='adam',
metrics=['accuracy'])
model.summary()
history = model.fit(X_train, y_train,
epochs=100,
verbose=False,
validation_data=(X_test, y_test),
batch_size=10)
loss, accuracy = model.evaluate(X_train, y_train, verbose=False)
print("Training Accuracy: {:.4f}".format(accuracy))
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print("Testing Accuracy: {:.4f}".format(accuracy))
###Output
Training Accuracy: 0.9867
Testing Accuracy: 0.7081
###Markdown
Advanced neural network
###Code
#
from keras.preprocessing.text import Tokenizer
sentences_train = list(train.expression)
sentences_test = list(test.expression)
tokenizer = Tokenizer(num_words=8000)
tokenizer.fit_on_texts(sentences_train)
X_train = tokenizer.texts_to_sequences(sentences_train)
X_test = tokenizer.texts_to_sequences(sentences_test)
vocab_size = len(tokenizer.word_index) + 1 # Adding 1 because of reserved 0 index
print(sentences_train[2])
print(X_train[2])
from keras.preprocessing.sequence import pad_sequences
maxlen = 80
X_train = pad_sequences(X_train, padding='post', maxlen=maxlen)
X_test = pad_sequences(X_test, padding='post', maxlen=maxlen)
print(X_train[1, :])
from keras.models import Sequential
from keras import layers
embedding_dim = 100
model = Sequential()
model.add(layers.Embedding(input_dim=vocab_size,
output_dim=embedding_dim,
input_length=maxlen))
# model.add(layers.Bidirectional(layers.LSTM(10, return_sequences=True, activation='relu')))
model.add(layers.Bidirectional(layers.LSTM(4, return_sequences=True, activation='relu')))
model.add(layers.Bidirectional(layers.LSTM(4, return_sequences=True, activation='relu')))
model.add(layers.Bidirectional(layers.LSTM(4, return_sequences=False, activation='softmax', dropout=0.5), merge_mode='sum'))
# model.add(layers.Flatten())
# # model.add(layers.Dense(5, activation='relu'))
# model.add(layers.Dense(4, activation='softmax'))
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',  # integer-encoded labels
metrics=['accuracy'])
model.summary()
history = model.fit(X_train, y_train,
epochs=20,
validation_data=(X_test, y_test),
batch_size=64)
loss, accuracy = model.evaluate(X_train, y_train, verbose=False)
print("Training Accuracy: {:.4f}".format(accuracy))
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print("Testing Accuracy: {:.4f}".format(accuracy))
from keras.models import Sequential
from keras import layers
embedding_dim = 200
model = Sequential()
model.add(layers.Embedding(input_dim=vocab_size,
output_dim=embedding_dim,
input_length=maxlen))
print(model.output_shape)
# model.add(layers.Bidirectional(layers.LSTM(4, return_sequences=True, activation='relu')))
model.add(layers.Conv1D(filters=12, kernel_size=3))
# model.add(layers.MaxPool1D())
# model.add(layers.Conv1D(filters=25, kernel_size=3))
# model.add(layers.MaxPool1D())
# model.add(layers.Conv1D(filters=12, kernel_size=3))
# model.add(layers.MaxPool1D())
# model.add(layers.Conv1D(filters=20, kernel_size=5))
model.add(layers.GlobalMaxPool1D())
# model.add(layers.Dense(5, activation='relu'))
model.add(layers.Dropout(0.5))
# model.add(layers.Bidirectional(layers.LSTM(4, return_sequences=False, activation='softmax'), merge_mode='sum'))
model.add(layers.Dense(4, activation='softmax'))
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',  # integer-encoded labels
metrics=['accuracy'])
model.summary()
history = model.fit(X_train, y_train,
epochs=50,
validation_data=(X_test, y_test),
batch_size=16)
loss, accuracy = model.evaluate(X_train, y_train, verbose=False)
print("Training Accuracy: {:.4f}".format(accuracy))
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print("Testing Accuracy: {:.4f}".format(accuracy))
###Output
Train on 3613 samples, validate on 3142 samples
Epoch 1/50
3613/3613 [==============================] - 23s 6ms/step - loss: 1.3282 - acc: 0.3692 - val_loss: 1.2467 - val_acc: 0.5076
Epoch 2/50
3613/3613 [==============================] - 13s 4ms/step - loss: 0.9693 - acc: 0.6477 - val_loss: 0.7732 - val_acc: 0.7772
Epoch 3/50
3613/3613 [==============================] - 13s 4ms/step - loss: 0.5440 - acc: 0.8356 - val_loss: 0.5291 - val_acc: 0.8180
Epoch 4/50
3613/3613 [==============================] - 14s 4ms/step - loss: 0.3302 - acc: 0.9031 - val_loss: 0.5059 - val_acc: 0.8132
Epoch 5/50
3613/3613 [==============================] - 14s 4ms/step - loss: 0.2283 - acc: 0.9361 - val_loss: 0.5163 - val_acc: 0.8148
Epoch 6/50
3613/3613 [==============================] - 14s 4ms/step - loss: 0.1829 - acc: 0.9471 - val_loss: 0.5648 - val_acc: 0.8202
Epoch 7/50
3613/3613 [==============================] - 14s 4ms/step - loss: 0.1657 - acc: 0.9499 - val_loss: 0.6046 - val_acc: 0.8097
Epoch 8/50
3613/3613 [==============================] - 14s 4ms/step - loss: 0.1488 - acc: 0.9568 - val_loss: 0.6442 - val_acc: 0.8062
Epoch 9/50
2480/3613 [===================>..........] - ETA: 3s - loss: 0.1469 - acc: 0.9560 |
_notebooks/2020-09-20-Israel_excess_mortality.ipynb | ###Markdown
Excess mortality in Israel in summer 2020 (Israel excess mortality summer 2020) > Dying with corona, or dying from corona?- toc: true - badges: true- comments: true- categories: [covid-19] As part of the public debate over the corona pandemic and the ways of coping with it, the claim has been raised that most of the deceased included in the statistics of deaths due to the disease did not die "from corona" but "with corona". They are "walking dead" who would have died of one cause or another even without the presence of the coronavirus in their bodies. This claim has various aspects: moral (is it permissible to run over the "walking dead"?), biological/medical, statistical, and more. The purpose of this post is to focus on one narrow (but important!) aspect and examine whether there is excess mortality in Israel in 2020, and in particular in the summer months of this year, since the presence of excess mortality may point to people dying "from corona" rather than "with corona". Excess mortality in 2020 has been demonstrated in several countries around the world (for example [here](https://www.economist.com/graphic-detail/2020/07/15/tracking-covid-19-excess-deaths-across-countries)). In Israel, a [report by the Central Bureau of Statistics (CBS)](https://www.cbs.gov.il/he/mediarelease/DocLib/2020/274/05_20_274b.pdf) recently made headlines, in which no significant increase in mortality was found. That report includes data only up to July 2020. In that month there was indeed a significant rise in morbidity and in the statistics of deaths "with corona", but this rise has continued and intensified since then and up to the time of writing (20.9.2020), so it is of interest to examine whether excess mortality can be seen in the newer data or not. Unfortunately, the most up-to-date data on the CBS website only reach mid-August, so the addition over the CBS report is relatively small. In the future the analysis can be updated with new data. I used the file "Deaths of Israeli residents, by week, sex, population group and age, 2020", which I downloaded from [here](https://www.cbs.gov.il/he/Pages/search/TableMaps.aspx?CbsSubject=%D7%AA%D7%9E%D7%95%D7%AA%D7%94%20%D7%95%D7%AA%D7%95%D7%97%D7%9C%D7%AA%20%D7%97%D7%99%D7%99%D7%9D). Additional files at other resolutions can be found there. It is worth noting that while looking at excess-mortality data is very important, it is not free of problems. In particular, it is exposed to other fluctuations in mortality, for example the drop in road-accident deaths [here, pp. 11-12](https://old.cbs.gov.il/www/hodaot2020n/19_20_297b.pdf), suicides due to loneliness and economic distress (I have not seen data, only prophecies of doom), and more. EDA
###Code
#collapse
import urllib
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
MORTALITY_FILE_URL = 'https://www.cbs.gov.il/he/publications/LochutTlushim/2020/%D7%A4%D7%98%D7%99%D7%A8%D7%95%D7%AA-2000-2020-%D7%9C%D7%A4%D7%99-%D7%A9%D7%91%D7%95%D7%A2.xlsx'
MORTALITY_FILE_LOCATION = "/home/adiell/data/israel_moratality_stats.xslx"
#collapse
## Run this to get the data from CBS website
# urllib.request.urlretrieve(MORTALITY_FILE_URL, MORTALITY_FILE_LOCATION)
#collapse
AGE_GROUPS = ["0-19", "20-29", "30-39", "40-49", "50-59", "60-69", "70-79","80+"]
COLUMNS_NAMES = ["Week", "Date",
"Total", "Males", "Females",
"Total - Jews", "Males - Jews", "Females - Jews",
"Total - Arabs", "Males - Arabs", "Females - Arabs",
"Total - 70+", "Arabs - 70+", "Jews - 70+",
"Males - 70+", "Males - Arabs - 70+", "Males - Jews - 70+",
"Females - 70+", "Females - Arabs - 70+", "Females - Jews - 70+" ] + \
["Males - " + age for age in AGE_GROUPS] + ["Females - " + age for age in AGE_GROUPS]
#collapse
def read_sheet(year):
df = pd.read_excel(MORTALITY_FILE_LOCATION, sheet_name = str(year), skiprows=12)
df.columns = COLUMNS_NAMES
df['year'] = year
df['month'] = df['Date'].apply(lambda x: x.month)
return df
#collapse
mortality_raw_data = pd.concat([read_sheet(year) for year in range(2000, 2021)])
mortality_raw_data = mortality_raw_data.dropna(subset=["Total"]) ## Future dates have NA
###Output
_____no_output_____
###Markdown
We will focus on the total mortality figures rather than on a specific sector or cross-section. It would be interesting to repeat the analysis for additional cross-sections; if we find something interesting, it could help deepen the rifts in Israeli society.
###Code
#collapse
column_of_interest = "Total"
_ = mortality_raw_data.plot("Date", column_of_interest, figsize = (12,6))
###Output
_____no_output_____
###Markdown
The data shows annual seasonality and a general growth trend. We can try to adjust for the growth trend by dividing by the population size (not ideal, because what really matters is the older population, but that is what we have and it is easy to do :). This is also what the CBS does, as far as I can tell. In addition, the population data is at a different temporal resolution; nothing to be done about that. I took the population figures from [Wikipedia](https://he.wikipedia.org/wiki/%D7%93%D7%9E%D7%95%D7%92%D7%A8%D7%A4%D7%99%D7%94_%D7%A9%D7%9C_%D7%99%D7%A9%D7%A8%D7%90%D7%9C%D7%9C%D7%99%D7%93%D7%95%D7%AA_%D7%95%D7%A4%D7%98%D7%99%D7%A8%D7%95%D7%AA_%D7%9C%D7%A4%D7%99_%D7%A9%D7%) (for some reason I did not find similar figures on the CBS website). There are also, for some reason, no population figures for 2017, so I filled that in with the geometric mean of 2016 and 2018.
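To spell out the normalization used in the next cells (this just restates the computation below, not a new analysis): with $D_{w,y}$ the number of deaths in week $w$ of year $y$ and $P_y$ the population in thousands,
$$\tilde{D}_{w,y} = \frac{D_{w,y}}{P_y}, \qquad P_{2017} \approx \sqrt{P_{2016}\, P_{2018}} = \sqrt{8628 \cdot 8972} \approx 8798.$$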
###Code
#collapse
population = pd.DataFrame(
{
'year' : range(2000, 2021),
'population': [6369, 6508, 6631, 6748, 6869, 6991, 7116, 7244, 7337, 7552, 7695, 7837, 7984, 8134, 8297, 8463, 8628, (8628*8972)**0.5, 8972, 9021, 9190]
}
)
population
#collapse
mortality_raw_data = mortality_raw_data.merge(population)
normed_columns_of_interest = 'Norm. ' + column_of_interest
mortality_raw_data[normed_columns_of_interest] =\
mortality_raw_data[column_of_interest]/ mortality_raw_data['population']
#collapse
_ = mortality_raw_data.plot("Date",normed_columns_of_interest , figsize = (12,6))
###Output
_____no_output_____
###Markdown
There seems to be a downward trend in the normalized mortality over the years, but it looks weaker than the trend in the raw data.
###Code
#collapse
_ = mortality_raw_data.boxplot(column = normed_columns_of_interest, by='month', figsize = (12,6))
###Output
_____no_output_____
###Markdown
There is clear annual seasonality in mortality, as well as considerable variability between years; the variability is larger in winter. We see this both in the raw and in the normalized data. CBS model We will start with a CBS-style analysis: compare the actual mortality (per week) to the average of the last five years. More precisely, we take the average of the normalized data and multiply by the current population. In addition, because of the fairly large noise we take a 3-week moving average of the "model". We compute confidence bounds as 1.96 * standard deviation (the z-score corresponding to a 95% confidence interval).
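Written out (a restatement of what the `CBSModel` class in the next cell computes), with $\tilde{D}_{w,y}$ the population-normalized weekly deaths from above and $P_{2020}$ the current population:
$$\hat{\mu}_w = P_{2020}\cdot\frac{1}{5}\sum_{y=2015}^{2019}\tilde{D}_{w,y}, \qquad \hat{\sigma}_w = P_{2020}\cdot \operatorname{std}_{y\in\{2015,\dots,2019\}}\!\big(\tilde{D}_{w,y}\big)$$
$$\text{bounds}_w = \hat{\mu}_w \pm 1.96\,\hat{\sigma}_w$$
with both $\hat{\mu}_w$ and $\hat{\sigma}_w$ additionally smoothed by a centered 3-week moving average before the bounds are taken.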
###Code
#collapse
class CBSModel():
def __init__(self, metric, norm_factor = population['population'].values[-1]):
self._metric = metric
self._norm_factor = norm_factor
def fit(self, df):
mean = self._norm_factor *\
df\
.query('2015 <= year <= 2019')\
.groupby('Week')[self._metric].mean()
mean = mean.rolling(3, center=True, min_periods=1).mean()
std = self._norm_factor *\
df\
.query('2015 <= year <= 2019')\
.groupby('Week')[self._metric].std()
std = std.rolling(3, center=True, min_periods=1).mean()
self._model = pd.concat([mean, std], axis = 1)
self._model.columns = ['mean', 'std']
def predict(self, df, conf_level = 1.96):
return df.merge(self._model, left_on='Week', right_index=True).\
assign(
actual_mortality = lambda x: x[self._metric] * self._norm_factor,
predicted_mortality = lambda x: x['mean'],
upper_bound = lambda x: x['mean'] + (conf_level * x['std']),
lower_bound = lambda x: x['mean'] - (conf_level * x['std']),
)[['Date', 'year', 'Week', 'month', 'actual_mortality', 'predicted_mortality' ,'lower_bound', 'upper_bound']]
#collapse
cbs_model = CBSModel(normed_columns_of_interest)
pre_covid_data = mortality_raw_data.query('Date <= "2020-03-01"')
cbs_model.fit(pre_covid_data)
cbs_result = cbs_model.predict(mortality_raw_data.query('Date >= "2020-01-01"'))
#collapse
def plot_mortality_predition(result):
fig = plt.figure(figsize = (12,6))
plt.plot(result['Date'], result['actual_mortality'],'r', label = 'Actual mortality')
plt.plot(result['Date'], result['predicted_mortality'],'b', label = 'Predicted mortality')
plt.plot(result['Date'], result['upper_bound'],'b--')
plt.plot(result['Date'], result['lower_bound'],'b--')
_=plt.legend()
#collapse
plot_mortality_predition(cbs_result)
###Output
_____no_output_____
###Markdown
In the winter months (January and February) there is a mortality deficit, as was also noted in the CBS report, probably thanks to a relatively mild flu season this year. Starting from the beginning of July, mortality is above the expected level in every week, but only in two weeks does the rise exceed the 95% upper bound; one of them was already in the original CBS report and one is "new", from August. Let's also look at a moving average of the actual mortality, to smooth out the noise:
###Code
#collapse
mortality_raw_data['Mortality. mavg'] = mortality_raw_data[normed_columns_of_interest]\
.rolling(3, center=True, min_periods=1).mean()
cbs_model2 = CBSModel('Mortality. mavg')
pre_covid_data = mortality_raw_data.query('Date <= "2020-03-01"')
cbs_model2.fit(pre_covid_data)
cbs_result2 = cbs_model2.predict(mortality_raw_data.query('year == 2020'))
plot_mortality_predition(cbs_result2)
###Output
_____no_output_____
###Markdown
Here we can see that we have already been above the upper bound for a good few consecutive weeks. What is the estimate of the cumulative excess mortality? Let's look at it by month:
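In the table below, the weekly (then monthly-aggregated) excess mortality is simply the gap between the actual and the modelled values:
$$\text{excess}_w = D_w^{\mathrm{actual}} - \hat{\mu}_w, \qquad \text{relative excess}_w = \frac{D_w^{\mathrm{actual}} - \hat{\mu}_w}{\hat{\mu}_w}.$$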
###Code
#collapse
cbs_result.assign(
excess_mortality = lambda x: x.actual_mortality - x.predicted_mortality,
excess_mortality_percent = lambda x: (x.actual_mortality - x.predicted_mortality)/x.predicted_mortality
).groupby('month')\
.agg({'excess_mortality': 'sum',
'excess_mortality_percent': 'mean'})
###Output
_____no_output_____
###Markdown
So we see that in July there is excess mortality of about 175 deaths. In August too the excess mortality is not negligible: if we take into account that the August data is partial (3 weeks?), then in monthly terms it amounts to roughly 250 excess deaths, and about 425 in total for the summer months of July-August. In percentage terms this is close to 7% for August. If we keep up this pace (which unfortunately may even get worse), then on an annual mortality of about 45,000 we would reach an excess mortality of about 3,100 people. So what next? Computing averages and standard deviations is all well and good, but it is just statistics. As self-respecting data scientists, shouldn't we use a more serious Machine Learning model? And when one says Machine Learning one of course means Deep Learning, but since space is short, we will start by trying [prophet](https://facebook.github.io/prophet/) as a time-series model for predicting mortality instead of an average. But first of all, let's stop with the Hebrew that does not render well and start writing in English. Prophet Let's now use [prophet](https://facebook.github.io/prophet/) to estimate the expected mortality. Using prophet has several benefits. It's easy to run out of the box. In addition, it's supposed to take into account seasonality and also has a built-in mechanism to estimate the trend. Therefore we'll work with the raw data without normalizing by population.
###Code
#collapse
from fbprophet import Prophet
prophet_df = mortality_raw_data[["Date",column_of_interest]].copy().rename(columns = {'Date':'ds', column_of_interest:"y"})
pre_corona_data = prophet_df.query('ds < "2020-03-01"')
#collapse
#hide_output
prophet = Prophet()
prophet.fit(pre_corona_data)
forecast = prophet.predict(prophet_df)
forecast = forecast.merge(prophet_df)
prophet.plot(forecast, xlabel='Date', ylabel='Mortality', figsize = (12,6));
###Output
_____no_output_____
###Markdown
Looks like prophet captures the time series, trend and seasonality reasonably well. The large noise during some of the winter periods is evident.
###Code
#collapse
forecast = forecast.rename(columns = {
'y':'actual_mortality', 'yhat':'predicted_mortality' ,'yhat_lower':'lower_bound', 'yhat_upper':'upper_bound', 'ds':'Date'
})
plot_mortality_predition(forecast.query('Date >= "2020-01-01"'))
###Output
_____no_output_____
###Markdown
In general this seems very similar to the "CBS" model. Mortality is above expectation since the beginning of July and above the upper confidence bound in two weeks. It's important to note that it seems it is not especially rare to get several consecutive weeks above the upper confidence bound in the winter seasons. So, I would say that the data shows that covid isn't a "light flu", maybe a "flu with public relations" but not a light one. Let's hope it stays this way and mortality doesn't increase, though it doesn't look promising right now :( Absolute excess mortality by month using prophet:
###Code
#collapse
forecast['month'] = forecast['Date'].apply(lambda x: x.month)
forecast.query('Date >= "2020-01-01"').assign(
excess_mortality = lambda x: x.actual_mortality - x.predicted_mortality,
excess_mortality_percent = lambda x: (x.actual_mortality - x.predicted_mortality)/x.predicted_mortality
).groupby('month')\
.agg({'excess_mortality': 'sum',
'excess_mortality_percent': 'mean'})
###Output
_____no_output_____ |
how-to-use-azureml/work-with-data/datasets-tutorial/train-with-datasets/train-with-datasets.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Train with Azure Machine Learning datasetsDatasets are categorized into TabularDataset and FileDataset based on how users consume them in training. * A TabularDataset represents data in a tabular format by parsing the provided file or list of files. TabularDataset can be created from csv, tsv, parquet files, SQL query results etc. For the complete list, please visit our [documentation](https://aka.ms/tabulardataset-api-reference). It provides you with the ability to materialize the data into a pandas DataFrame.* A FileDataset references single or multiple files in your datastores or public urls. This provides you with the ability to download or mount the files to your compute. The files can be of any format, which enables a wider range of machine learning scenarios including deep learning.In this tutorial, you will learn how to train with Azure Machine Learning datasets:&x2611; Use datasets directly in your training script&x2611; Use datasets to mount files to a remote compute PrerequisitesIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first if you haven't already established your connection to the AzureML Workspace.
###Code
# Check core SDK version number
import azureml.core
print('SDK version:', azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
Create Experiment**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
###Code
experiment_name = 'train-with-datasets'
from azureml.core import Experiment
exp = Experiment(workspace=ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create or Attach existing compute resourceBy using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
import os
# choose a name for your cluster
compute_name = os.environ.get('AML_COMPUTE_CLUSTER_NAME', 'cpu-cluster')
compute_min_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MIN_NODES', 0)
compute_max_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MAX_NODES', 4)
# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
vm_size = os.environ.get('AML_COMPUTE_CLUSTER_SKU', 'STANDARD_D2_V2')
if compute_name in ws.compute_targets:
compute_target = ws.compute_targets[compute_name]
if compute_target and type(compute_target) is AmlCompute:
print('found compute target. just use it. ' + compute_name)
else:
print('creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
min_nodes=compute_min_nodes,
max_nodes=compute_max_nodes)
# create the cluster
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it will use the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
You now have the necessary packages and compute resources to train a model in the cloud. Use datasets directly in training Create a TabularDatasetBy creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. Every workspace comes with a default [datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data) (and you can register more) which is backed by the Azure blob storage account associated with the workspace. We can use it to transfer data from local to the cloud, and create dataset from it. We will now upload the [Iris data](./train-dataset/Iris.csv) to the default datastore (blob) within your workspace.
###Code
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./train-dataset/iris.csv'],
target_path = 'train-dataset/tabular/',
overwrite = True,
show_progress = True)
###Output
_____no_output_____
###Markdown
Then we will create an unregistered TabularDataset pointing to the path in the datastore. You can also create a dataset from multiple paths. [learn more](https://aka.ms/azureml/howto/createdatasets) [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a Pandas or Spark DataFrame. You can create a TabularDataset object from .csv, .tsv, and parquet files, and from SQL query results. For a complete list, see [TabularDatasetFactory](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py) class.
###Code
from azureml.core import Dataset
dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, 'train-dataset/tabular/iris.csv')])
# preview the first 3 rows of the dataset
dataset.take(3).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_iris.py` in the script_folder.
###Code
script_folder = os.path.join(os.getcwd(), 'train-dataset')
%%writefile $script_folder/train_iris.py
import os
from azureml.core import Dataset, Run
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.externals import joblib
run = Run.get_context()
# get input dataset by name
dataset = run.input_datasets['iris']
df = dataset.to_pandas_dataframe()
x_col = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
y_col = ['species']
x_df = df.loc[:, x_col]
y_df = df.loc[:, y_col]
#dividing X,y into train and test data
x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=223)
data = {'train': {'X': x_train, 'y': y_train},
'test': {'X': x_test, 'y': y_test}}
clf = DecisionTreeClassifier().fit(data['train']['X'], data['train']['y'])
model_file_name = 'decision_tree.pkl'
print('Accuracy of Decision Tree classifier on training set: {:.2f}'.format(clf.score(x_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'.format(clf.score(x_test, y_test)))
os.makedirs('./outputs', exist_ok=True)
with open(model_file_name, 'wb') as file:
joblib.dump(value=clf, filename='outputs/' + model_file_name)
###Output
_____no_output_____
###Markdown
Configure and use datasets as the input to Estimator An estimator is a configuration object you submit to Azure Machine Learning to instruct how to set up the remote environment. Azure Machine Learning has pre-configured estimators for common machine learning frameworks, as well as generic Estimator. Create a SKLearn estimator by specifying:* The name of the estimator object, `est`* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. * The training script name, train_iris.py* The input dataset for training. `as_named_input()` is required so that the input dataset can be referenced by the assigned name in your training script. * The compute target. In this case you will use the AmlCompute you created* The environment definition for the experiment
###Code
from azureml.train.sklearn import SKLearn
est = SKLearn(source_directory=script_folder,
entry_script='train_iris.py',
# pass dataset object as an input with name 'iris'
inputs=[dataset.as_named_input('iris')],
pip_packages=['azureml-dataprep[fuse]'],
compute_target=compute_target)
###Output
_____no_output_____
###Markdown
Submit job to runSubmit the estimator to the Azure ML experiment to kick off the execution.
###Code
run = exp.submit(est)
from azureml.widgets import RunDetails
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Use datasets to mount files to a remote computeYou can use the `Dataset` object to mount or download files referenced by it. When you mount a file system, you attach that file system to a directory (mount point) and make it available to the system. Because mounting loads files at the time of processing, it is usually faster than downloading. Note: mounting is only available for Linux-based compute (DSVM/VM, AMLCompute, HDInsights). Upload data files into datastoreWe will first load diabetes data from `scikit-learn` to the train-dataset folder.
###Code
from sklearn.datasets import load_diabetes
import numpy as np
training_data = load_diabetes()
np.save(file='train-dataset/features.npy', arr=training_data['data'])
np.save(file='train-dataset/labels.npy', arr=training_data['target'])
###Output
_____no_output_____
###Markdown
Now let's upload the 2 files into the default datastore under a path named `diabetes`:
###Code
datastore.upload_files(['train-dataset/features.npy', 'train-dataset/labels.npy'], target_path='diabetes', overwrite=True)
###Output
_____no_output_____
###Markdown
Create a FileDataset[FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.file_dataset.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public URLs. Using this method, you can download or mount the files to your compute as a FileDataset object. The files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
###Code
from azureml.core import Dataset
dataset = Dataset.File.from_files(path = [(datastore, 'diabetes/')])
# see a list of files referenced by dataset
dataset.to_path()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_diabetes.py` in the script_folder.
###Code
%%writefile $script_folder/train_diabetes.py
import os
import glob
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from azureml.core.run import Run
from sklearn.externals import joblib
import numpy as np
os.makedirs('./outputs', exist_ok=True)
run = Run.get_context()
base_path = run.input_datasets['diabetes']
X = np.load(glob.glob(os.path.join(base_path, '**/features.npy'), recursive=True)[0])
y = np.load(glob.glob(os.path.join(base_path, '**/labels.npy'), recursive=True)[0])
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
data = {'train': {'X': X_train, 'y': y_train},
'test': {'X': X_test, 'y': y_test}}
# list of numbers from 0.0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
for alpha in alphas:
# use Ridge algorithm to create a regression model
reg = Ridge(alpha=alpha)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
mse = mean_squared_error(preds, data['test']['y'])
run.log('alpha', alpha)
run.log('mse', mse)
model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)
with open(model_file_name, 'wb') as file:
joblib.dump(value=reg, filename='outputs/' + model_file_name)
print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))
###Output
_____no_output_____
###Markdown
Configure & Run You can ask the system to build a conda environment based on your dependency specification. Once the environment is built, and if you don't change your dependencies, it will be reused in subsequent runs.
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
conda_env = Environment('conda-env')
conda_env.python.conda_dependencies = CondaDependencies.create(pip_packages=['azureml-sdk',
'azureml-dataprep[pandas,fuse]',
'scikit-learn'])
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
script='train_diabetes.py',
# to mount the dataset on the remote compute and pass the mounted path as an argument to the training script
arguments =[dataset.as_named_input('diabetes').as_mount()])
src.run_config.framework = 'python'
src.run_config.environment = conda_env
src.run_config.target = compute_target.name
run = exp.submit(config=src)
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Display run resultsYou now have a model trained on a remote cluster. Retrieve all the metrics logged during the run, including the accuracy of the model:
###Code
print(run.get_metrics())
metrics = run.get_metrics()
###Output
_____no_output_____
###Markdown
Register datasetsUse the register() method to register datasets to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
###Code
dataset = dataset.register(workspace = ws,
name = 'diabetes dataset',
description='training dataset',
create_new_version=True)
###Output
_____no_output_____
###Markdown
Register models with datasetsThe last step in the training script wrote the model files in a directory named `outputs` in the VM of the cluster where the job is executed. `outputs` is a special directory in that all content in this directory is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace.You can register models with datasets for reproducibility and auditing purpose.
###Code
# find the index where MSE is the smallest
indices = list(range(0, len(metrics['mse'])))
min_mse_index = min(indices, key=lambda x: metrics['mse'][x])
print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(
metrics['mse'][min_mse_index],
metrics['alpha'][min_mse_index]
))
# find the best model
best_alpha = metrics['alpha'][min_mse_index]
model_file_name = 'ridge_{0:.2f}.pkl'.format(best_alpha)
# register the best model with the input dataset
model = run.register_model(model_name='sklearn_diabetes', model_path=os.path.join('outputs', model_file_name),
datasets =[('training data',dataset)])
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Train with Azure Machine Learning datasetsDatasets are categorized into TabularDataset and FileDataset based on how users consume them in training. * A TabularDataset represents data in a tabular format by parsing the provided file or list of files. TabularDataset can be created from csv, tsv, parquet files, SQL query results etc. For the complete list, please visit our [documentation](https://aka.ms/tabulardataset-api-reference). It provides you with the ability to materialize the data into a pandas DataFrame.* A FileDataset references single or multiple files in your datastores or public urls. This provides you with the ability to download or mount the files to your compute. The files can be of any format, which enables a wider range of machine learning scenarios including deep learning.In this tutorial, you will learn how to train with Azure Machine Learning datasets:&x2611; Use datasets directly in your training script&x2611; Use datasets to mount files to a remote compute PrerequisitesIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first if you haven't already established your connection to the AzureML Workspace.
###Code
# Check core SDK version number
import azureml.core
print('SDK version:', azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
Create Experiment**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
###Code
experiment_name = 'train-with-datasets'
from azureml.core import Experiment
exp = Experiment(workspace=ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create or Attach existing compute resourceBy using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
import os
# choose a name for your cluster
compute_name = os.environ.get('AML_COMPUTE_CLUSTER_NAME', 'cpu-cluster')
compute_min_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MIN_NODES', 0)
compute_max_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MAX_NODES', 4)
# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
vm_size = os.environ.get('AML_COMPUTE_CLUSTER_SKU', 'STANDARD_D2_V2')
if compute_name in ws.compute_targets:
compute_target = ws.compute_targets[compute_name]
if compute_target and type(compute_target) is AmlCompute:
print('found compute target. just use it. ' + compute_name)
else:
print('creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
min_nodes=compute_min_nodes,
max_nodes=compute_max_nodes)
# create the cluster
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it will use the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
You now have the necessary packages and compute resources to train a model in the cloud. Use datasets directly in training Create a TabularDatasetBy creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. Every workspace comes with a default [datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data) (and you can register more) which is backed by the Azure blob storage account associated with the workspace. We can use it to transfer data from local to the cloud, and create dataset from it. We will now upload the [Iris data](./train-dataset/Iris.csv) to the default datastore (blob) within your workspace.
###Code
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./train-dataset/iris.csv'],
target_path = 'train-dataset/tabular/',
overwrite = True,
show_progress = True)
###Output
_____no_output_____
###Markdown
Then we will create an unregistered TabularDataset pointing to the path in the datastore. You can also create a dataset from multiple paths. [learn more](https://aka.ms/azureml/howto/createdatasets) [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a Pandas or Spark DataFrame. You can create a TabularDataset object from .csv, .tsv, and parquet files, and from SQL query results. For a complete list, see [TabularDatasetFactory](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py) class.
###Code
from azureml.core import Dataset
dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, 'train-dataset/tabular/iris.csv')])
# preview the first 3 rows of the dataset
dataset.take(3).to_pandas_dataframe()
###Output
_____no_output_____
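As noted above, the `path` argument of `from_delimited_files` can also reference several locations or wildcard patterns in a single call, producing one TabularDataset. A small hypothetical sketch (the `other-data` folder is made up for illustration and is not part of this tutorial):

```python
# Hypothetical: one TabularDataset built from several datastore paths / glob patterns.
multi_dataset = Dataset.Tabular.from_delimited_files(path=[
    (datastore, 'train-dataset/tabular/iris.csv'),  # a single file
    (datastore, 'other-data/*.csv')                 # every csv under a (made-up) folder
])
```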
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_iris.py` in the script_folder.
###Code
script_folder = os.path.join(os.getcwd(), 'train-dataset')
%%writefile $script_folder/train_iris.py
import os
from azureml.core import Dataset, Run
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
# sklearn.externals.joblib is removed in 0.23
from sklearn import __version__ as sklearnver
from packaging.version import Version
if Version(sklearnver) < Version("0.23.0"):
from sklearn.externals import joblib
else:
import joblib
run = Run.get_context()
# get input dataset by name
dataset = run.input_datasets['iris']
df = dataset.to_pandas_dataframe()
x_col = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
y_col = ['species']
x_df = df.loc[:, x_col]
y_df = df.loc[:, y_col]
#dividing X,y into train and test data
x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=223)
data = {'train': {'X': x_train, 'y': y_train},
'test': {'X': x_test, 'y': y_test}}
clf = DecisionTreeClassifier().fit(data['train']['X'], data['train']['y'])
model_file_name = 'decision_tree.pkl'
print('Accuracy of Decision Tree classifier on training set: {:.2f}'.format(clf.score(x_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'.format(clf.score(x_test, y_test)))
os.makedirs('./outputs', exist_ok=True)
with open(model_file_name, 'wb') as file:
joblib.dump(value=clf, filename='outputs/' + model_file_name)
###Output
_____no_output_____
###Markdown
Create an environmentDefine a conda environment YAML file with your training script dependencies and create an Azure ML environment.
###Code
%%writefile conda_dependencies.yml
dependencies:
- python=3.6.2
- scikit-learn
- pip:
- azureml-defaults
- packaging
from azureml.core import Environment
sklearn_env = Environment.from_conda_specification(name = 'sklearn-env', file_path = './conda_dependencies.yml')
###Output
_____no_output_____
###Markdown
Configure training run A ScriptRunConfig object specifies the configuration details of your training job, including your training script, environment to use, and the compute target to run on. Specify the following in your script run configuration:* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. * The training script name, train_iris.py* The input dataset for training, passed as an argument to your training script. `as_named_input()` is required so that the input dataset can be referenced by the assigned name in your training script. * The compute target. In this case you will use the AmlCompute you created* The environment definition for the experiment
###Code
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
script='train_iris.py',
arguments=[dataset.as_named_input('iris')],
compute_target=compute_target,
environment=sklearn_env)
###Output
_____no_output_____
###Markdown
Submit job to runSubmit the ScriptRunConfig to the Azure ML experiment to kick off the execution.
###Code
run = exp.submit(src)
from azureml.widgets import RunDetails
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Use datasets to mount files to a remote computeYou can use the `Dataset` object to mount or download files referenced by it. When you mount a file system, you attach that file system to a directory (mount point) and make it available to the system. Because mounting loads files at the time of processing, it is usually faster than downloading. Note: mounting is only available for Linux-based compute (DSVM/VM, AMLCompute, HDInsights). Upload data files into datastoreWe will first load diabetes data from `scikit-learn` to the train-dataset folder.
###Code
from sklearn.datasets import load_diabetes
import numpy as np
training_data = load_diabetes()
np.save(file='train-dataset/features.npy', arr=training_data['data'])
np.save(file='train-dataset/labels.npy', arr=training_data['target'])
###Output
_____no_output_____
###Markdown
Now let's upload the 2 files into the default datastore under a path named `diabetes`:
###Code
datastore.upload_files(['train-dataset/features.npy', 'train-dataset/labels.npy'], target_path='diabetes', overwrite=True)
###Output
_____no_output_____
###Markdown
Create a FileDataset[FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.file_dataset.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public URLs. Using this method, you can download or mount the files to your compute as a FileDataset object. The files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
###Code
from azureml.core import Dataset
dataset = Dataset.File.from_files(path = [(datastore, 'diabetes/')])
# see a list of files referenced by dataset
dataset.to_path()
###Output
_____no_output_____
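Outside of a submitted run, a FileDataset like the one just created can also be mounted or downloaded directly, which is handy for a quick local look at the files. A minimal sketch, assuming a Linux environment with `azureml-dataprep[fuse]` installed:

```python
# Sketch: mount the FileDataset locally, peek at one file, then unmount.
import os
import numpy as np

mount_context = dataset.mount()   # prepare a mount context for the referenced files
mount_context.start()             # mount (Linux + FUSE only)
print(os.listdir(mount_context.mount_point))
features = np.load(os.path.join(mount_context.mount_point, 'features.npy'))
print(features.shape)
mount_context.stop()              # unmount when done

# Or materialize the files on local disk instead of mounting:
# dataset.download(target_path='./diabetes-local', overwrite=True)
```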
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_diabetes.py` in the script_folder.
###Code
%%writefile $script_folder/train_diabetes.py
import os
import glob
import argparse
from azureml.core.run import Run
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
# sklearn.externals.joblib is removed in 0.23
from sklearn import __version__ as sklearnver
from packaging.version import Version
if Version(sklearnver) < Version("0.23.0"):
from sklearn.externals import joblib
else:
import joblib
import numpy as np
parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str, help='training dataset')
args = parser.parse_args()
os.makedirs('./outputs', exist_ok=True)
base_path = args.data_folder
run = Run.get_context()
X = np.load(glob.glob(os.path.join(base_path, '**/features.npy'), recursive=True)[0])
y = np.load(glob.glob(os.path.join(base_path, '**/labels.npy'), recursive=True)[0])
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
data = {'train': {'X': X_train, 'y': y_train},
'test': {'X': X_test, 'y': y_test}}
# list of numbers from 0.0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
for alpha in alphas:
# use Ridge algorithm to create a regression model
reg = Ridge(alpha=alpha)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
mse = mean_squared_error(preds, data['test']['y'])
run.log('alpha', alpha)
run.log('mse', mse)
model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)
with open(model_file_name, 'wb') as file:
joblib.dump(value=reg, filename='outputs/' + model_file_name)
print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))
###Output
_____no_output_____
###Markdown
Configure & Run Now configure your run. We will reuse the same `sklearn_env` environment from the previous run. Once the environment is built, and if you don't change your dependencies, it will be reused in subsequent runs. We will pass in the DatasetConsumptionConfig of our FileDataset to the `'--data-folder'` argument of the script. Azure ML will resolve this to mount point of the data on the compute target, which we parse in the training script.
###Code
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
script='train_diabetes.py',
# to mount the dataset on the remote compute and pass the mounted path as an argument to the training script
arguments =['--data-folder', dataset.as_mount()],
compute_target=compute_target,
environment=sklearn_env)
run = exp.submit(config=src)
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Display run resultsYou now have a model trained on a remote cluster. Retrieve all the metrics logged during the run, including the accuracy of the model:
###Code
run.wait_for_completion()
metrics = run.get_metrics()
print(metrics)
###Output
_____no_output_____
###Markdown
Register datasetsUse the register() method to register datasets to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
###Code
dataset = dataset.register(workspace = ws,
name = 'diabetes dataset',
description='training dataset',
create_new_version=True)
###Output
_____no_output_____
###Markdown
Register models with datasetsThe last step in the training script wrote the model files in a directory named `outputs` in the VM of the cluster where the job is executed. `outputs` is a special directory in that all content in this directory is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace.You can register models with datasets for reproducibility and auditing purpose.
###Code
# find the index where MSE is the smallest
indices = list(range(0, len(metrics['mse'])))
min_mse_index = min(indices, key=lambda x: metrics['mse'][x])
print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(
metrics['mse'][min_mse_index],
metrics['alpha'][min_mse_index]
))
# find the best model
best_alpha = metrics['alpha'][min_mse_index]
model_file_name = 'ridge_{0:.2f}.pkl'.format(best_alpha)
# register the best model with the input dataset
model = run.register_model(model_name='sklearn_diabetes', model_path=os.path.join('outputs', model_file_name),
datasets =[('training data',dataset)])
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Train with Azure Machine Learning datasetsDatasets are categorized into TabularDataset and FileDataset based on how users consume them in training. * A TabularDataset represents data in a tabular format by parsing the provided file or list of files. TabularDataset can be created from csv, tsv, parquet files, SQL query results etc. For the complete list, please visit our [documentation](https://aka.ms/tabulardataset-api-reference). It provides you with the ability to materialize the data into a pandas DataFrame.* A FileDataset references single or multiple files in your datastores or public urls. This provides you with the ability to download or mount the files to your compute. The files can be of any format, which enables a wider range of machine learning scenarios including deep learning.In this tutorial, you will learn how to train with Azure Machine Learning datasets:&x2611; Use datasets directly in your training script&x2611; Use datasets to mount files to a remote compute PrerequisitesIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first if you haven't already established your connection to the AzureML Workspace.
###Code
# Check core SDK version number
import azureml.core
print('SDK version:', azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
Create Experiment**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
###Code
experiment_name = 'train-with-datasets'
from azureml.core import Experiment
exp = Experiment(workspace=ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create or Attach existing compute resourceBy using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
import os
# choose a name for your cluster
compute_name = os.environ.get('AML_COMPUTE_CLUSTER_NAME', 'cpu-cluster')
compute_min_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MIN_NODES', 0)
compute_max_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MAX_NODES', 4)
# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
vm_size = os.environ.get('AML_COMPUTE_CLUSTER_SKU', 'STANDARD_D2_V2')
if compute_name in ws.compute_targets:
compute_target = ws.compute_targets[compute_name]
if compute_target and type(compute_target) is AmlCompute:
print('found compute target. just use it. ' + compute_name)
else:
print('creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
min_nodes=compute_min_nodes,
max_nodes=compute_max_nodes)
# create the cluster
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it will use the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
You now have the necessary packages and compute resources to train a model in the cloud. Use datasets directly in training Create a TabularDatasetBy creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. Every workspace comes with a default [datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data) (and you can register more) which is backed by the Azure blob storage account associated with the workspace. We can use it to transfer data from local to the cloud, and create dataset from it. We will now upload the [Iris data](./train-dataset/Iris.csv) to the default datastore (blob) within your workspace.
###Code
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./train-dataset/iris.csv'],
target_path = 'train-dataset/tabular/',
overwrite = True,
show_progress = True)
###Output
_____no_output_____
###Markdown
Then we will create an unregistered TabularDataset pointing to the path in the datastore. You can also create a dataset from multiple paths. [learn more](https://aka.ms/azureml/howto/createdatasets) [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a Pandas or Spark DataFrame. You can create a TabularDataset object from .csv, .tsv, and parquet files, and from SQL query results. For a complete list, see [TabularDatasetFactory](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py) class.
###Code
from azureml.core import Dataset
dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, 'train-dataset/tabular/iris.csv')])
# preview the first 3 rows of the dataset
dataset.take(3).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_iris.py` in the script_folder.
###Code
script_folder = os.path.join(os.getcwd(), 'train-dataset')
%%writefile $script_folder/train_iris.py
import os
from azureml.core import Dataset, Run
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.externals import joblib
run = Run.get_context()
# get input dataset by name
dataset = run.input_datasets['iris']
df = dataset.to_pandas_dataframe()
x_col = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
y_col = ['species']
x_df = df.loc[:, x_col]
y_df = df.loc[:, y_col]
#dividing X,y into train and test data
x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=223)
data = {'train': {'X': x_train, 'y': y_train},
'test': {'X': x_test, 'y': y_test}}
clf = DecisionTreeClassifier().fit(data['train']['X'], data['train']['y'])
model_file_name = 'decision_tree.pkl'
print('Accuracy of Decision Tree classifier on training set: {:.2f}'.format(clf.score(x_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'.format(clf.score(x_test, y_test)))
os.makedirs('./outputs', exist_ok=True)
with open(model_file_name, 'wb') as file:
joblib.dump(value=clf, filename='outputs/' + model_file_name)
###Output
_____no_output_____
###Markdown
Configure and use datasets as the input to Estimator An estimator is a configuration object you submit to Azure Machine Learning to instruct how to set up the remote environment. Azure Machine Learning has pre-configured estimators for common machine learning frameworks, as well as generic Estimator. Create a SKLearn estimator by specifying:* The name of the estimator object, `est`* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. * The training script name, train_iris.py* The input dataset for training. `as_named_input()` is required so that the input dataset can be referenced by the assigned name in your training script. * The compute target. In this case you will use the AmlCompute you created* The environment definition for the experiment
###Code
from azureml.train.sklearn import SKLearn
est = SKLearn(source_directory=script_folder,
entry_script='train_iris.py',
# pass dataset object as an input with name 'iris'
inputs=[dataset.as_named_input('iris')],
pip_packages=['azureml-dataprep[fuse]'],
compute_target=compute_target)
###Output
_____no_output_____
###Markdown
Submit job to runSubmit the estimator to the Azure ML experiment to kick off the execution.
###Code
run = exp.submit(est)
from azureml.widgets import RunDetails
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Use datasets to mount files to a remote computeYou can use the `Dataset` object to mount or download files referenced by it. When you mount a file system, you attach that file system to a directory (mount point) and make it available to the system. Because mounting loads files at the time of processing, it is usually faster than downloading. Note: mounting is only available for Linux-based compute (DSVM/VM, AMLCompute, HDInsights). Upload data files into datastoreWe will first load diabetes data from `scikit-learn` to the train-dataset folder.
###Code
from sklearn.datasets import load_diabetes
import numpy as np
training_data = load_diabetes()
np.save(file='train-dataset/features.npy', arr=training_data['data'])
np.save(file='train-dataset/labels.npy', arr=training_data['target'])
###Output
_____no_output_____
###Markdown
Now let's upload the 2 files into the default datastore under a path named `diabetes`:
###Code
datastore.upload_files(['train-dataset/features.npy', 'train-dataset/labels.npy'], target_path='diabetes', overwrite=True)
###Output
_____no_output_____
###Markdown
Create a FileDataset[FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.file_dataset.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public URLs. Using this method, you can download or mount the files to your compute as a FileDataset object. The files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
###Code
from azureml.core import Dataset
dataset = Dataset.File.from_files(path = [(datastore, 'diabetes/')])
# see a list of files referenced by dataset
dataset.to_path()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_diabetes.py` in the script_folder.
###Code
%%writefile $script_folder/train_diabetes.py
import os
import glob
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from azureml.core.run import Run
from sklearn.externals import joblib
import numpy as np
os.makedirs('./outputs', exist_ok=True)
run = Run.get_context()
base_path = run.input_datasets['diabetes']
X = np.load(glob.glob(os.path.join(base_path, '**/features.npy'), recursive=True)[0])
y = np.load(glob.glob(os.path.join(base_path, '**/labels.npy'), recursive=True)[0])
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
data = {'train': {'X': X_train, 'y': y_train},
'test': {'X': X_test, 'y': y_test}}
# list of numbers from 0.0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
for alpha in alphas:
# use Ridge algorithm to create a regression model
reg = Ridge(alpha=alpha)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
mse = mean_squared_error(preds, data['test']['y'])
run.log('alpha', alpha)
run.log('mse', mse)
model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)
with open(model_file_name, 'wb') as file:
joblib.dump(value=reg, filename='outputs/' + model_file_name)
print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))
###Output
_____no_output_____
###Markdown
Configure & Run You can ask the system to build a conda environment based on your dependency specification. Once the environment is built, and if you don't change your dependencies, it will be reused in subsequent runs.
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
conda_env = Environment('conda-env')
conda_env.python.conda_dependencies = CondaDependencies.create(pip_packages=['azureml-sdk',
'azureml-dataprep[pandas,fuse]',
'scikit-learn'])
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
script='train_diabetes.py',
# to mount the dataset on the remote compute and pass the mounted path as an argument to the training script
arguments =[dataset.as_named_input('diabetes').as_mount()])
src.run_config.framework = 'python'
src.run_config.environment = conda_env
src.run_config.target = compute_target.name
run = exp.submit(config=src)
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Display run resultsYou now have a model trained on a remote cluster. Retrieve all the metrics logged during the run, including the accuracy of the model:
###Code
run.wait_for_completion()
metrics = run.get_metrics()
print(metrics)
###Output
_____no_output_____
###Markdown
Register datasetsUse the register() method to register datasets to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
###Code
dataset = dataset.register(workspace = ws,
name = 'diabetes dataset',
description='training dataset',
create_new_version=True)
###Output
_____no_output_____
###Markdown
Register models with datasetsThe last step in the training script wrote the model files in a directory named `outputs` in the VM of the cluster where the job is executed. `outputs` is a special directory in that all content in this directory is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace.You can register models with datasets for reproducibility and auditing purpose.
###Code
# find the index where MSE is the smallest
indices = list(range(0, len(metrics['mse'])))
min_mse_index = min(indices, key=lambda x: metrics['mse'][x])
print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(
metrics['mse'][min_mse_index],
metrics['alpha'][min_mse_index]
))
# find the best model
best_alpha = metrics['alpha'][min_mse_index]
model_file_name = 'ridge_{0:.2f}.pkl'.format(best_alpha)
# register the best model with the input dataset
model = run.register_model(model_name='sklearn_diabetes', model_path=os.path.join('outputs', model_file_name),
datasets =[('training data',dataset)])
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Train with Azure Machine Learning datasetsDatasets are categorized into TabularDataset and FileDataset based on how users consume them in training. * A TabularDataset represents data in a tabular format by parsing the provided file or list of files. TabularDataset can be created from csv, tsv, parquet files, SQL query results etc. For the complete list, please visit our [documentation](https://aka.ms/tabulardataset-api-reference). It provides you with the ability to materialize the data into a pandas DataFrame.* A FileDataset references single or multiple files in your datastores or public urls. This provides you with the ability to download or mount the files to your compute. The files can be of any format, which enables a wider range of machine learning scenarios including deep learning.In this tutorial, you will learn how to train with Azure Machine Learning datasets:&x2611; Use datasets directly in your training script&x2611; Use datasets to mount files to a remote compute PrerequisitesIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first if you haven't already established your connection to the AzureML Workspace.
###Code
# Check core SDK version number
import azureml.core
print('SDK version:', azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
Create Experiment**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
###Code
experiment_name = 'train-with-datasets'
from azureml.core import Experiment
exp = Experiment(workspace=ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create or Attach existing compute resourceBy using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
import os
# choose a name for your cluster
compute_name = os.environ.get('AML_COMPUTE_CLUSTER_NAME', 'cpu-cluster')
compute_min_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MIN_NODES', 0)
compute_max_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MAX_NODES', 4)
# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
vm_size = os.environ.get('AML_COMPUTE_CLUSTER_SKU', 'STANDARD_D2_V2')
if compute_name in ws.compute_targets:
compute_target = ws.compute_targets[compute_name]
if compute_target and type(compute_target) is AmlCompute:
print('found compute target. just use it. ' + compute_name)
else:
print('creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
min_nodes=compute_min_nodes,
max_nodes=compute_max_nodes)
# create the cluster
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it will use the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(compute_target.get_status().serialize())
###Output
_____no_output_____
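###Markdown
Because `compute_min_nodes` defaults to 0 here, the cluster scales down to zero nodes when idle, so it only incurs cost while jobs run. As a hedged housekeeping sketch, you can list the compute targets attached to the workspace and, once you are completely done with this notebook, remove the cluster with `delete()` (left commented out so it is not run by accident).
###Code
# list the compute targets currently attached to the workspace
for name, target in ws.compute_targets.items():
    print(name, target.type, target.provisioning_state)
# when the cluster is no longer needed, it can be removed:
# compute_target.delete()
###Output
_____no_output_____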
###Markdown
You now have the necessary packages and compute resources to train a model in the cloud. Use datasets directly in training Create a TabularDatasetBy creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. Every workspace comes with a default [datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data) (and you can register more) which is backed by the Azure blob storage account associated with the workspace. We can use it to transfer data from local to the cloud, and create dataset from it. We will now upload the [Iris data](./train-dataset/Iris.csv) to the default datastore (blob) within your workspace.
###Code
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./train-dataset/iris.csv'],
target_path = 'train-dataset/tabular/',
overwrite = True,
show_progress = True)
###Output
_____no_output_____
###Markdown
Then we will create an unregistered TabularDataset pointing to the path in the datastore. You can also create a dataset from multiple paths. [learn more](https://aka.ms/azureml/howto/createdatasets) [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a Pandas or Spark DataFrame. You can create a TabularDataset object from .csv, .tsv, and parquet files, and from SQL query results. For a complete list, see [TabularDatasetFactory](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py) class.
###Code
from azureml.core import Dataset
dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, 'train-dataset/tabular/iris.csv')])
# preview the first 3 rows of the dataset
dataset.take(3).to_pandas_dataframe()
###Output
_____no_output_____
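###Markdown
The cell above materializes a preview; you can also derive new TabularDatasets lazily. An illustrative sketch (the column names match those used by `train_iris.py` below): `keep_columns`/`drop_columns` select columns and `random_split` produces train/validation subsets, all without copying the underlying data.
###Code
# keep only the feature and label columns (a lazy transformation stored on the dataset)
subset_ds = dataset.keep_columns(['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'])
# split into two datasets, e.g. for a quick local train/validation check
train_ds, validation_ds = subset_ds.random_split(percentage=0.8, seed=223)
print(train_ds.to_pandas_dataframe().shape, validation_ds.to_pandas_dataframe().shape)
###Output
_____no_output_____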
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_iris.py` in the script_folder.
###Code
script_folder = os.path.join(os.getcwd(), 'train-dataset')
%%writefile $script_folder/train_iris.py
import os
from azureml.core import Dataset, Run
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
# sklearn.externals.joblib is removed in 0.23
from sklearn import __version__ as sklearnver
from packaging.version import Version
if Version(sklearnver) < Version("0.23.0"):
from sklearn.externals import joblib
else:
import joblib
run = Run.get_context()
# get input dataset by name
dataset = run.input_datasets['iris']
df = dataset.to_pandas_dataframe()
x_col = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
y_col = ['species']
x_df = df.loc[:, x_col]
y_df = df.loc[:, y_col]
#dividing X,y into train and test data
x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=223)
data = {'train': {'X': x_train, 'y': y_train},
'test': {'X': x_test, 'y': y_test}}
clf = DecisionTreeClassifier().fit(data['train']['X'], data['train']['y'])
model_file_name = 'decision_tree.pkl'
print('Accuracy of Decision Tree classifier on training set: {:.2f}'.format(clf.score(x_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'.format(clf.score(x_test, y_test)))
os.makedirs('./outputs', exist_ok=True)
with open(model_file_name, 'wb') as file:
joblib.dump(value=clf, filename='outputs/' + model_file_name)
###Output
_____no_output_____
###Markdown
Create an environmentDefine a conda environment YAML file with your training script dependencies and create an Azure ML environment.
###Code
%%writefile conda_dependencies.yml
dependencies:
- python=3.6.2
- scikit-learn
- pip:
- azureml-defaults
- packaging
from azureml.core import Environment
sklearn_env = Environment.from_conda_specification(name = 'sklearn-env', file_path = './conda_dependencies.yml')
###Output
_____no_output_____
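###Markdown
Optionally, the environment can be registered to the workspace so the same dependency set can be reused by name in later runs. A minimal sketch using `Environment.register` and `Environment.get` from azureml-core, assuming the `sklearn-env` definition above.
###Code
from azureml.core import Environment
# register the environment so it can be retrieved by name later
sklearn_env.register(workspace=ws)
# fetch it back without redefining the conda specification
restored_env = Environment.get(workspace=ws, name='sklearn-env')
print(restored_env.name)
###Output
_____no_output_____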
###Markdown
Configure training run A ScriptRunConfig object specifies the configuration details of your training job, including your training script, environment to use, and the compute target to run on. Specify the following in your script run configuration:* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. * The training script name, train_iris.py* The input dataset for training, passed as an argument to your training script. `as_named_input()` is required so that the input dataset can be referenced by the assigned name in your training script. * The compute target. In this case you will use the AmlCompute you created* The environment definition for the experiment
###Code
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
script='train_iris.py',
arguments=[dataset.as_named_input('iris')],
compute_target=compute_target,
environment=sklearn_env)
###Output
_____no_output_____
###Markdown
Submit job to runSubmit the ScriptRunConfig to the Azure ML experiment to kick off the execution.
###Code
run = exp.submit(src)
from azureml.widgets import RunDetails
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
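###Markdown
The widget streams status asynchronously. Outside a notebook (for example in a CI pipeline) you can block until the run reaches a terminal state instead; a small sketch using `wait_for_completion`, which also streams the driver log when `show_output=True`.
###Code
# block until the run finishes and print its final status
run.wait_for_completion(show_output=True)
print(run.get_status())
###Output
_____no_output_____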
###Markdown
Use datasets to mount files to a remote computeYou can use the `Dataset` object to mount or download files referred by it. When you mount a file system, you attach that file system to a directory (mount point) and make it available to the system. Because mounting loads files at the time of processing, it is usually faster than downloading. Note: mounting is only available for Linux-based compute (DSVM/VM, AMLCompute, HDInsights). Upload data files into datastoreWe will first load diabetes data from `scikit-learn` to the train-dataset folder.
###Code
from sklearn.datasets import load_diabetes
import numpy as np
training_data = load_diabetes()
np.save(file='train-dataset/features.npy', arr=training_data['data'])
np.save(file='train-dataset/labels.npy', arr=training_data['target'])
###Output
_____no_output_____
###Markdown
Now let's upload the 2 files into the default datastore under a path named `diabetes`:
###Code
datastore.upload_files(['train-dataset/features.npy', 'train-dataset/labels.npy'], target_path='diabetes', overwrite=True)
###Output
_____no_output_____
###Markdown
Create a FileDataset[FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.file_dataset.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public URLs. Using this method, you can download or mount the files to your compute as a FileDataset object. The files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
###Code
from azureml.core import Dataset
dataset = Dataset.File.from_files(path = [(datastore, 'diabetes/')])
# see a list of files referenced by dataset
dataset.to_path()
###Output
_____no_output_____
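###Markdown
Before using the FileDataset on the remote cluster, you can pull the referenced files down locally to inspect them. A minimal sketch using `FileDataset.download`; the local target path here is just an example, and on Linux-based compute the same dataset could alternatively be mounted with `dataset.mount()`.
###Code
# download the referenced .npy files to a local folder and list what was fetched
local_paths = dataset.download(target_path='./diabetes-local', overwrite=True)
print(local_paths)
###Output
_____no_output_____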
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_diabetes.py` in the script_folder.
###Code
%%writefile $script_folder/train_diabetes.py
import os
import glob
import argparse
from azureml.core.run import Run
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
# sklearn.externals.joblib is removed in 0.23
from sklearn import __version__ as sklearnver
from packaging.version import Version
if Version(sklearnver) < Version("0.23.0"):
from sklearn.externals import joblib
else:
import joblib
import numpy as np
parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str, help='training dataset')
args = parser.parse_args()
os.makedirs('./outputs', exist_ok=True)
base_path = args.data_folder
run = Run.get_context()
X = np.load(glob.glob(os.path.join(base_path, '**/features.npy'), recursive=True)[0])
y = np.load(glob.glob(os.path.join(base_path, '**/labels.npy'), recursive=True)[0])
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
data = {'train': {'X': X_train, 'y': y_train},
'test': {'X': X_test, 'y': y_test}}
# list of numbers from 0.0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
for alpha in alphas:
# use Ridge algorithm to create a regression model
reg = Ridge(alpha=alpha)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
mse = mean_squared_error(preds, data['test']['y'])
run.log('alpha', alpha)
run.log('mse', mse)
model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)
with open(model_file_name, 'wb') as file:
joblib.dump(value=reg, filename='outputs/' + model_file_name)
print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))
###Output
_____no_output_____
###Markdown
Configure & Run Now configure your run. We will reuse the same `sklearn_env` environment from the previous run. Once the environment is built, and if you don't change your dependencies, it will be reused in subsequent runs. We will pass in the DatasetConsumptionConfig of our FileDataset to the `'--data-folder'` argument of the script. Azure ML will resolve this to the mount point of the data on the compute target, which we parse in the training script.
###Code
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
script='train_diabetes.py',
# to mount the dataset on the remote compute and pass the mounted path as an argument to the training script
arguments =['--data-folder', dataset.as_mount()],
compute_target=compute_target,
environment=sklearn_env)
run = exp.submit(config=src)
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Display run resultsYou now have a model trained on a remote cluster. Retrieve all the metrics logged during the run, such as the mean squared error recorded for each alpha value:
###Code
run.wait_for_completion()
metrics = run.get_metrics()
print(metrics)
###Output
_____no_output_____
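###Markdown
Since the run logged one `alpha`/`mse` pair per Ridge model, a quick plot makes the sweep easier to read. A minimal sketch with matplotlib, assuming the `metrics` dictionary retrieved above contains the `alpha` and `mse` lists logged by `train_diabetes.py`.
###Code
import matplotlib.pyplot as plt
# visualize mean squared error across the swept alpha values
plt.plot(metrics['alpha'], metrics['mse'], marker='o')
plt.xlabel('alpha')
plt.ylabel('mse')
plt.title('Ridge regression: MSE vs alpha')
plt.show()
###Output
_____no_output_____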
###Markdown
Register datasetsUse the register() method to register datasets to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
###Code
dataset = dataset.register(workspace = ws,
name = 'diabetes dataset',
description='training dataset',
create_new_version=True)
###Output
_____no_output_____
###Markdown
Register models with datasetsThe last step in the training script wrote the model files in a directory named `outputs` in the VM of the cluster where the job is executed. `outputs` is a special directory in that all content in this directory is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace.You can register models with datasets for reproducibility and auditing purposes.
###Code
# find the index where MSE is the smallest
indices = list(range(0, len(metrics['mse'])))
min_mse_index = min(indices, key=lambda x: metrics['mse'][x])
print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(
metrics['mse'][min_mse_index],
metrics['alpha'][min_mse_index]
))
# find the best model
best_alpha = metrics['alpha'][min_mse_index]
model_file_name = 'ridge_{0:.2f}.pkl'.format(best_alpha)
# register the best model with the input dataset
model = run.register_model(model_name='sklearn_diabetes', model_path=os.path.join('outputs', model_file_name),
datasets =[('training data',dataset)])
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Train with Azure Machine Learning datasetsDatasets are categorized into TabularDataset and FileDataset based on how users consume them in training. * A TabularDataset represents data in a tabular format by parsing the provided file or list of files. TabularDataset can be created from csv, tsv, parquet files, SQL query results etc. For the complete list, please visit our [documentation](https://aka.ms/tabulardataset-api-reference). It provides you with the ability to materialize the data into a pandas DataFrame.* A FileDataset references single or multiple files in your datastores or public urls. This provides you with the ability to download or mount the files to your compute. The files can be of any format, which enables a wider range of machine learning scenarios including deep learning.In this tutorial, you will learn how to train with Azure Machine Learning datasets:&x2611; Use datasets directly in your training script&x2611; Use datasets to mount files to a remote compute PrerequisitesIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first if you haven't already established your connection to the AzureML Workspace.
###Code
# Check core SDK version number
import azureml.core
print('SDK version:', azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
Create Experiment**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
###Code
experiment_name = 'train-with-datasets'
from azureml.core import Experiment
exp = Experiment(workspace=ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create or Attach existing compute resourceBy using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
import os
# choose a name for your cluster
compute_name = os.environ.get('AML_COMPUTE_CLUSTER_NAME', 'cpu-cluster')
compute_min_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MIN_NODES', 0)
compute_max_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MAX_NODES', 4)
# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
vm_size = os.environ.get('AML_COMPUTE_CLUSTER_SKU', 'STANDARD_D2_V2')
if compute_name in ws.compute_targets:
compute_target = ws.compute_targets[compute_name]
if compute_target and type(compute_target) is AmlCompute:
print('found compute target. just use it. ' + compute_name)
else:
print('creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
min_nodes=compute_min_nodes,
max_nodes=compute_max_nodes)
# create the cluster
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it will use the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
You now have the necessary packages and compute resources to train a model in the cloud. Use datasets directly in training Create a TabularDatasetBy creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. Every workspace comes with a default [datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data) (and you can register more) which is backed by the Azure blob storage account associated with the workspace. We can use it to transfer data from local to the cloud, and create dataset from it. We will now upload the [Iris data](./train-dataset/Iris.csv) to the default datastore (blob) within your workspace.
###Code
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./train-dataset/iris.csv'],
target_path = 'train-dataset/tabular/',
overwrite = True,
show_progress = True)
###Output
_____no_output_____
###Markdown
Then we will create an unregistered TabularDataset pointing to the path in the datastore. You can also create a dataset from multiple paths. [learn more](https://aka.ms/azureml/howto/createdatasets) [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a Pandas or Spark DataFrame. You can create a TabularDataset object from .csv, .tsv, and parquet files, and from SQL query results. For a complete list, see [TabularDatasetFactory](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py) class.
###Code
from azureml.core import Dataset
dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, 'train-dataset/tabular/iris.csv')])
# preview the first 3 rows of the dataset
dataset.take(3).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_iris.py` in the script_folder.
###Code
script_folder = os.path.join(os.getcwd(), 'train-dataset')
%%writefile $script_folder/train_iris.py
import os
from azureml.core import Dataset, Run
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.externals import joblib
run = Run.get_context()
# get input dataset by name
dataset = run.input_datasets['iris']
df = dataset.to_pandas_dataframe()
x_col = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
y_col = ['species']
x_df = df.loc[:, x_col]
y_df = df.loc[:, y_col]
#dividing X,y into train and test data
x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=223)
data = {'train': {'X': x_train, 'y': y_train},
'test': {'X': x_test, 'y': y_test}}
clf = DecisionTreeClassifier().fit(data['train']['X'], data['train']['y'])
model_file_name = 'decision_tree.pkl'
print('Accuracy of Decision Tree classifier on training set: {:.2f}'.format(clf.score(x_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'.format(clf.score(x_test, y_test)))
os.makedirs('./outputs', exist_ok=True)
with open(model_file_name, 'wb') as file:
joblib.dump(value=clf, filename='outputs/' + model_file_name)
###Output
_____no_output_____
###Markdown
Configure and use datasets as the input to Estimator An estimator is a configuration object you submit to Azure Machine Learning to instruct how to set up the remote environment. Azure Machine Learning has pre-configured estimators for common machine learning frameworks, as well as generic Estimator. Create a SKLearn estimator by specifying:* The name of the estimator object, `est`* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. * The training script name, train_iris.py* The input dataset for training. `as_named_input()` is required so that the input dataset can be referenced by the assigned name in your training script. * The compute target. In this case you will use the AmlCompute you created* The environment definition for the experiment
###Code
from azureml.train.sklearn import SKLearn
est = SKLearn(source_directory=script_folder,
entry_script='train_iris.py',
              # pass dataset object as an input with name 'iris'
inputs=[dataset.as_named_input('iris')],
pip_packages=['azureml-dataprep[fuse]'],
compute_target=compute_target)
###Output
_____no_output_____
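###Markdown
In more recent versions of the SDK, the framework-specific estimators (including `SKLearn`) are deprecated in favor of `ScriptRunConfig` plus an `Environment`, the pattern used elsewhere in this notebook. A hedged sketch of the equivalent submission; the environment name and pip packages below are illustrative stand-ins for the dependencies the estimator would install.
###Code
from azureml.core import Environment, ScriptRunConfig
from azureml.core.conda_dependencies import CondaDependencies
# illustrative environment mirroring the estimator's dependencies
iris_env = Environment('iris-sketch-env')
iris_env.python.conda_dependencies = CondaDependencies.create(pip_packages=['azureml-sdk',
                                                                            'azureml-dataprep[pandas,fuse]',
                                                                            'scikit-learn'])
src_alt = ScriptRunConfig(source_directory=script_folder,
                          script='train_iris.py',
                          arguments=[dataset.as_named_input('iris')],
                          compute_target=compute_target,
                          environment=iris_env)
# run = exp.submit(src_alt)
###Output
_____no_output_____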
###Markdown
Submit job to runSubmit the estimator to the Azure ML experiment to kick off the execution.
###Code
run = exp.submit(est)
from azureml.widgets import RunDetails
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Use datasets to mount files to a remote computeYou can use the `Dataset` object to mount or download files referred by it. When you mount a file system, you attach that file system to a directory (mount point) and make it available to the system. Because mounting loads files at the time of processing, it is usually faster than downloading. Note: mounting is only available for Linux-based compute (DSVM/VM, AMLCompute, HDInsights). Upload data files into datastoreWe will first load diabetes data from `scikit-learn` to the train-dataset folder.
###Code
from sklearn.datasets import load_diabetes
import numpy as np
training_data = load_diabetes()
np.save(file='train-dataset/features.npy', arr=training_data['data'])
np.save(file='train-dataset/labels.npy', arr=training_data['target'])
###Output
_____no_output_____
###Markdown
Now let's upload the 2 files into the default datastore under a path named `diabetes`:
###Code
datastore.upload_files(['train-dataset/features.npy', 'train-dataset/labels.npy'], target_path='diabetes', overwrite=True)
###Output
_____no_output_____
###Markdown
Create a FileDataset[FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.file_dataset.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public URLs. Using this method, you can download or mount the files to your compute as a FileDataset object. The files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
###Code
from azureml.core import Dataset
dataset = Dataset.File.from_files(path = [(datastore, 'diabetes/')])
# see a list of files referenced by dataset
dataset.to_path()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_diabetes.py` in the script_folder.
###Code
%%writefile $script_folder/train_diabetes.py
import os
import glob
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from azureml.core.run import Run
from sklearn.externals import joblib
import numpy as np
os.makedirs('./outputs', exist_ok=True)
run = Run.get_context()
base_path = run.input_datasets['diabetes']
X = np.load(glob.glob(os.path.join(base_path, '**/features.npy'), recursive=True)[0])
y = np.load(glob.glob(os.path.join(base_path, '**/labels.npy'), recursive=True)[0])
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
data = {'train': {'X': X_train, 'y': y_train},
'test': {'X': X_test, 'y': y_test}}
# list of numbers from 0.0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
for alpha in alphas:
# use Ridge algorithm to create a regression model
reg = Ridge(alpha=alpha)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
mse = mean_squared_error(preds, data['test']['y'])
run.log('alpha', alpha)
run.log('mse', mse)
model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)
with open(model_file_name, 'wb') as file:
joblib.dump(value=reg, filename='outputs/' + model_file_name)
print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))
###Output
_____no_output_____
###Markdown
Configure & Run You can ask the system to build a conda environment based on your dependency specification. Once the environment is built, and if you don't change your dependencies, it will be reused in subsequent runs.
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
conda_env = Environment('conda-env')
conda_env.python.conda_dependencies = CondaDependencies.create(pip_packages=['azureml-sdk',
'azureml-dataprep[pandas,fuse]',
'scikit-learn'])
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
script='train_diabetes.py',
# to mount the dataset on the remote compute and pass the mounted path as an argument to the training script
arguments =[dataset.as_named_input('diabetes').as_mount()])
src.run_config.framework = 'python'
src.run_config.environment = conda_env
src.run_config.target = compute_target.name
run = exp.submit(config=src)
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Display run resultsYou now have a model trained on a remote cluster. Retrieve all the metrics logged during the run, such as the mean squared error recorded for each alpha value:
###Code
run.wait_for_completion()
metrics = run.get_metrics()
print(metrics)
###Output
_____no_output_____
###Markdown
Register datasetsUse the register() method to register datasets to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
###Code
dataset = dataset.register(workspace = ws,
name = 'diabetes dataset',
description='training dataset',
create_new_version=True)
###Output
_____no_output_____
###Markdown
Register models with datasetsThe last step in the training script wrote the model files in a directory named `outputs` in the VM of the cluster where the job is executed. `outputs` is a special directory in that all content in this directory is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace.You can register models with datasets for reproducibility and auditing purposes.
###Code
# find the index where MSE is the smallest
indices = list(range(0, len(metrics['mse'])))
min_mse_index = min(indices, key=lambda x: metrics['mse'][x])
print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(
metrics['mse'][min_mse_index],
metrics['alpha'][min_mse_index]
))
# find the best model
best_alpha = metrics['alpha'][min_mse_index]
model_file_name = 'ridge_{0:.2f}.pkl'.format(best_alpha)
# register the best model with the input dataset
model = run.register_model(model_name='sklearn_diabetes', model_path=os.path.join('outputs', model_file_name),
datasets =[('training data',dataset)])
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Train with Azure Machine Learning datasetsDatasets are categorized into TabularDataset and FileDataset based on how users consume them in training. * A TabularDataset represents data in a tabular format by parsing the provided file or list of files. TabularDataset can be created from csv, tsv, parquet files, SQL query results etc. For the complete list, please visit our [documentation](https://aka.ms/tabulardataset-api-reference). It provides you with the ability to materialize the data into a pandas DataFrame.* A FileDataset references single or multiple files in your datastores or public urls. This provides you with the ability to download or mount the files to your compute. The files can be of any format, which enables a wider range of machine learning scenarios including deep learning.In this tutorial, you will learn how to train with Azure Machine Learning datasets:&x2611; Use datasets directly in your training script&x2611; Use datasets to mount files to a remote compute PrerequisitesIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first if you haven't already established your connection to the AzureML Workspace.
###Code
# Check core SDK version number
import azureml.core
print('SDK version:', azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
Create Experiment**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
###Code
experiment_name = 'train-with-datasets'
from azureml.core import Experiment
exp = Experiment(workspace=ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create or Attach existing compute resourceBy using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
import os
# choose a name for your cluster
compute_name = os.environ.get('AML_COMPUTE_CLUSTER_NAME', 'cpu-cluster')
compute_min_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MIN_NODES', 0)
compute_max_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MAX_NODES', 4)
# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
vm_size = os.environ.get('AML_COMPUTE_CLUSTER_SKU', 'STANDARD_D2_V2')
if compute_name in ws.compute_targets:
compute_target = ws.compute_targets[compute_name]
if compute_target and type(compute_target) is AmlCompute:
print('found compute target. just use it. ' + compute_name)
else:
print('creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
min_nodes=compute_min_nodes,
max_nodes=compute_max_nodes)
# create the cluster
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it will use the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
You now have the necessary packages and compute resources to train a model in the cloud. Use datasets directly in training Create a TabularDatasetBy creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. Every workspace comes with a default [datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data) (and you can register more) which is backed by the Azure blob storage account associated with the workspace. We can use it to transfer data from local to the cloud, and create dataset from it. We will now upload the [Iris data](./train-dataset/Iris.csv) to the default datastore (blob) within your workspace.
###Code
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./train-dataset/iris.csv'],
target_path = 'train-dataset/tabular/',
overwrite = True,
show_progress = True)
###Output
_____no_output_____
###Markdown
Then we will create an unregistered TabularDataset pointing to the path in the datastore. You can also create a dataset from multiple paths. [learn more](https://aka.ms/azureml/howto/createdatasets) [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a Pandas or Spark DataFrame. You can create a TabularDataset object from .csv, .tsv, and parquet files, and from SQL query results. For a complete list, see [TabularDatasetFactory](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py) class.
###Code
from azureml.core import Dataset
dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, 'train-dataset/tabular/iris.csv')])
# preview the first 3 rows of the dataset
dataset.take(3).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_iris.py` in the script_folder.
###Code
script_folder = os.path.join(os.getcwd(), 'train-dataset')
%%writefile $script_folder/train_iris.py
import os
from azureml.core import Dataset, Run
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.externals import joblib
run = Run.get_context()
# get input dataset by name
dataset = run.input_datasets['iris']
df = dataset.to_pandas_dataframe()
x_col = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
y_col = ['species']
x_df = df.loc[:, x_col]
y_df = df.loc[:, y_col]
#dividing X,y into train and test data
x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=223)
data = {'train': {'X': x_train, 'y': y_train},
'test': {'X': x_test, 'y': y_test}}
clf = DecisionTreeClassifier().fit(data['train']['X'], data['train']['y'])
model_file_name = 'decision_tree.pkl'
print('Accuracy of Decision Tree classifier on training set: {:.2f}'.format(clf.score(x_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'.format(clf.score(x_test, y_test)))
os.makedirs('./outputs', exist_ok=True)
with open(model_file_name, 'wb') as file:
joblib.dump(value=clf, filename='outputs/' + model_file_name)
###Output
_____no_output_____
###Markdown
Configure and use datasets as the input to Estimator You can ask the system to build a conda environment based on your dependency specification. Once the environment is built, and if you don't change your dependencies, it will be reused in subsequent runs.
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
conda_env = Environment('conda-env')
conda_env.python.conda_dependencies = CondaDependencies.create(pip_packages=['azureml-sdk',
'azureml-dataprep[pandas,fuse]',
'scikit-learn'])
###Output
_____no_output_____
###Markdown
An estimator object is used to submit the run. Azure Machine Learning has pre-configured estimators for common machine learning frameworks, as well as generic Estimator. Create a generic estimator by specifying:* The name of the estimator object, `est`* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. * The training script name, train_iris.py* The input dataset for training* The compute target. In this case you will use the AmlCompute you created* The environment definition for the experiment
###Code
from azureml.train.estimator import Estimator
est = Estimator(source_directory=script_folder,
entry_script='train_iris.py',
                # pass dataset object as an input with name 'iris'
inputs=[dataset.as_named_input('iris')],
compute_target=compute_target,
environment_definition= conda_env)
###Output
_____no_output_____
###Markdown
Submit job to runSubmit the estimator to the Azure ML experiment to kick off the execution.
###Code
run = exp.submit(est)
from azureml.widgets import RunDetails
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Use datasets to mount files to a remote computeYou can use the `Dataset` object to mount or download files referred by it. When you mount a file system, you attach that file system to a directory (mount point) and make it available to the system. Because mounting loads files at the time of processing, it is usually faster than downloading. Note: mounting is only available for Linux-based compute (DSVM/VM, AMLCompute, HDInsights). Upload data files into datastoreWe will first load diabetes data from `scikit-learn` to the train-dataset folder.
###Code
from sklearn.datasets import load_diabetes
import numpy as np
training_data = load_diabetes()
np.save(file='train-dataset/features.npy', arr=training_data['data'])
np.save(file='train-dataset/labels.npy', arr=training_data['target'])
###Output
_____no_output_____
###Markdown
Now let's upload the 2 files into the default datastore under a path named `diabetes`:
###Code
datastore.upload_files(['train-dataset/features.npy', 'train-dataset/labels.npy'], target_path='diabetes', overwrite=True)
###Output
_____no_output_____
###Markdown
Create a FileDataset[FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.file_dataset.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public URLs. Using this method, you can download or mount the files to your compute as a FileDataset object. The files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
###Code
from azureml.core import Dataset
dataset = Dataset.File.from_files(path = [(datastore, 'diabetes/')])
# see a list of files referenced by dataset
dataset.to_path()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_diabetes.py` in the script_folder.
###Code
%%writefile $script_folder/train_diabetes.py
import os
import glob
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from azureml.core.run import Run
from sklearn.externals import joblib
import numpy as np
os.makedirs('./outputs', exist_ok=True)
run = Run.get_context()
base_path = run.input_datasets['diabetes']
X = np.load(glob.glob(os.path.join(base_path, '**/features.npy'), recursive=True)[0])
y = np.load(glob.glob(os.path.join(base_path, '**/labels.npy'), recursive=True)[0])
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
data = {'train': {'X': X_train, 'y': y_train},
'test': {'X': X_test, 'y': y_test}}
# list of numbers from 0.0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
for alpha in alphas:
# use Ridge algorithm to create a regression model
reg = Ridge(alpha=alpha)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
mse = mean_squared_error(preds, data['test']['y'])
run.log('alpha', alpha)
run.log('mse', mse)
model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)
with open(model_file_name, 'wb') as file:
joblib.dump(value=reg, filename='outputs/' + model_file_name)
print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))
###Output
_____no_output_____
###Markdown
Configure & Run
###Code
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
script='train_diabetes.py',
# to mount the dataset on the remote compute and pass the mounted path as an argument to the training script
arguments =[dataset.as_named_input('diabetes').as_mount()])
src.run_config.framework = 'python'
src.run_config.environment = conda_env
src.run_config.target = compute_target.name
run = exp.submit(config=src)
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Display run resultsYou now have a model trained on a remote cluster. Retrieve all the metrics logged during the run, such as the mean squared error recorded for each alpha value:
###Code
run.wait_for_completion()
metrics = run.get_metrics()
print(metrics)
###Output
_____no_output_____
###Markdown
Register datasetsUse the register() method to register datasets to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
###Code
dataset = dataset.register(workspace = ws,
name = 'diabetes dataset',
description='training dataset',
create_new_version=True)
###Output
_____no_output_____
###Markdown
Register models with datasetsThe last step in the training script wrote the model files in a directory named `outputs` in the VM of the cluster where the job is executed. `outputs` is a special directory in that all content in this directory is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace.You can register models with datasets for reproducibility and auditing purposes.
###Code
# find the index where MSE is the smallest
indices = list(range(0, len(metrics['mse'])))
min_mse_index = min(indices, key=lambda x: metrics['mse'][x])
print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(
metrics['mse'][min_mse_index],
metrics['alpha'][min_mse_index]
))
# find the best model
best_alpha = metrics['alpha'][min_mse_index]
model_file_name = 'ridge_{0:.2f}.pkl'.format(best_alpha)
# register the best model with the input dataset
model = run.register_model(model_name='sklearn_diabetes', model_path=os.path.join('outputs', model_file_name),
datasets =[('training data',dataset)])
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Train with Azure Machine Learning datasetsDatasets are categorized into TabularDataset and FileDataset based on how users consume them in training. * A TabularDataset represents data in a tabular format by parsing the provided file or list of files. TabularDataset can be created from csv, tsv, parquet files, SQL query results etc. For the complete list, please visit our [documentation](https://aka.ms/tabulardataset-api-reference). It provides you with the ability to materialize the data into a pandas DataFrame.* A FileDataset references single or multiple files in your datastores or public urls. This provides you with the ability to download or mount the files to your compute. The files can be of any format, which enables a wider range of machine learning scenarios including deep learning.In this tutorial, you will learn how to train with Azure Machine Learning datasets:&x2611; Use datasets directly in your training script&x2611; Use datasets to mount files to a remote compute PrerequisitesIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first if you haven't already established your connection to the AzureML Workspace.
###Code
# Check core SDK version number
import azureml.core
print('SDK version:', azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
Create Experiment**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
###Code
experiment_name = 'train-with-datasets'
from azureml.core import Experiment
exp = Experiment(workspace=ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create or Attach existing compute resourceBy using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
import os
# choose a name for your cluster
compute_name = os.environ.get('AML_COMPUTE_CLUSTER_NAME', 'cpu-cluster')
compute_min_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MIN_NODES', 0)
compute_max_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MAX_NODES', 4)
# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
vm_size = os.environ.get('AML_COMPUTE_CLUSTER_SKU', 'STANDARD_D2_V2')
if compute_name in ws.compute_targets:
compute_target = ws.compute_targets[compute_name]
if compute_target and type(compute_target) is AmlCompute:
print('found compute target. just use it. ' + compute_name)
else:
print('creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
min_nodes=compute_min_nodes,
max_nodes=compute_max_nodes)
# create the cluster
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it will use the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
You now have the necessary packages and compute resources to train a model in the cloud. Use datasets directly in training Create a TabularDatasetBy creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. Every workspace comes with a default [datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data) (and you can register more) which is backed by the Azure blob storage account associated with the workspace. We can use it to transfer data from local to the cloud, and create dataset from it. We will now upload the [Iris data](./train-dataset/Iris.csv) to the default datastore (blob) within your workspace.
###Code
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./train-dataset/iris.csv'],
target_path = 'train-dataset/tabular/',
overwrite = True,
show_progress = True)
###Output
_____no_output_____
###Markdown
Then we will create an unregistered TabularDataset pointing to the path in the datastore. You can also create a dataset from multiple paths. [learn more](https://aka.ms/azureml/howto/createdatasets) [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a Pandas or Spark DataFrame. You can create a TabularDataset object from .csv, .tsv, and parquet files, and from SQL query results. For a complete list, see [TabularDatasetFactory](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py) class.
###Code
from azureml.core import Dataset
dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, 'train-dataset/tabular/iris.csv')])
# preview the first 3 rows of the dataset
dataset.take(3).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_iris.py` in the script_folder.
###Code
script_folder = os.path.join(os.getcwd(), 'train-dataset')
%%writefile $script_folder/train_iris.py
import os
from azureml.core import Dataset, Run
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
# sklearn.externals.joblib is removed in 0.23
from sklearn import __version__ as sklearnver
from packaging.version import Version
if Version(sklearnver) < Version("0.23.0"):
from sklearn.externals import joblib
else:
import joblib
run = Run.get_context()
# get input dataset by name
dataset = run.input_datasets['iris']
df = dataset.to_pandas_dataframe()
x_col = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
y_col = ['species']
x_df = df.loc[:, x_col]
y_df = df.loc[:, y_col]
#dividing X,y into train and test data
x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=223)
data = {'train': {'X': x_train, 'y': y_train},
'test': {'X': x_test, 'y': y_test}}
clf = DecisionTreeClassifier().fit(data['train']['X'], data['train']['y'])
model_file_name = 'decision_tree.pkl'
print('Accuracy of Decision Tree classifier on training set: {:.2f}'.format(clf.score(x_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'.format(clf.score(x_test, y_test)))
os.makedirs('./outputs', exist_ok=True)
with open(model_file_name, 'wb') as file:
joblib.dump(value=clf, filename='outputs/' + model_file_name)
###Output
_____no_output_____
###Markdown
Configure and use datasets as the input to Estimator An estimator is a configuration object you submit to Azure Machine Learning to instruct how to set up the remote environment. Azure Machine Learning has pre-configured estimators for common machine learning frameworks, as well as generic Estimator. Create a SKLearn estimator by specifying:* The name of the estimator object, `est`* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. * The training script name, train_iris.py* The input dataset for training. `as_named_input()` is required so that the input dataset can be referenced by the assigned name in your training script. * The compute target. In this case you will use the AmlCompute you created* The environment definition for the experiment
###Code
from azureml.train.sklearn import SKLearn
est = SKLearn(source_directory=script_folder,
entry_script='train_iris.py',
              # pass dataset object as an input with name 'iris'
inputs=[dataset.as_named_input('iris')],
pip_packages=['azureml-dataset-runtime[fuse]', 'packaging'],
compute_target=compute_target)
###Output
_____no_output_____
###Markdown
Submit job to runSubmit the estimator to the Azure ML experiment to kick off the execution.
###Code
run = exp.submit(est)
from azureml.widgets import RunDetails
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Use datasets to mount files to a remote computeYou can use the `Dataset` object to mount or download files referred by it. When you mount a file system, you attach that file system to a directory (mount point) and make it available to the system. Because mounting loads files at the time of processing, it is usually faster than downloading. Note: mounting is only available for Linux-based compute (DSVM/VM, AMLCompute, HDInsights). Upload data files into datastoreWe will first load diabetes data from `scikit-learn` to the train-dataset folder.
###Code
from sklearn.datasets import load_diabetes
import numpy as np
training_data = load_diabetes()
np.save(file='train-dataset/features.npy', arr=training_data['data'])
np.save(file='train-dataset/labels.npy', arr=training_data['target'])
###Output
_____no_output_____
###Markdown
Now let's upload the 2 files into the default datastore under a path named `diabetes`:
###Code
datastore.upload_files(['train-dataset/features.npy', 'train-dataset/labels.npy'], target_path='diabetes', overwrite=True)
###Output
_____no_output_____
###Markdown
Create a FileDataset[FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.file_dataset.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public URLs. Using this method, you can download or mount the files to your compute as a FileDataset object. The files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
###Code
from azureml.core import Dataset
dataset = Dataset.File.from_files(path = [(datastore, 'diabetes/')])
# see a list of files referenced by dataset
dataset.to_path()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_diabetes.py` in the script_folder.
###Code
%%writefile $script_folder/train_diabetes.py
import os
import glob
from azureml.core.run import Run
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
# sklearn.externals.joblib is removed in 0.23
from sklearn import __version__ as sklearnver
from packaging.version import Version
if Version(sklearnver) < Version("0.23.0"):
from sklearn.externals import joblib
else:
import joblib
import numpy as np
os.makedirs('./outputs', exist_ok=True)
run = Run.get_context()
base_path = run.input_datasets['diabetes']
X = np.load(glob.glob(os.path.join(base_path, '**/features.npy'), recursive=True)[0])
y = np.load(glob.glob(os.path.join(base_path, '**/labels.npy'), recursive=True)[0])
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
data = {'train': {'X': X_train, 'y': y_train},
'test': {'X': X_test, 'y': y_test}}
# list of numbers from 0.0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
for alpha in alphas:
# use Ridge algorithm to create a regression model
reg = Ridge(alpha=alpha)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
mse = mean_squared_error(preds, data['test']['y'])
run.log('alpha', alpha)
run.log('mse', mse)
model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)
with open(model_file_name, 'wb') as file:
joblib.dump(value=reg, filename='outputs/' + model_file_name)
print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))
###Output
_____no_output_____
###Markdown
Configure & Run You can ask the system to build a conda environment based on your dependency specification. Once the environment is built, and if you don't change your dependencies, it will be reused in subsequent runs.
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
conda_env = Environment('conda-env')
conda_env.python.conda_dependencies = CondaDependencies.create(pip_packages=['azureml-core',
'azureml-dataset-runtime[pandas,fuse]',
'scikit-learn',
'packaging'])
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
script='train_diabetes.py',
# to mount the dataset on the remote compute and pass the mounted path as an argument to the training script
arguments =[dataset.as_named_input('diabetes').as_mount()])
src.run_config.framework = 'python'
src.run_config.environment = conda_env
src.run_config.target = compute_target.name
run = exp.submit(config=src)
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Display run resultsYou now have a model trained on a remote cluster. Retrieve all the metrics logged during the run, including the accuracy of the model:
###Code
run.wait_for_completion()
metrics = run.get_metrics()
print(metrics)
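# (illustrative) quick plot of the logged metrics to see how mse varies with alpha;
# assumes matplotlib is available in this kernel
import matplotlib.pyplot as plt
plt.plot(metrics['alpha'], metrics['mse'], marker='o')
plt.xlabel('alpha')
plt.ylabel('mse')
plt.show()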
###Output
_____no_output_____
###Markdown
Register datasetsUse the register() method to register datasets to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
###Code
dataset = dataset.register(workspace = ws,
name = 'diabetes dataset',
description='training dataset',
create_new_version=True)
###Output
_____no_output_____
###Markdown
Register models with datasetsThe last step in the training script wrote the model files in a directory named `outputs` in the VM of the cluster where the job is executed. `outputs` is a special directory in that all content in this directory is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace.You can register models with datasets for reproducibility and auditing purposes.
###Code
# find the index where MSE is the smallest
indices = list(range(0, len(metrics['mse'])))
min_mse_index = min(indices, key=lambda x: metrics['mse'][x])
print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(
metrics['mse'][min_mse_index],
metrics['alpha'][min_mse_index]
))
# find the best model
best_alpha = metrics['alpha'][min_mse_index]
model_file_name = 'ridge_{0:.2f}.pkl'.format(best_alpha)
# register the best model with the input dataset
model = run.register_model(model_name='sklearn_diabetes', model_path=os.path.join('outputs', model_file_name),
datasets =[('training data',dataset)])
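# (illustrative) the registered model, with its linked dataset, can later be retrieved by name
from azureml.core import Model
fetched = Model(ws, name='sklearn_diabetes')
print(fetched.name, fetched.version)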
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Train with Azure Machine Learning datasetsDatasets are categorized into TabularDataset and FileDataset based on how users consume them in training. * A TabularDataset represents data in a tabular format by parsing the provided file or list of files. TabularDataset can be created from csv, tsv, parquet files, SQL query results etc. For the complete list, please visit our [documentation](https://aka.ms/tabulardataset-api-reference). It provides you with the ability to materialize the data into a pandas DataFrame.* A FileDataset references single or multiple files in your datastores or public urls. This provides you with the ability to download or mount the files to your compute. The files can be of any format, which enables a wider range of machine learning scenarios including deep learning.In this tutorial, you will learn how to train with Azure Machine Learning datasets:☑ Use datasets directly in your training script☑ Use datasets to mount files to a remote compute PrerequisitesIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first if you haven't already established your connection to the AzureML Workspace.
###Code
# Check core SDK version number
import azureml.core
print('SDK version:', azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
Create Experiment**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
###Code
experiment_name = 'train-with-datasets'
from azureml.core import Experiment
exp = Experiment(workspace=ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create or Attach existing compute resourceBy using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
import os
# choose a name for your cluster
compute_name = os.environ.get('AML_COMPUTE_CLUSTER_NAME', 'cpu-cluster')
compute_min_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MIN_NODES', 0)
compute_max_nodes = os.environ.get('AML_COMPUTE_CLUSTER_MAX_NODES', 4)
# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
vm_size = os.environ.get('AML_COMPUTE_CLUSTER_SKU', 'STANDARD_D2_V2')
if compute_name in ws.compute_targets:
compute_target = ws.compute_targets[compute_name]
if compute_target and type(compute_target) is AmlCompute:
print('found compute target. just use it. ' + compute_name)
else:
print('creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
min_nodes=compute_min_nodes,
max_nodes=compute_max_nodes)
# create the cluster
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it will use the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
You now have the necessary packages and compute resources to train a model in the cloud. Use datasets directly in training Create a TabularDatasetBy creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. Every workspace comes with a default [datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data) (and you can register more) which is backed by the Azure blob storage account associated with the workspace. We can use it to transfer data from local to the cloud, and create dataset from it. We will now upload the [Iris data](./train-dataset/Iris.csv) to the default datastore (blob) within your workspace.
###Code
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./train-dataset/iris.csv'],
target_path = 'train-dataset/tabular/',
overwrite = True,
show_progress = True)
###Output
_____no_output_____
###Markdown
Then we will create an unregistered TabularDataset pointing to the path in the datastore. You can also create a dataset from multiple paths. [learn more](https://aka.ms/azureml/howto/createdatasets) [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a Pandas or Spark DataFrame. You can create a TabularDataset object from .csv, .tsv, and parquet files, and from SQL query results. For a complete list, see [TabularDatasetFactory](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py) class.
###Code
from azureml.core import Dataset
dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, 'train-dataset/tabular/iris.csv')])
# preview the first 3 rows of the dataset
dataset.take(3).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_iris.py` in the script_folder.
###Code
script_folder = os.path.join(os.getcwd(), 'train-dataset')
%%writefile $script_folder/train_iris.py
import os
from azureml.core import Dataset, Run
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
# sklearn.externals.joblib is removed in 0.23
from sklearn import __version__ as sklearnver
from packaging.version import Version
if Version(sklearnver) < Version("0.23.0"):
from sklearn.externals import joblib
else:
import joblib
run = Run.get_context()
# get input dataset by name
dataset = run.input_datasets['iris']
df = dataset.to_pandas_dataframe()
x_col = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
y_col = ['species']
x_df = df.loc[:, x_col]
y_df = df.loc[:, y_col]
#dividing X,y into train and test data
x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=223)
data = {'train': {'X': x_train, 'y': y_train},
'test': {'X': x_test, 'y': y_test}}
clf = DecisionTreeClassifier().fit(data['train']['X'], data['train']['y'])
model_file_name = 'decision_tree.pkl'
print('Accuracy of Decision Tree classifier on training set: {:.2f}'.format(clf.score(x_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'.format(clf.score(x_test, y_test)))
os.makedirs('./outputs', exist_ok=True)
with open(model_file_name, 'wb') as file:
joblib.dump(value=clf, filename='outputs/' + model_file_name)
###Output
_____no_output_____
###Markdown
Configure and use datasets as the input to Estimator An estimator is a configuration object you submit to Azure Machine Learning to instruct how to set up the remote environment. Azure Machine Learning has pre-configured estimators for common machine learning frameworks, as well as a generic Estimator. Create a SKLearn estimator by specifying:* The name of the estimator object, `est`* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. * The training script name, `train_iris.py`* The input dataset for training. `as_named_input()` is required so that the input dataset can be referenced by the assigned name in your training script. * The compute target. In this case you will use the AmlCompute you created* The environment definition for the experiment
###Code
from azureml.train.sklearn import SKLearn
est = SKLearn(source_directory=script_folder,
entry_script='train_iris.py',
              # pass the dataset object as an input with name 'iris'
inputs=[dataset.as_named_input('iris')],
pip_packages=['azureml-dataprep[fuse]', 'packaging'],
compute_target=compute_target)
###Output
_____no_output_____
###Markdown
Submit job to runSubmit the estimator to the Azure ML experiment to kick off the execution.
###Code
run = exp.submit(est)
from azureml.widgets import RunDetails
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Use datasets to mount files to a remote computeYou can use the `Dataset` object to mount or download the files it refers to. When you mount a file system, you attach that file system to a directory (mount point) and make it available to the system. Because mounting loads files at the time of processing, it is usually faster than downloading. Note: mounting is only available for Linux-based compute (DSVM/VM, AMLCompute, HDInsights). Upload data files into datastoreWe will first load diabetes data from `scikit-learn` to the train-dataset folder.
###Code
from sklearn.datasets import load_diabetes
import numpy as np
training_data = load_diabetes()
np.save(file='train-dataset/features.npy', arr=training_data['data'])
np.save(file='train-dataset/labels.npy', arr=training_data['target'])
###Output
_____no_output_____
###Markdown
Now let's upload the 2 files into the default datastore under a path named `diabetes`:
###Code
datastore.upload_files(['train-dataset/features.npy', 'train-dataset/labels.npy'], target_path='diabetes', overwrite=True)
###Output
_____no_output_____
###Markdown
Create a FileDataset[FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.file_dataset.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public URLs. Using this method, you can download or mount the files to your compute as a FileDataset object. The files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
###Code
from azureml.core import Dataset
dataset = Dataset.File.from_files(path = [(datastore, 'diabetes/')])
# see a list of files referenced by dataset
dataset.to_path()
###Output
_____no_output_____
###Markdown
Create a training scriptTo submit the job to the cluster, first create a training script. Run the following code to create the training script called `train_diabetes.py` in the script_folder.
###Code
%%writefile $script_folder/train_diabetes.py
import os
import glob
from azureml.core.run import Run
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
# sklearn.externals.joblib is removed in 0.23
from sklearn import __version__ as sklearnver
from packaging.version import Version
if Version(sklearnver) < Version("0.23.0"):
from sklearn.externals import joblib
else:
import joblib
import numpy as np
os.makedirs('./outputs', exist_ok=True)
run = Run.get_context()
base_path = run.input_datasets['diabetes']
X = np.load(glob.glob(os.path.join(base_path, '**/features.npy'), recursive=True)[0])
y = np.load(glob.glob(os.path.join(base_path, '**/labels.npy'), recursive=True)[0])
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
data = {'train': {'X': X_train, 'y': y_train},
'test': {'X': X_test, 'y': y_test}}
# list of numbers from 0.0 to 1.0 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
for alpha in alphas:
# use Ridge algorithm to create a regression model
reg = Ridge(alpha=alpha)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
mse = mean_squared_error(preds, data['test']['y'])
run.log('alpha', alpha)
run.log('mse', mse)
model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)
with open(model_file_name, 'wb') as file:
joblib.dump(value=reg, filename='outputs/' + model_file_name)
print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))
###Output
_____no_output_____
###Markdown
Configure & Run You can ask the system to build a conda environment based on your dependency specification. Once the environment is built, and if you don't change your dependencies, it will be reused in subsequent runs.
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
conda_env = Environment('conda-env')
conda_env.python.conda_dependencies = CondaDependencies.create(pip_packages=['azureml-core',
'azureml-dataprep[pandas,fuse]',
'scikit-learn',
'packaging'])
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
script='train_diabetes.py',
# to mount the dataset on the remote compute and pass the mounted path as an argument to the training script
arguments =[dataset.as_named_input('diabetes').as_mount()])
src.run_config.framework = 'python'
src.run_config.environment = conda_env
src.run_config.target = compute_target.name
run = exp.submit(config=src)
# monitor the run
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Display run resultsYou now have a model trained on a remote cluster. Retrieve all the metrics logged during the run, including the accuracy of the model:
###Code
run.wait_for_completion()
metrics = run.get_metrics()
print(metrics)
###Output
_____no_output_____
###Markdown
Register datasetsUse the register() method to register datasets to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
###Code
dataset = dataset.register(workspace = ws,
name = 'diabetes dataset',
description='training dataset',
create_new_version=True)
###Output
_____no_output_____
###Markdown
Register models with datasetsThe last step in the training script wrote the model files in a directory named `outputs` in the VM of the cluster where the job is executed. `outputs` is a special directory in that all content in this directory is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace.You can register models with datasets for reproducibility and auditing purposes.
###Code
# find the index where MSE is the smallest
indices = list(range(0, len(metrics['mse'])))
min_mse_index = min(indices, key=lambda x: metrics['mse'][x])
print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(
metrics['mse'][min_mse_index],
metrics['alpha'][min_mse_index]
))
# find the best model
best_alpha = metrics['alpha'][min_mse_index]
model_file_name = 'ridge_{0:.2f}.pkl'.format(best_alpha)
# register the best model with the input dataset
model = run.register_model(model_name='sklearn_diabetes', model_path=os.path.join('outputs', model_file_name),
datasets =[('training data',dataset)])
###Output
_____no_output_____ |
Exercises/Exercise6.ipynb | ###Markdown
Exercises 6 - Transform the small processing pipeline from Exercise 5 into a function, which takes your noisy image as input and outputs the mask. In principle this requires only copying all relevant lines into a function- Create a new file called mymodule.py and copy your function in there- Try to use that function from your notebook- If you get errors messages, try to debug (make sure variables are defined, modules imported *etc.*)- Instead of hard-coding the threshold level, use it as a function parameter- In a for loop, use thresholds from 100 to 160 in steps of 10 in the function and plot the resulting mask Solutions 6
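Before the worked solution, here is a minimal sketch of the module layout the exercise asks for (file and function names as given in the exercise text; the function body is the same pipeline as in the solution below):
```python
# mymodule.py -- sketch only: paste the create_mask_threshold function from below into this file
# def create_mask_threshold(image, threshold):
#     ...
#     return newmask

# then, in the notebook, once mymodule.py sits next to it:
# import mymodule
# mask = mymodule.create_mask_threshold(image, threshold=140)
```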
###Code
import numpy as np
import matplotlib.pyplot as plt
import skimage.data
#load moon image
image = skimage.data.rocket()
import skimage.color
from skimage.morphology import binary_opening, disk
from skimage.measure import label, regionprops
from skimage.filters import median
def create_mask(image):
image_gray = skimage.color.rgb2gray(image)
#generate normal noise
normal_matrix = np.random.randn(image_gray.shape[0], image_gray.shape[1])
#add it to the image
noisy_image = image_gray + 0.1*normal_matrix
#rescale image to uint8
noisy_image_int = noisy_image-noisy_image.min()
noisy_image_int = 255*noisy_image_int/noisy_image_int.max()
noisy_image_int = noisy_image_int.astype(np.uint8)
#filter image to suppress noise
image_median = median(noisy_image_int, selem=disk(3))
#create mask
mask = image_median>140
#close the mask
mask_closed = binary_opening(mask, selem=disk(2))
#measure regions
image_label = label(mask_closed)
regions = regionprops(image_label, image_median)
area = [x.area for x in regions]
mean_intensity = [x.mean_intensity for x in regions]
newmask = np.zeros(image_label.shape)
for x in regions:
if (x.area<500) and (x.mean_intensity>160):
newmask[x.coords[:,0],x.coords[:,1]]=1
return newmask
mask = create_mask(image)
plt.imshow(mask, cmap = 'gray')
plt.show()
###Output
_____no_output_____
###Markdown
Use threshold as a parameter
###Code
def create_mask_threshold(image, threshold):
image_gray = skimage.color.rgb2gray(image)
#generate normal noise
normal_matrix = np.random.randn(image_gray.shape[0], image_gray.shape[1])
#add it to the image
noisy_image = image_gray + 0.1*normal_matrix
#rescale image to uint8
noisy_image_int = noisy_image-noisy_image.min()
noisy_image_int = 255*noisy_image_int/noisy_image_int.max()
noisy_image_int = noisy_image_int.astype(np.uint8)
#filter image to suppress noise
image_median = median(noisy_image_int, selem=disk(3))
#create mask
mask = image_median>threshold
#close the mask
mask_closed = binary_opening(mask, selem=disk(2))
#measure regions
image_label = label(mask_closed)
regions = regionprops(image_label, image_median)
area = [x.area for x in regions]
mean_intensity = [x.mean_intensity for x in regions]
newmask = np.zeros(image_label.shape)
for x in regions:
if (x.area<500) and (x.mean_intensity>160):
newmask[x.coords[:,0],x.coords[:,1]]=1
return newmask
for th in range(100,160,10):
mask = create_mask_threshold(image, threshold= th)
plt.imshow(mask, cmap = 'gray')
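    plt.title(f'threshold = {th}')  # added label so the threshold sweep is easier to compare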
plt.show()
###Output
_____no_output_____ |
data-ingestion-and-preparation/gpu-cudf-vs-pd.ipynb | ###Markdown
Performance Comparison — pandas Versus RAPIDS cuDFThis tutorial uses `timeit` to compare performance benchmarks with pandas and RAPIDS cuDF. Setting Up a RAPIDS conda Environment with cuDF and cuML To use the cuDF and cuML RAPIDS libraries, you need to create a RAPIDS conda environment and run this notebook with the python kernel.For example, use the following command to create a RAPIDS conda environment named `rapids` with rapids version 0.14 and python 3.7:```shconda create -n rapids -c rapidsai -c nvidia -c anaconda -c conda-forge -c defaults ipykernel rapids=0.14 python=3.7 cudatoolkit=10.1```After that, make sure to open this notebook with the kernel named `conda-rapids`. System Details GPU
###Code
!nvidia-smi -q
###Output
==============NVSMI LOG==============
Timestamp : Thu Jul 2 15:45:49 2020
Driver Version : 440.31
CUDA Version : 10.2
Attached GPUs : 1
GPU 00000000:00:1E.0
Product Name : Tesla V100-SXM2-16GB
Product Brand : Tesla
Display Mode : Enabled
Display Active : Disabled
Persistence Mode : Enabled
Accounting Mode : Disabled
Accounting Mode Buffer Size : 4000
Driver Model
Current : N/A
Pending : N/A
Serial Number : 0323617005627
GPU UUID : GPU-43bd4553-f5b7-55ab-0633-ecba7c3a64d5
Minor Number : 0
VBIOS Version : 88.00.4F.00.09
MultiGPU Board : No
Board ID : 0x1e
GPU Part Number : 900-2G503-0000-000
Inforom Version
Image Version : G503.0201.00.03
OEM Object : 1.1
ECC Object : 5.0
Power Management Object : N/A
GPU Operation Mode
Current : N/A
Pending : N/A
GPU Virtualization Mode
Virtualization Mode : Pass-Through
Host VGPU Mode : N/A
IBMNPU
Relaxed Ordering Mode : N/A
PCI
Bus : 0x00
Device : 0x1E
Domain : 0x0000
Device Id : 0x1DB110DE
Bus Id : 00000000:00:1E.0
Sub System Id : 0x121210DE
GPU Link Info
PCIe Generation
Max : 3
Current : 3
Link Width
Max : 16x
Current : 16x
Bridge Chip
Type : N/A
Firmware : N/A
Replays Since Reset : 0
Replay Number Rollovers : 0
Tx Throughput : 0 KB/s
Rx Throughput : 2000 KB/s
Fan Speed : N/A
Performance State : P0
Clocks Throttle Reasons
Idle : Active
Applications Clocks Setting : Not Active
SW Power Cap : Not Active
HW Slowdown : Not Active
HW Thermal Slowdown : Not Active
HW Power Brake Slowdown : Not Active
Sync Boost : Not Active
SW Thermal Slowdown : Not Active
Display Clock Setting : Not Active
FB Memory Usage
Total : 16160 MiB
Used : 0 MiB
Free : 16160 MiB
BAR1 Memory Usage
Total : 16384 MiB
Used : 2 MiB
Free : 16382 MiB
Compute Mode : Default
Utilization
Gpu : 0 %
Memory : 0 %
Encoder : 0 %
Decoder : 0 %
Encoder Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
FBC Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
Ecc Mode
Current : Enabled
Pending : Enabled
ECC Errors
Volatile
Single Bit
Device Memory : 0
Register File : 0
L1 Cache : 0
L2 Cache : 0
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : 0
Double Bit
Device Memory : 0
Register File : 0
L1 Cache : 0
L2 Cache : 0
Texture Memory : N/A
Texture Shared : N/A
CBU : 0
Total : 0
Aggregate
Single Bit
Device Memory : 4
Register File : 0
L1 Cache : 0
L2 Cache : 0
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : 4
Double Bit
Device Memory : 0
Register File : 0
L1 Cache : 0
L2 Cache : 0
Texture Memory : N/A
Texture Shared : N/A
CBU : 0
Total : 0
Retired Pages
Single Bit ECC : 0
Double Bit ECC : 0
Pending Page Blacklist : No
Temperature
GPU Current Temp : 40 C
GPU Shutdown Temp : 90 C
GPU Slowdown Temp : 87 C
GPU Max Operating Temp : 83 C
Memory Current Temp : 37 C
Memory Max Operating Temp : 85 C
Power Readings
Power Management : Supported
Power Draw : 25.67 W
Power Limit : 300.00 W
Default Power Limit : 300.00 W
Enforced Power Limit : 300.00 W
Min Power Limit : 150.00 W
Max Power Limit : 300.00 W
Clocks
Graphics : 135 MHz
SM : 135 MHz
Memory : 877 MHz
Video : 555 MHz
Applications Clocks
Graphics : 1312 MHz
Memory : 877 MHz
Default Applications Clocks
Graphics : 1312 MHz
Memory : 877 MHz
Max Clocks
Graphics : 1530 MHz
SM : 1530 MHz
Memory : 877 MHz
Video : 1372 MHz
Max Customer Boost Clocks
Graphics : 1530 MHz
Clock Policy
Auto Boost : N/A
Auto Boost Default : N/A
Processes : None
###Markdown
Benchmark Setup InstallationsInstall v3io-generator to create a 1 GB data set for the benchmark.You only need to run the generator once, and then you can reuse the generated data set.
###Code
import sys
!{sys.executable} -m pip install -i https://test.pypi.org/simple/ v3io-generator
!{sys.executable} -m pip install faker
!{sys.executable} -m pip install pytimeparse
###Output
Looking in indexes: https://test.pypi.org/simple/
Collecting v3io-generator
Downloading https://test-files.pythonhosted.org/packages/6c/f6/ba9045111de98747af2c94e10f3dbf74311e6bd3a033c7ea1ca84e084e82/v3io_generator-0.0.27.dev0-py3-none-any.whl (9.3 kB)
Installing collected packages: v3io-generator
Successfully installed v3io-generator-0.0.27.dev0
Collecting faker
Using cached Faker-4.1.1-py3-none-any.whl (1.0 MB)
Requirement already satisfied: python-dateutil>=2.4 in /User/.conda/envs/rapids/lib/python3.7/site-packages (from faker) (2.8.1)
Collecting text-unidecode==1.3
Using cached text_unidecode-1.3-py2.py3-none-any.whl (78 kB)
Requirement already satisfied: six>=1.5 in /User/.conda/envs/rapids/lib/python3.7/site-packages (from python-dateutil>=2.4->faker) (1.15.0)
Installing collected packages: text-unidecode, faker
Successfully installed faker-4.1.1 text-unidecode-1.3
Collecting pytimeparse
Using cached pytimeparse-1.1.8-py2.py3-none-any.whl (10.0 kB)
Installing collected packages: pytimeparse
Successfully installed pytimeparse-1.1.8
###Markdown
> **Note:** You must **restart the Jupyter kernel** to complete the installation. Imports
###Code
import os
import yaml
import time
import datetime
import json
import itertools
# Generator
from v3io_generator import metrics_generator, deployment_generator
# Dataframes
import cudf
import pandas as pd
###Output
_____no_output_____
###Markdown
Configurations
###Code
# Benchmark configurations
metric_names = ['cpu_utilization', 'latency', 'packet_loss', 'throughput']
nlargest = 10
source_file = os.path.join(os.getcwd(), 'data', 'ops.logs') # Use full path
###Output
_____no_output_____
###Markdown
Create the Data SourceUse v3io-generator to create a time-series network-operations dataset for 100 companies, including 4 metrics (CPU utilization, latency, throughput, and packet loss).Then, write the dataset to a JSON file to be used as the data source.
###Code
# Create a metadata factory
dep_gen = deployment_generator.deployment_generator()
faker=dep_gen.get_faker()
# Design the metadata
dep_gen.add_level(name='company',number=100,level_type=faker.company)
# Generate a deployment structure
deployment_df = dep_gen.generate_deployment()
# Initialize the metric values
for metric in metric_names:
deployment_df[metric] = 0
deployment_df.head()
###Output
_____no_output_____
###Markdown
Specify metrics configuration for the generator.
###Code
metrics_configuration = yaml.safe_load("""
errors: {length_in_ticks: 50, rate_in_ticks: 150}
timestamps: {interval: 5s, stochastic_interval: false}
metrics:
cpu_utilization:
accuracy: 2
distribution: normal
distribution_params: {mu: 70, noise: 0, sigma: 10}
is_threshold_below: true
past_based_value: false
produce_max: false
produce_min: false
validation:
distribution: {max: 1, min: -1, validate: false}
metric: {max: 100, min: 0, validate: true}
latency:
accuracy: 2
distribution: normal
distribution_params: {mu: 0, noise: 0, sigma: 5}
is_threshold_below: true
past_based_value: false
produce_max: false
produce_min: false
validation:
distribution: {max: 1, min: -1, validate: false}
metric: {max: 100, min: 0, validate: true}
packet_loss:
accuracy: 0
distribution: normal
distribution_params: {mu: 0, noise: 0, sigma: 2}
is_threshold_below: true
past_based_value: false
produce_max: false
produce_min: false
validation:
distribution: {max: 1, min: -1, validate: false}
metric: {max: 50, min: 0, validate: true}
throughput:
accuracy: 2
distribution: normal
distribution_params: {mu: 250, noise: 0, sigma: 20}
is_threshold_below: false
past_based_value: false
produce_max: false
produce_min: false
validation:
distribution: {max: 1, min: -1, validate: false}
metric: {max: 300, min: 0, validate: true}
""")
###Output
_____no_output_____
###Markdown
Create the data according to the given hierarchy and metrics configuration.
###Code
met_gen = metrics_generator.Generator_df(metrics_configuration,
user_hierarchy=deployment_df,
initial_timestamp=time.time())
metrics = met_gen.generate_range(start_time=datetime.datetime.now(),
end_time=datetime.datetime.now()+datetime.timedelta(hours=62),
as_df=True,
as_iterator=False)
# Verify that the source-file parent directory exists.
os.makedirs(os.path.dirname(source_file), exist_ok=1)
print(f'Saving generated data to: {source_file}')
# Generate file from metrics
with open(source_file, 'w') as f:
metrics_batch = metrics
metrics_batch.to_json(f,
orient='records',
lines=True)
###Output
Saving generated data to: /User/data-ingestion-and-preparation/data/ops.logs
###Markdown
Validate the Target File SizeGet the target size for the test file.
###Code
from pathlib import Path
Path(source_file).stat().st_size
with open(source_file) as myfile:
head = [next(myfile) for x in range(10)]
print(head)
###Output
['{"company":"Williams_and_Sons","cpu_utilization":64.6440138248,"cpu_utilization_is_error":false,"latency":2.9965630871,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":258.7732213917,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Guerrero_Ltd","cpu_utilization":68.5296690547,"cpu_utilization_is_error":false,"latency":0.0,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":288.8039306559,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Harris-Gutierrez","cpu_utilization":55.8557277251,"cpu_utilization_is_error":false,"latency":1.7068227314,"latency_is_error":false,"packet_loss":1.6544231936,"packet_loss_is_error":false,"throughput":265.4031916784,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Shaw-Williams","cpu_utilization":72.8668610421,"cpu_utilization_is_error":false,"latency":1.6477141418,"latency_is_error":false,"packet_loss":0.8709185994,"packet_loss_is_error":false,"throughput":237.5182913153,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Harris_Inc","cpu_utilization":83.5172325497,"cpu_utilization_is_error":false,"latency":7.8220358909,"latency_is_error":false,"packet_loss":1.3942153104,"packet_loss_is_error":false,"throughput":274.9563709951,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Johnson__Smith_and_Lewis","cpu_utilization":65.3007890236,"cpu_utilization_is_error":false,"latency":9.012152204,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":247.0685516947,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Banks-Young","cpu_utilization":80.0440916828,"cpu_utilization_is_error":false,"latency":7.304937434,"latency_is_error":false,"packet_loss":2.1692271547,"packet_loss_is_error":false,"throughput":279.7641913689,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Gonzalez_Group","cpu_utilization":71.4195844054,"cpu_utilization_is_error":false,"latency":0.0,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":260.0017327497,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Moore-Guerrero","cpu_utilization":65.0205705374,"cpu_utilization_is_error":false,"latency":1.7684290753,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":266.4209778666,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Sanchez__Bennett_and_Thompson","cpu_utilization":67.2085370307,"cpu_utilization_is_error":false,"latency":4.5898304002,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":274.830056152,"throughput_is_error":false,"timestamp":1593707325519}\n']
###Markdown
BenchmarkThe benchmark tests use the following flow:- Read file- Compute aggregations- Get the n-largest values
###Code
benchmark_file = source_file
###Output
_____no_output_____
###Markdown
In the following examples, `timeit` is executed in a loop.You can change the number of runs and loops:```%%timeit -n 1 -r 1``` Test Load Times cuDF
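As an aside, adding `-o` to the magic captures a `TimeitResult` object, which makes it easy to compute a rough speedup between the two libraries (a sketch, not part of the original benchmark; `.average` is in seconds):
```python
cudf_res = %timeit -o -n 1 -r 2 cudf.read_json(benchmark_file, lines=True)
pd_res = %timeit -o -n 1 -r 2 pd.read_json(benchmark_file, lines=True)
print(f'load speedup: {pd_res.average / cudf_res.average:.1f}x')
```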
###Code
%%timeit -n 1 -r 2
gdf = cudf.read_json(benchmark_file, lines=True)
###Output
5.04 s ± 35.7 ms per loop (mean ± std. dev. of 2 runs, 1 loop each)
###Markdown
pandas
###Code
%%timeit -n 1 -r 2
pdf = pd.read_json(benchmark_file, lines=True)
###Output
36.7 s ± 202 ms per loop (mean ± std. dev. of 2 runs, 1 loop each)
###Markdown
Test AggregationLoad the files to memory to allow applying `timeit` only to the aggregations.
###Code
gdf = cudf.read_json(benchmark_file, lines=True)
pdf = pd.read_json(benchmark_file, lines=True)
###Output
_____no_output_____
###Markdown
cuDF
###Code
%%timeit -n 1 -r 7
ggdf = gdf.groupby(['company']).\
agg({k: ['min', 'max', 'mean'] for k in metric_names})
raw_nlargest = gdf.nlargest(nlargest, 'cpu_utilization')
###Output
246 ms ± 10.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
pandas
###Code
%%timeit -n 1 -r 7
gpdf = pdf.groupby(['company']).\
agg({k: ['min', 'max', 'mean'] for k in metric_names})
raw_nlargest = pdf.nlargest(nlargest, 'cpu_utilization')
###Output
1.82 s ± 38.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Performance Comparison — pandas Versus RAPIDS cuDFThis tutorial uses `timeit` to compare performance benchmarks with pandas and RAPIDS cuDF. Setting Up a RAPIDS conda Environment with cuDF and cuML To use the cuDF and cuML RAPIDS libraries, you need to create a RAPIDS conda environment and run this notebook with the python kernel.For example, use the following command to create a RAPIDS conda environment named `rapids` with rapids version 0.17 and python 3.7:```shconda create -n rapids -c rapidsai -c nvidia -c anaconda -c conda-forge -c defaults ipykernel rapids=0.17 python=3.7 cudatoolkit=11.0```After that, make sure to open this notebook with the kernel named `conda-rapids`. System Details GPU
###Code
!nvidia-smi -q
###Output
==============NVSMI LOG==============
Timestamp : Thu Jul 2 15:45:49 2020
Driver Version : 440.31
CUDA Version : 10.2
Attached GPUs : 1
GPU 00000000:00:1E.0
Product Name : Tesla V100-SXM2-16GB
Product Brand : Tesla
Display Mode : Enabled
Display Active : Disabled
Persistence Mode : Enabled
Accounting Mode : Disabled
Accounting Mode Buffer Size : 4000
Driver Model
Current : N/A
Pending : N/A
Serial Number : 0323617005627
GPU UUID : GPU-43bd4553-f5b7-55ab-0633-ecba7c3a64d5
Minor Number : 0
VBIOS Version : 88.00.4F.00.09
MultiGPU Board : No
Board ID : 0x1e
GPU Part Number : 900-2G503-0000-000
Inforom Version
Image Version : G503.0201.00.03
OEM Object : 1.1
ECC Object : 5.0
Power Management Object : N/A
GPU Operation Mode
Current : N/A
Pending : N/A
GPU Virtualization Mode
Virtualization Mode : Pass-Through
Host VGPU Mode : N/A
IBMNPU
Relaxed Ordering Mode : N/A
PCI
Bus : 0x00
Device : 0x1E
Domain : 0x0000
Device Id : 0x1DB110DE
Bus Id : 00000000:00:1E.0
Sub System Id : 0x121210DE
GPU Link Info
PCIe Generation
Max : 3
Current : 3
Link Width
Max : 16x
Current : 16x
Bridge Chip
Type : N/A
Firmware : N/A
Replays Since Reset : 0
Replay Number Rollovers : 0
Tx Throughput : 0 KB/s
Rx Throughput : 2000 KB/s
Fan Speed : N/A
Performance State : P0
Clocks Throttle Reasons
Idle : Active
Applications Clocks Setting : Not Active
SW Power Cap : Not Active
HW Slowdown : Not Active
HW Thermal Slowdown : Not Active
HW Power Brake Slowdown : Not Active
Sync Boost : Not Active
SW Thermal Slowdown : Not Active
Display Clock Setting : Not Active
FB Memory Usage
Total : 16160 MiB
Used : 0 MiB
Free : 16160 MiB
BAR1 Memory Usage
Total : 16384 MiB
Used : 2 MiB
Free : 16382 MiB
Compute Mode : Default
Utilization
Gpu : 0 %
Memory : 0 %
Encoder : 0 %
Decoder : 0 %
Encoder Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
FBC Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
Ecc Mode
Current : Enabled
Pending : Enabled
ECC Errors
Volatile
Single Bit
Device Memory : 0
Register File : 0
L1 Cache : 0
L2 Cache : 0
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : 0
Double Bit
Device Memory : 0
Register File : 0
L1 Cache : 0
L2 Cache : 0
Texture Memory : N/A
Texture Shared : N/A
CBU : 0
Total : 0
Aggregate
Single Bit
Device Memory : 4
Register File : 0
L1 Cache : 0
L2 Cache : 0
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : 4
Double Bit
Device Memory : 0
Register File : 0
L1 Cache : 0
L2 Cache : 0
Texture Memory : N/A
Texture Shared : N/A
CBU : 0
Total : 0
Retired Pages
Single Bit ECC : 0
Double Bit ECC : 0
Pending Page Blacklist : No
Temperature
GPU Current Temp : 40 C
GPU Shutdown Temp : 90 C
GPU Slowdown Temp : 87 C
GPU Max Operating Temp : 83 C
Memory Current Temp : 37 C
Memory Max Operating Temp : 85 C
Power Readings
Power Management : Supported
Power Draw : 25.67 W
Power Limit : 300.00 W
Default Power Limit : 300.00 W
Enforced Power Limit : 300.00 W
Min Power Limit : 150.00 W
Max Power Limit : 300.00 W
Clocks
Graphics : 135 MHz
SM : 135 MHz
Memory : 877 MHz
Video : 555 MHz
Applications Clocks
Graphics : 1312 MHz
Memory : 877 MHz
Default Applications Clocks
Graphics : 1312 MHz
Memory : 877 MHz
Max Clocks
Graphics : 1530 MHz
SM : 1530 MHz
Memory : 877 MHz
Video : 1372 MHz
Max Customer Boost Clocks
Graphics : 1530 MHz
Clock Policy
Auto Boost : N/A
Auto Boost Default : N/A
Processes : None
###Markdown
Benchmark Setup InstallationsInstall v3io-generator to create a 1 GB data set for the benchmark.You only need to run the generator once, and then you can reuse the generated data set.
###Code
import sys
!{sys.executable} -m pip install -i https://test.pypi.org/simple/ v3io-generator
!{sys.executable} -m pip install faker
!{sys.executable} -m pip install pytimeparse
###Output
Looking in indexes: https://test.pypi.org/simple/
Collecting v3io-generator
Downloading https://test-files.pythonhosted.org/packages/6c/f6/ba9045111de98747af2c94e10f3dbf74311e6bd3a033c7ea1ca84e084e82/v3io_generator-0.0.27.dev0-py3-none-any.whl (9.3 kB)
Installing collected packages: v3io-generator
Successfully installed v3io-generator-0.0.27.dev0
Collecting faker
Using cached Faker-4.1.1-py3-none-any.whl (1.0 MB)
Requirement already satisfied: python-dateutil>=2.4 in /User/.conda/envs/rapids/lib/python3.7/site-packages (from faker) (2.8.1)
Collecting text-unidecode==1.3
Using cached text_unidecode-1.3-py2.py3-none-any.whl (78 kB)
Requirement already satisfied: six>=1.5 in /User/.conda/envs/rapids/lib/python3.7/site-packages (from python-dateutil>=2.4->faker) (1.15.0)
Installing collected packages: text-unidecode, faker
Successfully installed faker-4.1.1 text-unidecode-1.3
Collecting pytimeparse
Using cached pytimeparse-1.1.8-py2.py3-none-any.whl (10.0 kB)
Installing collected packages: pytimeparse
Successfully installed pytimeparse-1.1.8
###Markdown
> **Note:** You must **restart the Jupyter kernel** to complete the installation. Imports
###Code
import os
import yaml
import time
import datetime
import json
import itertools
# Generator
from v3io_generator import metrics_generator, deployment_generator
# Dataframes
import cudf
import pandas as pd
###Output
_____no_output_____
###Markdown
Configurations
###Code
# Benchmark configurations
metric_names = ['cpu_utilization', 'latency', 'packet_loss', 'throughput']
nlargest = 10
source_file = os.path.join(os.getcwd(), 'data', 'ops.logs') # Use full path
###Output
_____no_output_____
###Markdown
Create the Data SourceUse v3io-generator to create a time-series network-operations dataset for 100 companies, including 4 metrics (CPU utilization, latency, throughput, and packet loss).Then, write the dataset to a JSON file to be used as the data source.
###Code
# Create a metadata factory
dep_gen = deployment_generator.deployment_generator()
faker=dep_gen.get_faker()
# Design the metadata
dep_gen.add_level(name='company',number=100,level_type=faker.company)
# Generate a deployment structure
deployment_df = dep_gen.generate_deployment()
# Initialize the metric values
for metric in metric_names:
deployment_df[metric] = 0
deployment_df.head()
###Output
_____no_output_____
###Markdown
Specify metrics configuration for the generator.
###Code
metrics_configuration = yaml.safe_load("""
errors: {length_in_ticks: 50, rate_in_ticks: 150}
timestamps: {interval: 5s, stochastic_interval: false}
metrics:
cpu_utilization:
accuracy: 2
distribution: normal
distribution_params: {mu: 70, noise: 0, sigma: 10}
is_threshold_below: true
past_based_value: false
produce_max: false
produce_min: false
validation:
distribution: {max: 1, min: -1, validate: false}
metric: {max: 100, min: 0, validate: true}
latency:
accuracy: 2
distribution: normal
distribution_params: {mu: 0, noise: 0, sigma: 5}
is_threshold_below: true
past_based_value: false
produce_max: false
produce_min: false
validation:
distribution: {max: 1, min: -1, validate: false}
metric: {max: 100, min: 0, validate: true}
packet_loss:
accuracy: 0
distribution: normal
distribution_params: {mu: 0, noise: 0, sigma: 2}
is_threshold_below: true
past_based_value: false
produce_max: false
produce_min: false
validation:
distribution: {max: 1, min: -1, validate: false}
metric: {max: 50, min: 0, validate: true}
throughput:
accuracy: 2
distribution: normal
distribution_params: {mu: 250, noise: 0, sigma: 20}
is_threshold_below: false
past_based_value: false
produce_max: false
produce_min: false
validation:
distribution: {max: 1, min: -1, validate: false}
metric: {max: 300, min: 0, validate: true}
""")
###Output
_____no_output_____
###Markdown
Create the data according to the given hierarchy and metrics configuration.
###Code
met_gen = metrics_generator.Generator_df(metrics_configuration,
user_hierarchy=deployment_df,
initial_timestamp=time.time())
metrics = met_gen.generate_range(start_time=datetime.datetime.now(),
end_time=datetime.datetime.now()+datetime.timedelta(hours=62),
as_df=True,
as_iterator=False)
# Verify that the source-file parent directory exists.
os.makedirs(os.path.dirname(source_file), exist_ok=1)
print(f'Saving generated data to: {source_file}')
# Generate file from metrics
with open(source_file, 'w') as f:
metrics_batch = metrics
metrics_batch.to_json(f,
orient='records',
lines=True)
###Output
Saving generated data to: /User/data-ingestion-and-preparation/data/ops.logs
###Markdown
Validate the Target File SizeGet the target size for the test file.
###Code
from pathlib import Path
Path(source_file).stat().st_size
with open(source_file) as myfile:
head = [next(myfile) for x in range(10)]
print(head)
###Output
['{"company":"Williams_and_Sons","cpu_utilization":64.6440138248,"cpu_utilization_is_error":false,"latency":2.9965630871,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":258.7732213917,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Guerrero_Ltd","cpu_utilization":68.5296690547,"cpu_utilization_is_error":false,"latency":0.0,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":288.8039306559,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Harris-Gutierrez","cpu_utilization":55.8557277251,"cpu_utilization_is_error":false,"latency":1.7068227314,"latency_is_error":false,"packet_loss":1.6544231936,"packet_loss_is_error":false,"throughput":265.4031916784,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Shaw-Williams","cpu_utilization":72.8668610421,"cpu_utilization_is_error":false,"latency":1.6477141418,"latency_is_error":false,"packet_loss":0.8709185994,"packet_loss_is_error":false,"throughput":237.5182913153,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Harris_Inc","cpu_utilization":83.5172325497,"cpu_utilization_is_error":false,"latency":7.8220358909,"latency_is_error":false,"packet_loss":1.3942153104,"packet_loss_is_error":false,"throughput":274.9563709951,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Johnson__Smith_and_Lewis","cpu_utilization":65.3007890236,"cpu_utilization_is_error":false,"latency":9.012152204,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":247.0685516947,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Banks-Young","cpu_utilization":80.0440916828,"cpu_utilization_is_error":false,"latency":7.304937434,"latency_is_error":false,"packet_loss":2.1692271547,"packet_loss_is_error":false,"throughput":279.7641913689,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Gonzalez_Group","cpu_utilization":71.4195844054,"cpu_utilization_is_error":false,"latency":0.0,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":260.0017327497,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Moore-Guerrero","cpu_utilization":65.0205705374,"cpu_utilization_is_error":false,"latency":1.7684290753,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":266.4209778666,"throughput_is_error":false,"timestamp":1593707325519}\n', '{"company":"Sanchez__Bennett_and_Thompson","cpu_utilization":67.2085370307,"cpu_utilization_is_error":false,"latency":4.5898304002,"latency_is_error":false,"packet_loss":0.0,"packet_loss_is_error":false,"throughput":274.830056152,"throughput_is_error":false,"timestamp":1593707325519}\n']
###Markdown
BenchmarkThe benchmark tests use the following flow:- Read file- Compute aggregations- Get the n-largest values
###Code
benchmark_file = source_file
###Output
_____no_output_____
###Markdown
In the following examples, `timeit` is executed in a loop.You can change the number of runs and loops:```%%timeit -n 1 -r 1``` Test Load Times cuDF
###Code
%%timeit -n 1 -r 2
gdf = cudf.read_json(benchmark_file, lines=True)
###Output
5.04 s ± 35.7 ms per loop (mean ± std. dev. of 2 runs, 1 loop each)
###Markdown
pandas
###Code
%%timeit -n 1 -r 2
pdf = pd.read_json(benchmark_file, lines=True)
###Output
36.7 s ± 202 ms per loop (mean ± std. dev. of 2 runs, 1 loop each)
###Markdown
Test AggregationLoad the files to memory to allow applying `timeit` only to the aggregations.
###Code
gdf = cudf.read_json(benchmark_file, lines=True)
pdf = pd.read_json(benchmark_file, lines=True)
###Output
_____no_output_____
###Markdown
cuDF
###Code
%%timeit -n 1 -r 7
ggdf = gdf.groupby(['company']).\
agg({k: ['min', 'max', 'mean'] for k in metric_names})
raw_nlargest = gdf.nlargest(nlargest, 'cpu_utilization')
###Output
246 ms ± 10.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
pandas
###Code
%%timeit -n 1 -r 7
gpdf = pdf.groupby(['company']).\
agg({k: ['min', 'max', 'mean'] for k in metric_names})
raw_nlargest = pdf.nlargest(nlargest, 'cpu_utilization')
###Output
1.82 s ± 38.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
|
.ipynb_checkpoints/Dynamics_lab09_SWE2DwithLocalLaxFriedrich_old-checkpoint.ipynb | ###Markdown
AG Dynamics of the Earth Jupyter notebooks Georg Kaufmann Dynamic systems: 9. Shallow-water equations 2D solution----*Georg Kaufmann,Geophysics Section,Institute of Geological Sciences,Freie Universität Berlin,Germany*
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Shallow-water equations in 2D$$\begin{array}{rcl}\frac{\partial h}{\partial t}+ \frac{\partial (hu)}{\partial x}+ \frac{\partial (hv)}{\partial y} = 0 \\\frac{\partial (hu)}{\partial t}+ \frac{\partial (hu^2 + \frac{1}{2}gh^2)}{\partial x}+ \frac{\partial (huv)}{\partial y} = 0 \\\frac{\partial (hv)}{\partial t}+ \frac{\partial (huv)}{\partial x}+ \frac{\partial (hv^2 + \frac{1}{2}gh^2)}{\partial y} = 0\end{array}$$with:- $h$ [m] - water height,- $u,v$ [m/s] - velocity in $x$- and $y$-direction,- $g$ [m/s$^2$] - grav. acceleration,- $t$ [s] - time,- $x,y$ [m] - $x$- and $y$-coordinates.We develop solutions for the **shallow-water equations** for two dimensions, using the`local Lax-Friedrich and Rusanov flux` scheme (LLF-RF):$$\begin{array}{rcl}F_{hu} &=& h u \\F_{hv} &=& h v \\F_{hu^2} &=& hu^2 + \frac{1}{2}gh^2 \\F_{hv^2} &=& hv^2 + \frac{1}{2}gh^2 \\F_{huv} &=& huv \end{array}$$
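For reference, the interface flux assembled in the code cells below follows the Rusanov (local Lax-Friedrichs) form, written here for a generic conserved quantity $q$ with physical flux $F(q)$ and largest local wave speed $\lambda_{\max}$:$$F_{i+1/2} = \tfrac{1}{2}\left(F(q_i)+F(q_{i+1})\right) - \tfrac{\lambda_{\max}}{2}\left(q_{i+1}-q_i\right)$$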
###Code
nx = 101
ny = 101
xmin = 0.; xmax = 10.0
ymin = 0.; ymax = 10.0
T = 5.0
CFL = 0.99
g = 9.81
mu = 2
sigma = 1
x = np.linspace(xmin,xmax,nx)
y = np.linspace(ymin,ymax,ny)
X,Y = np.meshgrid(x,y,indexing='ij')
dx = (x.max()-x.min()) / (nx-1)
dy = (y.max()-y.min()) / (ny-1)
dt = 0.005
print("dx: ",dx," dy: ",dy," dt: ",dt)
h = 0*X
hu = 0*X
hv = 0*X
#h = np.zeros(nx*ny).reshape(ny,nx)
#hu = np.zeros(nx*ny).reshape(ny,nx)
#hv = np.zeros(nx*ny).reshape(ny,nx)
print(h.shape)
print(hu.shape)
print(hv.shape)
def bcNeumann(var):
### NEUMANN BC ###
var[0,:] = var[1,:]
var[-1,:] = var[-2,:]
var[:,0] = var[:,1]
var[:,-1] = var[:,-2]
return var
def bcPeriodic(var):
### PERIODIC BC ###
var[0,:] = var[-2,:]
var[-1,:] = var[1,:]
var[:,0] = var[:,-2]
var[:,-1] = var[:,1]
return var
def dtCourant(dx,maxeigen,time,TMAX,CFL):
dt = CFL*dx/maxeigen
if time+dt>TMAX:
dt = TMAX-time
return dt
def addGhostCells(var,gleft,gright,gbottom,gtop):
var = np.vstack([gtop*np.ones(nx).reshape(1,nx),var,gbottom*np.ones(nx).reshape(1,nx)])
var = np.hstack([gleft*np.ones(ny+2).reshape(ny+2,1),var,gright*np.ones(ny+2).reshape(ny+2,1)])
return var
uu = 1.0
vv = 0.3
# Finite volume with Rusanov flux scheme
# start time
time = 0
dtplot = 1.00
tplot = dtplot
# initial values
h = np.exp(-((X-mu)**2+(Y-mu)**2)/sigma**2)
plt.figure(figsize=(7,7))
plt.contour(X,Y,h,levels=[0.5,0.6,0.7])
# solution
while (time < T):
time = time + dt
# set boundary conditions
h = bcPeriodic(h)
h = np.maximum(0.1,h)
Fhup = h[1:,:]*uu
Fhum = h[:-1,:]*uu
Fhvp = h[:,1:]*vv
Fhvm = h[:,:-1]*vv
#print(Fhup.shape,Fhum.shape)
#print(Fhvp.shape,Fhvm.shape)
Rhx = (Fhup+Fhum)/2 - uu/2*(h[1:,:]-h[:-1,:])
Rhy = (Fhvp+Fhvm)/2 - vv/2*(h[:,1:]-h[:,:-1])
#print(Rhx.shape,Rhy.shape)
h[1:-1,1:-1] = h[1:-1,1:-1] - dt/dx*(Rhx[1:,1:-1]-Rhx[:-1,1:-1]) \
- dt/dy*(Rhy[1:-1,1:]-Rhy[1:-1,:-1])
if (time > tplot):
plt.contour(X,Y,h,levels=[0.5,0.8])
tplot = tplot + dtplot
uu = 1.0
vv = 0.3
# Finite volume with Rusanov flux scheme
# start time
time = 0
dtplot = 1.0
tplot = dtplot
# initial values
h = np.exp(-((X-mu)**2+(Y-mu)**2)/sigma**2)
hu = 0*h
hv = 0*h
plt.figure(figsize=(7,7))
plt.contour(X,Y,h,levels=[0.5,0.6,0.7])
# solution
while (time < T):
time = time + dt
# set boundary conditions
h = bcPeriodic(h)
hu = bcPeriodic(hu)
hv = bcPeriodic(hv)
h = np.maximum(0.1,h)
u = uu*h/h #hu/h
v = vv*h/h #hv/h
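    # note: u and v are held at the constant advection speeds uu and vv here; the hu/hv momentum updates further down are commented out in this old checkpoint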
c = np.sqrt(g*h)
#print(u.shape,v.shape,c.shape)
#print(u.min(),u.max())
#print(v.min(),v.max())
#print(c.min(),c.max())
maxeigenu = np.max(np.maximum(np.abs(u-c),np.abs(u+c)))
maxeigenv = np.max(np.maximum(np.abs(v-c),np.abs(v+c)))
maxeigen = max(maxeigenu,maxeigenv)
# calculate time step according to CFL-condition
#dt = dtCourant(dx,maxeigen,time,T,CFL)
#print('dt: ',dx,dt,maxeigenu,maxeigenv,maxeigen)
Fhup = h[1:,:]*u[1:,:]
Fhum = h[:-1,:]*u[:-1,:]
Fhvp = h[:,1:]*v[:,1:]
Fhvm = h[:,:-1]*v[:,:-1]
Fhu2p = h[1:,:]*u[1:,:]**2 + g*h[1:,:]**2/2
Fhu2m = h[:-1,:]*u[:-1,:]**2 + g*h[:-1,:]**2/2
Fhuvp = h[:,1:]*u[:,1:]*v[:,1:]
Fhuvm = h[:,:-1]*u[:,:-1]*v[:,:-1]
Fhuv2p = h[1:,:]*u[1:,:]*v[1:,:]
Fhuv2m = h[:-1,:]*u[:-1,:]*v[:-1,:]
Fhv2p = h[:,1:]*v[:,1:]**2 + g*h[:,1:]**2/2
Fhv2m = h[:,:-1]*v[:,:-1]**2 + g*h[:,:-1]**2/2
#print(Fhup.shape,Fhum.shape)
#print(Fhvp.shape,Fhvm.shape)
Rhx = (Fhup+Fhum)/2 - uu/2*(h[1:,:]-h[:-1,:])
Rhy = (Fhvp+Fhvm)/2 - vv/2*(h[:,1:]-h[:,:-1])
Rhux = (Fhu2p+Fhu2m)/2 - dx/dt/2*(hu[1:,:]-hu[:-1,:])
Rhuy = (Fhuvp+Fhuvm)/2 - dy/dt/2*(hu[:,1:]-hu[:,:-1])
Rhvx = (Fhuv2p+Fhuv2m)/2 - dx/dt/2*(hv[1:,:]-hv[:-1,:])
Rhvy = (Fhv2p+Fhv2m)/2 - dy/dt/2*(hv[:,1:]-hv[:,:-1])
#print(Rhx.shape,Rhy.shape)
h[1:-1,1:-1] = h[1:-1,1:-1] - dt/dx*(Rhx[1:,1:-1]-Rhx[:-1,1:-1]) \
- dt/dy*(Rhy[1:-1,1:]-Rhy[1:-1,:-1])
#hu[1:-1,1:-1] = hu[1:-1,1:-1] - dt/dx*(Rhux[1:,1:-1]-Rhux[:-1,1:-1]) \
# - dt/dy*(Rhuy[1:-1,1:]-Rhuy[1:-1,:-1])
#hv[1:-1,1:-1] = hv[1:-1,1:-1] - dt/dx*(Rhvx[1:,1:-1]-Rhvx[:-1,1:-1]) \
# - dt/dy*(Rhvy[1:-1,1:]-Rhvy[1:-1,:-1])
if (time > tplot):
plt.contour(X,Y,h,levels=[0.5,0.8])
tplot = tplot + dtplot
###Output
_____no_output_____
###Markdown
Fluxes, Courant time step, patching ...
###Code
def fluxes(h,hu,hv):
Fhu,Fhv,Fhu2,Fhv2,eigen1,eigen2,eigen3,eigen4 = fluxAndLambda(h,hu,hv)
    # np.maximum only takes two arrays; stack the eigenvalues and take the global maximum magnitude
    maxeigen = np.max(np.abs(np.array([eigen1,eigen2,eigen3,eigen4])))
return Fhu,Fhv,Fhu2,Fhv2,maxeigen
def fluxAndLambda(h,hu,hv):
    h = np.maximum(0.1,h)   # keep a minimum water depth to avoid division by zero
    u = hu/h                # velocities recovered from the conserved variables
    v = hv/h
    c = np.sqrt(g*h)
    Fhu = hu
    Fhv = hv
    Fhu2 = Fhu*u + .5*g*(h**2)
    Fhv2 = Fhv*v + .5*g*(h**2)
    return Fhu,Fhv,Fhu2,Fhv2,u-c,u+c,v-c,v+c
def LxFflux(q, fqm, fqp, eigen):
# Lax-Friedrichs Flux
return 0.5*( (fqp+fqm) - 1./eigen*(q[1:] - q[:-1]) )
def dtCourant(dx,maxeigen,time,TMAX,CFL):
dt = CFL*dx/maxeigen
if time+dt>TMAX:
dt = TMAX-time
return dt
def addGhostCells(var,gleft,gright):
return np.hstack([gleft,var,gright])
###Output
_____no_output_____
###Markdown
Boundary conditions ...
###Code
def bcNeumann(var):
### NEUMANN BC ###
var[0,:] = var[1,:]
var[-1,:] = var[-2,:]
var[:,0] = var[:,1]
var[:,-1] = var[:,-2]
return var
def bcPeriodic(var):
### PERIODIC BC ###
var[0,:] = var[-2,:]
var[-1,:] = var[1,:]
var[:,0] = var[:,-2]
var[:,-1] = var[:,1]
return var
###Output
_____no_output_____ |
naverAPI.ipynb | ###Markdown
###Code
import requests
auth_dict = {'X-Naver-Client-Id': 'aITCAPrRSpJfA6kKKSbO','X-Naver-Client-Secret': 'fuxth8kTcc'}
# request template and parameters (kept for reference; not valid Python on their own):
# https://openapi.naver.com/v1/search/{serviceid}
# serviceid=blog
# query=진주
# display=10
# start=1
# sort=sim
# filter=all
uri = 'https://openapi.naver.com/v1/search/blog?query=진주&display=10&start=1&sort=sim'
req = requests.get(uri, headers = auth_dict)
req.status_code
req.content
import json
json.loads(req.content)
body_dict = json.loads(req.content)
type(body_dict), body_dict.keys()
body_dict['total']
type(body_dict['items']), body_dict['items']
body_dict['items'][4]['bloggerlink']
###Output
_____no_output_____
###Markdown
**Naver Developer**``` Public APIs **API common guide** Sample code: API call example with login, API call example without login. Service APIs: Search - Blog. Open API usage application - Application overview: Client ID(n5q*******6), Client Secret(Vnl******h). API settings: configure the keys you need, APIs in use (Search). **Playground (beta)**: API selection (Search - basic search), API URL (https://openapi.naver.com/v1/search/{serviceid}), serviceid(*) (blog), query(*) (search term), display (number of results to return), start (start position of the results), sort (similarity), filter (image links can also be retrieved). **API call**: API request, API call result``` - Client ID : XqGxdxX9SIqvP5eLByKo- Client Secret: kkD6QfQUzQ
###Code
import requests
auth_dict = {'X-Naver-Client-Id': 'XqGxdxX9SIqvP5eLByKo',
'X-Naver-Client-Secret': 'kkD6QfQUzQ'}
###Output
_____no_output_____
###Markdown
- https://openapi.naver.com/v1/search/{serviceid}- serviceid = blog- query = 진주- display = 10- start = 1- sort = sim- filter = all> *https://openapi.naver.com/v1/search/blog?query=진주&display=10&start=1&sort=sim*
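As a side note (not part of the original cells), the same request can also be built by handing the parameters to `requests` as a dict, which then takes care of URL-encoding the Korean query term; this sketch assumes the `auth_dict` headers defined above:

```python
import requests

params = {
    'query': '진주',   # search term
    'display': 10,     # number of results to return
    'start': 1,        # start position
    'sort': 'sim',     # sort by similarity
}
req = requests.get('https://openapi.naver.com/v1/search/blog',
                   headers=auth_dict, params=params)
print(req.status_code)
```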
###Code
url = 'https://openapi.naver.com/v1/search/blog?query=진주&display=10&start=1&sort=sim'
req = requests.get(url, headers=auth_dict)
req.status_code
req.content
import json # better to convert the response body to JSON
body_dict = json.loads(req.content)
type(body_dict), body_dict.keys()
# The keys of the Naver API response come through as-is
# "lastBuildDate": "Tue, 24 Aug 2021 10:52:21 +0900", "total": 3403686, "start": 1,"display": 10, "items": [~~]
body_dict['total']
type(body_dict['items']) # items -> the actual search results
# body_dict['items']
body_dict['items'][4]['bloggerlink'] # access an element of the list
###Output
_____no_output_____ |
03. Linear Classifier/Tutorials/1_Linear_Classifier.ipynb | ###Markdown
Linear Classifier (Logistic Regression) IntroductionIn this tutorial, we'll create a simple linear classifier in TensorFlow. We will implement this model for classifying images of hand-written digits from the so-called MNIST data-set. The structure of the network is presented in the following figure.___Fig. 1-___ Sample Logistic Regression structure implemented for classifying MNIST digits You should be familiar with basic linear algebra, Machine Learning and classification. To specifically learn about the linear classifiers, read [this](https://cs231n.github.io/linear-classify/) article.Technically, in a linear model we will use the simplest function to predict the label $\mathbf{y_i}$ of the image $\mathbf{x_i}$. We'll do so by using a linear mapping like $f(\mathbf{x_i}, \mathbf{W}, \mathbf{b})=\mathbf{W}\mathbf{x_i}+\mathbf{b}$ where $\mathbf{W}$ and $\mathbf{b}$ are called weight matrix and bias vector respectively. 0. Import the required libraries:We will start with importing the required Python libraries.
###Code
# imports
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
 1. Load the MNIST dataFor this tutorial we use the MNIST dataset. MNIST is a dataset of handwritten digits. If you are into machine learning, you might have heard of this dataset by now. MNIST is a kind of benchmark dataset for deep learning and is easily accessible through TensorFlow. The dataset contains $55,000$ examples for training, $5,000$ examples for validation and $10,000$ examples for testing. The digits have been size-normalized and centered in a fixed-size image ($28\times28$ pixels) with values from $0$ to $1$. For simplicity, each image has been flattened and converted to a 1-D numpy array of $784$ features ($28\times28$). If you want to know more about the MNIST dataset you can check __Yann Lecun__'s [website](http://yann.lecun.com/exdb/mnist/). 1.1. Data dimensionHere, we specify the dimensions of the images which will be used in several places in the code below. Defining these variables makes it easier (compared with using hard-coded numbers throughout the code) to modify them later. Ideally these would be inferred from the data that has been read, but here we will just write the numbers. It's important to note that in a linear model, we have to flatten the input images into a vector. Here, each of the $28\times28$ images is flattened into a $1\times784$ vector.
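As a small illustration of this flattening (a toy sketch with a random array, not the MNIST data itself), reshaping moves between the $784$-value vector and the $28\times28$ image:

```python
import numpy as np

img = np.random.rand(28, 28)       # stand-in for a 28x28 image
flat = img.reshape(-1)             # flattened 1-D vector
print(flat.shape)                  # (784,)
print(flat.reshape(28, 28).shape)  # back to (28, 28), e.g. for plotting
```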
###Code
img_h = img_w = 28 # MNIST images are 28x28
img_size_flat = img_h * img_w # 28x28=784, the total number of pixels
n_classes = 10 # Number of classes, one class per digit
###Output
_____no_output_____
###Markdown
 1.2. Helper functions to load the MNIST dataIn this section, we'll write the function which automatically loads the MNIST data and returns it in our desired shape and format. If you want to learn more about loading your data, you may read our __How to Load Your Data in TensorFlow__ tutorial, which explains all the available methods to load your own data, no matter how big it is. Here, we'll simply write a function (__`load_data`__) which has two modes: train (which loads the training and validation images and their corresponding labels) and test (which loads the test images and their corresponding labels). You can replace this function to use your own dataset. Other than a function for loading the images and corresponding labels, we define two more functions:1. __randomize__: which randomizes the order of images and their labels. This is important to make sure that the input images are fed in a completely random order. Moreover, at the beginning of each __epoch__, we will re-randomize the order of data samples to make sure that the trained model is not sensitive to the order of data.2. __get_next_batch__: which selects only a small number of images determined by the batch_size variable (if you don't know why, read about Stochastic Gradient Descent)
###Code
def load_data(mode='train'):
"""
Function to (download and) load the MNIST data
:param mode: train or test
:return: images and the corresponding labels
"""
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
if mode == 'train':
x_train, y_train, x_valid, y_valid = mnist.train.images, mnist.train.labels, \
mnist.validation.images, mnist.validation.labels
return x_train, y_train, x_valid, y_valid
elif mode == 'test':
x_test, y_test = mnist.test.images, mnist.test.labels
return x_test, y_test
def randomize(x, y):
""" Randomizes the order of data samples and their corresponding labels"""
permutation = np.random.permutation(y.shape[0])
shuffled_x = x[permutation, :]
shuffled_y = y[permutation]
return shuffled_x, shuffled_y
def get_next_batch(x, y, start, end):
x_batch = x[start:end]
y_batch = y[start:end]
return x_batch, y_batch
###Output
_____no_output_____
###Markdown
1.3. Load the data and display the sizesNow we can use the defined helper function in __train__ mode which loads the train and validation images and their corresponding labels. We'll also display their sizes:
###Code
# Load MNIST data
x_train, y_train, x_valid, y_valid = load_data(mode='train')
print("Size of:")
print("- Training-set:\t\t{}".format(len(y_train)))
print("- Validation-set:\t{}".format(len(y_valid)))
###Output
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
Size of:
- Training-set: 55000
- Validation-set: 5000
###Markdown
To get a better sense of the data, let's checkout the shapes of the loaded arrays.
###Code
print('x_train:\t{}'.format(x_train.shape))
print('y_train:\t{}'.format(y_train.shape))
print('x_train:\t{}'.format(x_valid.shape))
print('y_valid:\t{}'.format(y_valid.shape))
###Output
x_train: (55000, 784)
y_train: (55000, 10)
x_train: (5000, 784)
y_valid: (5000, 10)
###Markdown
 As you can see, __`x_train`__ and __`x_valid`__ arrays contain $55000$ and $5000$ flattened images (of size $28\times28=784$ values). __`y_train`__ and __`y_valid`__ contain the corresponding labels of the images in the training and validation set respectively. Based on the dimension of the arrays, for each image, we have 10 values as its label. Why? This technique is called __One-Hot Encoding__. This means the labels have been converted from a single number to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i^{th}$ element which is one and means the class is $i$. For example, the One-Hot encoded labels for the first 5 images in the validation set are:
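As a quick illustration of the encoding itself (a small numpy sketch, separate from the loading code above): picking row `label` of an identity matrix gives the one-hot vector for that class.

```python
import numpy as np

label = 3
one_hot = np.eye(10)[label]
print(one_hot)  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```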
###Code
y_valid[:5, :]
###Output
_____no_output_____
###Markdown
 where the $10$ values in each row represent the label assigned to that particular image. 2. HyperparametersHere, we have about $55,000$ images in our training set. It takes a long time to calculate the gradient of the model using all these images. We therefore use __Stochastic Gradient Descent__ which only uses a small batch of images in each iteration of the optimizer. Let's define some of the terms usually used in this context:- __epoch__: one forward pass and one backward pass of __all__ the training examples.- __batch size__: the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.- __iteration__: one forward pass and one backward pass of __one batch__ of the training examples.
###Code
# Hyper-parameters
epochs = 10 # Total number of training epochs
batch_size = 100 # Training batch size
display_freq = 100 # Frequency of displaying the training results
learning_rate = 0.001 # The optimization initial learning rate
###Output
_____no_output_____
###Markdown
 Given the above definitions, each epoch consists of $55,000/100=550$ iterations. 3. Helper functions for creating the network 3.1. Helper functions for creating new variablesAs explained (and also illustrated in Fig. 1), we need to define two variables $\mathbf{W}$ and $\mathbf{b}$ to construct our linear model. These are generally called model parameters and, as explained in our [Tensor Types](https://github.com/easy-tensorflow/easy-tensorflow/blob/master/1_TensorFlow_Basics/Tutorials/2_Tensor_Types.ipynb) tutorial, we use __TensorFlow Variables__ of the proper size and initialization to define them. The following functions will later be used to generate the weight and bias variables of the desired shape:
###Code
def weight_variable(shape):
"""
Create a weight variable with appropriate initialization
:param name: weight name
:param shape: weight shape
:return: initialized weight variable
"""
initer = tf.truncated_normal_initializer(stddev=0.01)
return tf.get_variable('W',
dtype=tf.float32,
shape=shape,
initializer=initer)
def bias_variable(shape):
"""
Create a bias variable with appropriate initialization
:param name: bias variable name
:param shape: bias variable shape
:return: initialized bias variable
"""
initial = tf.constant(0., shape=shape, dtype=tf.float32)
return tf.get_variable('b',
dtype=tf.float32,
initializer=initial)
###Output
_____no_output_____
###Markdown
 4. Create the network graphNow that we have defined all the helper functions to create our model, we can create our network. 4.1. Placeholders for the inputs (x) and corresponding labels (y)First we need to define the proper tensors to feed the input values into our model. As explained in the [Tensor Types](https://github.com/easy-tensorflow/easy-tensorflow/blob/master/1_TensorFlow_Basics/Tutorials/2_Tensor_Types.ipynb) tutorial, a placeholder variable is the suitable choice for the input images and corresponding labels. This allows us to change the inputs (images and labels) to the TensorFlow graph.
###Code
# Create the graph for the linear model
# Placeholders for inputs (x) and outputs(y)
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='X')
y = tf.placeholder(tf.float32, shape=[None, n_classes], name='Y')
###Output
_____no_output_____
###Markdown
Placeholder __`x`__ is defined for the images; its data-type is set to __`float32`__ and the shape is set to __[None, img_size_flat]__, where __`None`__ means that the tensor may hold an arbitrary number of images with each image being a vector of length __`img_size_flat`__.Next we have __`y`__ which is the placeholder variable for the true labels associated with the images that were input in the placeholder variable __`x`__. The shape of this placeholder variable is __[None, num_classes]__ which means it may hold an arbitrary number of labels and each label is a vector of length __`num_classes`__ which is $10$ in this case. 4.2. Create the model structureAfter creating the proper input, we have to pass it to our model. Since we have a linear classifier, we will have __`output_logits`__$=\mathbf{W}\times \mathbf{x} + \mathbf{b}$ and we will use __`tf.nn.softmax`__ to normalize the __`output_logits`__ between $0$ and $1$ as probabilities (predictions) of samples.
###Code
# Create weight matrix initialized randomely from N~(0, 0.01)
W = weight_variable(shape=[img_size_flat, n_classes])
# Create bias vector initialized as zero
b = bias_variable(shape=[n_classes])
output_logits = tf.matmul(x, W) + b
###Output
_____no_output_____
###Markdown
 4.3. Define the loss function, optimizer, accuracy, and predicted classAfter creating the network, we have to calculate the loss and optimize it. Also, to evaluate our model, we have to calculate the `correct_prediction` and `accuracy`. We will also define `cls_prediction` to visualize our results.
###Code
# Define the loss function, optimizer, and accuracy
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=output_logits), name='loss')
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, name='Adam-op').minimize(loss)
correct_prediction = tf.equal(tf.argmax(output_logits, 1), tf.argmax(y, 1), name='correct_pred')
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='accuracy')
# Model predictions
cls_prediction = tf.argmax(output_logits, axis=1, name='predictions')
###Output
WARNING:tensorflow:From <ipython-input-11-c1f11071639f>:2: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.
See tf.nn.softmax_cross_entropy_with_logits_v2.
###Markdown
4.4. Initialize all variablesAs explained in the [Tensor Types](https://github.com/easy-tensorflow/easy-tensorflow/blob/master/1_TensorFlow_Basics/Tutorials/2_Tensor_Types.ipynb) tutorial, we have to invoke a variable initializer operation to initialize all variables.
###Code
# Creating the op for initializing all variables
init = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
 5. TrainAfter creating the graph, it is time to train our model. As explained in the [Graph_and_Session](https://github.com/easy-tensorflow/easy-tensorflow/blob/master/1_TensorFlow_Basics/Tutorials/1_Graph_and_Session.ipynb) tutorial, to train the model we have to create a session and run the graph in it.
###Code
# Create an interactive session (to keep the session in the other cells)
sess = tf.InteractiveSession()
# Initialize all variables
sess.run(init)
# Number of training iterations in each epoch
num_tr_iter = int(len(y_train) / batch_size)
for epoch in range(epochs):
print('Training epoch: {}'.format(epoch + 1))
# Randomly shuffle the training data at the beginning of each epoch
x_train, y_train = randomize(x_train, y_train)
for iteration in range(num_tr_iter):
start = iteration * batch_size
end = (iteration + 1) * batch_size
x_batch, y_batch = get_next_batch(x_train, y_train, start, end)
# Run optimization op (backprop)
feed_dict_batch = {x: x_batch, y: y_batch}
sess.run(optimizer, feed_dict=feed_dict_batch)
if iteration % display_freq == 0:
# Calculate and display the batch loss and accuracy
loss_batch, acc_batch = sess.run([loss, accuracy],
feed_dict=feed_dict_batch)
print("iter {0:3d}:\t Loss={1:.2f},\tTraining Accuracy={2:.01%}".
format(iteration, loss_batch, acc_batch))
# Run validation after every epoch
feed_dict_valid = {x: x_valid[:1000], y: y_valid[:1000]}
loss_valid, acc_valid = sess.run([loss, accuracy], feed_dict=feed_dict_valid)
print('---------------------------------------------------------')
print("Epoch: {0}, validation loss: {1:.2f}, validation accuracy: {2:.01%}".
format(epoch + 1, loss_valid, acc_valid))
print('---------------------------------------------------------')
###Output
Training epoch: 1
iter 0: Loss=2.22, Training Accuracy=24.0%
iter 100: Loss=0.78, Training Accuracy=84.0%
iter 200: Loss=0.53, Training Accuracy=89.0%
iter 300: Loss=0.42, Training Accuracy=89.0%
iter 400: Loss=0.36, Training Accuracy=90.0%
iter 500: Loss=0.33, Training Accuracy=92.0%
---------------------------------------------------------
Epoch: 1, validation loss: 0.40, validation accuracy: 88.7%
---------------------------------------------------------
Training epoch: 2
iter 0: Loss=0.25, Training Accuracy=95.0%
iter 100: Loss=0.35, Training Accuracy=88.0%
iter 200: Loss=0.31, Training Accuracy=91.0%
iter 300: Loss=0.38, Training Accuracy=87.0%
iter 400: Loss=0.37, Training Accuracy=91.0%
iter 500: Loss=0.35, Training Accuracy=91.0%
---------------------------------------------------------
Epoch: 2, validation loss: 0.35, validation accuracy: 90.1%
---------------------------------------------------------
Training epoch: 3
iter 0: Loss=0.43, Training Accuracy=89.0%
iter 100: Loss=0.29, Training Accuracy=92.0%
iter 200: Loss=0.30, Training Accuracy=89.0%
iter 300: Loss=0.30, Training Accuracy=94.0%
iter 400: Loss=0.24, Training Accuracy=92.0%
iter 500: Loss=0.21, Training Accuracy=94.0%
---------------------------------------------------------
Epoch: 3, validation loss: 0.32, validation accuracy: 90.5%
---------------------------------------------------------
Training epoch: 4
iter 0: Loss=0.42, Training Accuracy=88.0%
iter 100: Loss=0.33, Training Accuracy=91.0%
iter 200: Loss=0.27, Training Accuracy=92.0%
iter 300: Loss=0.20, Training Accuracy=94.0%
iter 400: Loss=0.17, Training Accuracy=95.0%
iter 500: Loss=0.30, Training Accuracy=90.0%
---------------------------------------------------------
Epoch: 4, validation loss: 0.31, validation accuracy: 90.7%
---------------------------------------------------------
Training epoch: 5
iter 0: Loss=0.25, Training Accuracy=94.0%
iter 100: Loss=0.25, Training Accuracy=94.0%
iter 200: Loss=0.32, Training Accuracy=94.0%
iter 300: Loss=0.27, Training Accuracy=92.0%
iter 400: Loss=0.30, Training Accuracy=91.0%
iter 500: Loss=0.32, Training Accuracy=90.0%
---------------------------------------------------------
Epoch: 5, validation loss: 0.30, validation accuracy: 91.3%
---------------------------------------------------------
Training epoch: 6
iter 0: Loss=0.28, Training Accuracy=92.0%
iter 100: Loss=0.40, Training Accuracy=89.0%
iter 200: Loss=0.25, Training Accuracy=92.0%
iter 300: Loss=0.29, Training Accuracy=91.0%
iter 400: Loss=0.31, Training Accuracy=92.0%
iter 500: Loss=0.23, Training Accuracy=91.0%
---------------------------------------------------------
Epoch: 6, validation loss: 0.29, validation accuracy: 90.9%
---------------------------------------------------------
Training epoch: 7
iter 0: Loss=0.22, Training Accuracy=97.0%
iter 100: Loss=0.16, Training Accuracy=95.0%
iter 200: Loss=0.20, Training Accuracy=95.0%
iter 300: Loss=0.33, Training Accuracy=88.0%
iter 400: Loss=0.26, Training Accuracy=91.0%
iter 500: Loss=0.23, Training Accuracy=93.0%
---------------------------------------------------------
Epoch: 7, validation loss: 0.29, validation accuracy: 91.5%
---------------------------------------------------------
Training epoch: 8
iter 0: Loss=0.32, Training Accuracy=92.0%
iter 100: Loss=0.28, Training Accuracy=93.0%
iter 200: Loss=0.26, Training Accuracy=94.0%
iter 300: Loss=0.21, Training Accuracy=94.0%
iter 400: Loss=0.27, Training Accuracy=93.0%
iter 500: Loss=0.24, Training Accuracy=94.0%
---------------------------------------------------------
Epoch: 8, validation loss: 0.29, validation accuracy: 91.8%
---------------------------------------------------------
Training epoch: 9
iter 0: Loss=0.24, Training Accuracy=93.0%
iter 100: Loss=0.40, Training Accuracy=90.0%
iter 200: Loss=0.24, Training Accuracy=94.0%
iter 300: Loss=0.27, Training Accuracy=92.0%
iter 400: Loss=0.25, Training Accuracy=93.0%
iter 500: Loss=0.11, Training Accuracy=98.0%
---------------------------------------------------------
Epoch: 9, validation loss: 0.29, validation accuracy: 91.3%
---------------------------------------------------------
Training epoch: 10
iter 0: Loss=0.19, Training Accuracy=95.0%
iter 100: Loss=0.29, Training Accuracy=95.0%
iter 200: Loss=0.28, Training Accuracy=86.0%
iter 300: Loss=0.47, Training Accuracy=87.0%
iter 400: Loss=0.16, Training Accuracy=98.0%
iter 500: Loss=0.19, Training Accuracy=94.0%
---------------------------------------------------------
Epoch: 10, validation loss: 0.28, validation accuracy: 91.4%
---------------------------------------------------------
###Markdown
 6. TestAfter the training is done, we have to test our model to see how well it performs on a new dataset. There are multiple approaches for this purpose. We will use two different methods. 6.1. AccuracyOne way to evaluate our model is to report the accuracy on the test set.
###Code
# Test the network after training
# Accuracy
x_test, y_test = load_data(mode='test')
feed_dict_test = {x: x_test[:1000], y: y_test[:1000]}
loss_test, acc_test = sess.run([loss, accuracy], feed_dict=feed_dict_test)
print('---------------------------------------------------------')
print("Test loss: {0:.2f}, test accuracy: {1:.01%}".format(loss_test, acc_test))
print('---------------------------------------------------------')
###Output
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
---------------------------------------------------------
Test loss: 0.27, test accuracy: 91.7%
---------------------------------------------------------
###Markdown
 6.2. Plot some resultsAnother way to evaluate the model is to visualize the input and the model results and compare them with the true label of the input. This is advantageous in numerous ways. For example, even if you get a decent accuracy, when you plot the results, you might see all the samples have been classified in one class. Another example: when you plot, you can get a rough idea of which examples your model failed on. Let's define the helper functions to plot some correct and misclassified examples. 6.2.1 Helper functions for plotting the results
###Code
def plot_images(images, cls_true, cls_pred=None, title=None):
"""
Create figure with 3x3 sub-plots.
:param images: array of images to be plotted, (9, img_h*img_w)
:param cls_true: corresponding true labels (9,)
:param cls_pred: corresponding true labels (9,)
"""
fig, axes = plt.subplots(3, 3, figsize=(9, 9))
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(28, 28), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
ax_title = "True: {0}".format(cls_true[i])
else:
ax_title = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
ax.set_title(ax_title)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
if title:
plt.suptitle(title, size=20)
plt.show(block=False)
def plot_example_errors(images, cls_true, cls_pred, title=None):
"""
Function for plotting examples of images that have been mis-classified
:param images: array of all images, (#imgs, img_h*img_w)
:param cls_true: corresponding true labels, (#imgs,)
:param cls_pred: corresponding predicted labels, (#imgs,)
"""
# Negate the boolean array.
incorrect = np.logical_not(np.equal(cls_pred, cls_true))
# Get the images from the test-set that have been
# incorrectly classified.
incorrect_images = images[incorrect]
# Get the true and predicted classes for those images.
cls_pred = cls_pred[incorrect]
cls_true = cls_true[incorrect]
# Plot the first 9 images.
plot_images(images=incorrect_images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9],
title=title)
###Output
_____no_output_____
###Markdown
 6.2.2 Visualize correct and misclassified examples
###Code
# Plot some of the correct and misclassified examples
cls_pred = sess.run(cls_prediction, feed_dict=feed_dict_test)
cls_true = np.argmax(y_test[:1000], axis=1)
plot_images(x_test, cls_true, cls_pred, title='Correct Examples')
plot_example_errors(x_test[:1000], cls_true, cls_pred, title='Misclassified Examples')
plt.show()
###Output
_____no_output_____
###Markdown
 After we are finished, we have to close the __`session`__ to free the memory. We could have also used:```pythonwith tf.Session() as sess: ...```Please check our [Graph_and_Session](https://github.com/easy-tensorflow/easy-tensorflow/blob/master/1_TensorFlow_Basics/Tutorials/1_Graph_and_Session.ipynb) tutorial if you do not know the differences between these two implementations.
###Code
sess.close()
###Output
_____no_output_____ |
pipeline/analytics.ipynb | ###Markdown
Tutorial: Building an Web Logs Analytics Data Pipeline Before you beginIf you’ve ever wanted to learn Python online with streaming data, or data that changes quickly, you may be familiar with the concept of a data pipeline. Data pipelines allow you transform data from one representation to another through a series of steps. In this tutorial, we’re going to walk through building a simple data pipeline with help of Kale.A common use case for a data pipeline is figuring out information about the visitors to your web site. If you’re familiar with Google Analytics, you know the value of seeing real-time and historical information on visitors. In this tutorial, we’ll use data from web server logs to answer questions about our visitors.If you’re unfamiliar, every time you visit a web page, your browser is sent data from a web server. To host this tutorial, we use a high-performance web server called Nginx. Here’s how the process of you typing in a URL and seeing a result works:The process of sending a request from a web browser to a server.First, the client sends a request to the web server asking for a certain page. The web server then loads the page from the filesystem and returns it to the client. As it serves the request, the web server writes a line to a log file on the filesystem that contains some metadata about the client and the request. This log enables someone to later see who visited which pages on the website at what time, and perform other analysis.A typical line from the Nginx log could look like this: `X.X.X.X - - [09/Mar/2017:01:15:59 +0000] "GET /blog/assets/css/jupyter.css HTTP/1.1" 200 30294 "http://www.dataquest.io/blog/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/53.0.2785.143 Chrome/53.0.2785.143 Safari/537.36 PingdomPageSpeed/1.0 (pingbot/2.0; +http://www.pingdom.com/)"`Each request is a single line, and lines are appended in chronological order, as requests are made to the server. The format of each line is the Nginx combined format, below are some descriptions of each variable in this format:- **remote_addr**: the ip address of the client making the request to the server.- **remote_user**: if the client authenticated with basic authentication, this is the user name.- **time_local**: the local time when the request was made. For instance 09/Mar/2017:01:15:59 +0000- **request**: the type of request, and the URL that it was made to. For instance GET /blog/assets/css/jupyter.css HTTP/1.1 - **status**: the response status code from the server.- **body_bytes_sent**: the number of bytes sent by the server to the client in the response body.- **http_referrer**: the page that the client was on before sending the current request.- **http_user_agent**: information about the browser and system of the clientAs you can imagine, companies derive a lot of value from knowing which visitors are on their site, and what they’re doing. For example, realizing that users who use the Google Chrome browser rarely visit a certain page may indicate that the page has a rendering issue in that browser.Another example is in knowing how many users from each district visit your site each day. It can help you figure out which district to focus your marketing efforts on. At the simplest level, just knowing how many visitors you have per day can help you understand if your marketing efforts are working properly.In order to calculate these metrics, we need to parse the log files and analyze them. And to do this, we need to construct a data pipeline. 
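To make the field descriptions above concrete, here is a rough sketch (not part of the pipeline itself) of pulling a few of those fields out of the sample line with plain string splitting; the notebook builds a more complete `parse_line` function further down:

```python
sample = ('X.X.X.X - - [09/Mar/2017:01:15:59 +0000] '
          '"GET /blog/assets/css/jupyter.css HTTP/1.1" 200 30294 '
          '"http://www.dataquest.io/blog/" "Mozilla/5.0 ..."')

parts = sample.split(" ")
remote_addr = parts[0]                  # 'X.X.X.X'
time_local = parts[3] + " " + parts[4]  # '[09/Mar/2017:01:15:59 +0000]'
request = " ".join(parts[5:8])          # '"GET /blog/assets/css/jupyter.css HTTP/1.1"'
status = parts[8]                       # '200'
print(remote_addr, time_local, status)
```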
Pipeline structureGetting from raw logs to browser and status counts per day.As you can see above, we go from raw log data to statistical queries where we can see different browser/status counts per day. If necessary, this pipeline can run continuously — when new entries are added to the server log, it grabs them and processes them. There are a few things you’ve hopefully noticed about how we structured the pipeline:- Each pipeline component is separated from the others, and takes in a defined input, and returns a defined output. Each output will be further stored in a Google Storage Bucket to pass the data between pipeline steps. And these cached outputs can be used for further analysis.- We also store the raw log data to a SQLite database. This ensures that if we ever want to run a different analysis, we have access to all of the raw data.- We remove duplicate records. It’s very easy to introduce duplicate data into your analysis process, so deduplicating before passing data through the pipeline is critical.- Each pipeline component feeds data into another component. We want to keep each component as small as possible, so that we can individually scale pipeline components up, or use the outputs for a different type of analysis.Now that we’ve seen how this pipeline looks at a high level, let’s begin. Generating webserver logsIn order to create our data pipeline, we’ll need access to webserver log data. In this step we created a script that will continuously generate fake (but somewhat realistic) log data. After running the following cells, you should see new entries being written to logs.txt in the same folder.
###Code
import os
import sys
import subprocess
# Please note that currently you may have to install the extra packages in this way
# As "!" symbol is not supported yet by Kale
subprocess.check_call([sys.executable, "-m", "pip", "install", "Faker==0.7.9", "--quiet"])
subprocess.check_call([sys.executable, "-m", "pip", "install", "google-cloud-storage==1.24.1", "--quiet"])
import random
from faker import Faker
from datetime import datetime
from google.cloud import storage
LINE = """\
{remote_addr} - - [{time_local} +0000] "{request_type} {request_path} HTTP/1.1" {status} {body_bytes_sent} "{http_referer}" "{http_user_agent}"\
"""
LOG_MAX = 100 # Define the size of webserver logs to be generated
LOG_FILE = "logs.txt"
USER = "hong" # Define the name of your sub-directory in the storage bucket
BUCKET = "gs://web-log-test"
# GCS bucket acts as a transfer station for data passing between pipeline components.
def upload_file_to_gcs(bucket_name, file_name):
"""Upload a file to GCS bucket. Ignore errors."""
try:
subprocess.call(['gsutil', 'cp', file_name, bucket_name])
except:
pass
def download_file_from_gcs(bucket_name, file_name):
"""Download a file from GCS bucket. Ignore errors."""
try:
subprocess.call(['gsutil', 'cp', bucket_name, file_name])
except:
pass
def generate_log_line():
fake = Faker()
now = datetime.now()
remote_addr = fake.ipv4()
time_local = now.strftime('%d/%b/%Y:%H:%M:%S')
request_type = random.choice(["GET", "POST", "PUT"])
request_path = "/" + fake.uri_path()
status = random.choice([200, 401, 403, 404, 500])
body_bytes_sent = random.choice(range(5, 1000, 1))
http_referer = fake.uri()
http_user_agent = fake.user_agent()
log_line = LINE.format(
remote_addr=remote_addr,
time_local=time_local,
request_type=request_type,
request_path=request_path,
status=status,
body_bytes_sent=body_bytes_sent,
http_referer=http_referer,
http_user_agent=http_user_agent
)
return log_line
def write_log_line(log_file, line):
with open(log_file, "a") as f:
f.write(line)
f.write("\n")
def generate_log_file():
"""
Generate the weblog file with defined size.
This file will be stored in the given bucket.
"""
current_log_file = LOG_FILE
lines_written = 0
while lines_written != LOG_MAX:
line = generate_log_line()
write_log_line(current_log_file, line)
lines_written += 1
print("{}{}{}".format("Log file with length ", LOG_MAX, " successfully generated"))
generate_log_file()
upload_file_to_gcs(bucket_name=BUCKET+"/"+USER+"/"+LOG_FILE, file_name=LOG_FILE)
print("Generated log file is uploaded to GCS bucket " + BUCKET + "/" + USER)
###Output
_____no_output_____
###Markdown
Processing and storing webserver logsOnce we’ve finished creating the data, we just need to write some code to ingest (or read in) the logs. The script will need to:- Open the log files and read from them line by line.- Parse each line into fields.- Write each line and the parsed fields to a database.- Ensure that duplicate lines aren’t written to the database.
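The last point (deduplication) is not implemented in the cell below; one possible sketch, using a helper that is not part of the original notebook, is to check whether the exact raw line is already stored before inserting it:

```python
def raw_log_exists(conn, line):
    """Return True if this exact raw log line is already in the logs table."""
    cur = conn.cursor()
    cur.execute("SELECT 1 FROM logs WHERE raw_log = ? LIMIT 1", [line])
    return cur.fetchone() is not None
```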
###Code
import time
import sqlite3
# We picked SQLite in this tutorial because it’s simple, and stores all of the data in a single file.
# This enables us to upload the database into a bucket.
# If you’re more concerned with performance, you might be better off with a database like Postgres.
DB_NAME = "db.sqlite"
DOWNLOAD_FILE = "downloaded_logs.txt"
def create_table():
"""
Create table logs in the SQLite database.
The table schema is defined accroding to the log format.
"""
conn = sqlite3.connect(DB_NAME)
conn.execute("""
CREATE TABLE IF NOT EXISTS logs (
raw_log TEXT NOT NULL,
remote_addr TEXT,
time_local TEXT,
request_type TEXT,
request_path TEXT,
status INTEGER,
body_bytes_sent INTEGER,
http_referer TEXT,
http_user_agent TEXT,
created DATETIME DEFAULT CURRENT_TIMESTAMP
)
""")
conn.close()
def parse_line(line):
"""
Parse each log line by splitting it into structured fields.
Extract all of the fields from the split representation.
"""
split_line = line.split(" ")
if len(split_line) < 12:
return []
remote_addr = split_line[0]
time_local = split_line[3] + " " + split_line[4]
request_type = split_line[5]
request_path = split_line[6]
status = split_line[8]
body_bytes_sent = split_line[9]
http_referer = split_line[10]
http_user_agent = " ".join(split_line[11:])
created = datetime.now().strftime("%Y-%m-%dT%H:%M:%S")
return [
remote_addr,
time_local,
request_type,
request_path,
status,
body_bytes_sent,
http_referer,
http_user_agent,
created
]
def insert_record(line, parsed):
"""Insert a single parsed record into the logs table of the SQLite database."""
conn = sqlite3.connect(DB_NAME,timeout=10)
cur = conn.cursor()
args = [line] + parsed # Parsed is a list of the values parsed earlier
cur.execute('INSERT INTO logs VALUES (?,?,?,?,?,?,?,?,?,?)', args)
conn.commit()
conn.close()
def insert_file_to_db(file_name):
"""Insert the whole parsed file into database."""
try:
f = open(DOWNLOAD_FILE, "r")
lines = f.readlines()
for line in lines:
parsed = parse_line(line.strip())
time.sleep(1)
insert_record(line, parsed)
f.close()
except KeyboardInterrupt:
pass
download_file_from_gcs(bucket_name=BUCKET+"/"+USER+"/"+LOG_FILE, file_name=DOWNLOAD_FILE)
create_table()
insert_file_to_db(file_name=DOWNLOAD_FILE)
upload_file_to_gcs(bucket_name= BUCKET+"/"+USER+"/"+DB_NAME, file_name=DB_NAME)
print(DB_NAME + " is successfully uploaded to GCS bucket " + BUCKET + "/" + USER)
###Output
_____no_output_____
###Markdown
Query data from the databaseNow we want to consume the data generated by pulling data out of the SQLite database and does some counting by day.In the below code, we:- Connect to the database.- Query any rows that have been added after a certain timestamp.- Fetch all the rows, sorting out unique ips by day.- Count different visitor browsers and HTTP response statuses based on fetched rows. [TODO] Finish the pipeline designNow you've gained some knowledge about how to define the pipeline with help of Kale extensions. It's time to do some practices. Take your time to read the following cells, and try to finish the last one/two pipeline components. After this, you could complie the pipeline and submit it into the Kubeflow dashboard. **Hint**: The counting tasks could run parallelly as each pipeline component is separated from the others.
###Code
import csv
import pandas as pd
# Define the constaint date to start the query.
YEAR = 2018
MONTH = 9
DAY = 15
RESULT_BROWSER = "browser_counts.csv"
RESULT_STATUS = "status_counts.csv"
def parse_time(time_str):
try:
time_obj = datetime.strptime(time_str, '[%d/%b/%Y:%H:%M:%S %z]')
except Exception:
time_obj = ""
return time_obj
# Helper functions to count distinct browsers per day.
def get_lines_browser(time_obj):
conn = sqlite3.connect(DB_NAME)
cur = conn.cursor()
cur.execute("SELECT time_local,http_user_agent FROM logs WHERE created > ?", [time_obj])
resp = cur.fetchall()
return resp
def parse_user_agent(user_agent):
"""Parsing the user agent to retrieve the name of the browser."""
browsers = ["Firefox", "Chrome", "Opera", "Safari", "MSIE"]
for browser in browsers:
if browser in user_agent:
return browser
return "Other"
def get_time_and_ip_browser(lines):
"""Extract the ip and time from each row we queried."""
browsers = []
times = []
for line in lines:
times.append(parse_time(line[0]))
browsers.append(parse_user_agent(line[1]))
return browsers, times
def count_browser():
browser_counts = {}
start_time = datetime(year=YEAR, month=MONTH, day=DAY)
lines = get_lines_browser(start_time)
browsers, times = get_time_and_ip_browser(lines)
if len(times) > 0:
start_time = times[-1]
for browser, time_obj in zip(browsers, times):
if browser not in browser_counts:
browser_counts[browser] = 0
browser_counts[browser] += 1
count_list = sorted(browser_counts.items(), key=lambda x: x[0])
with open(RESULT_BROWSER,'w') as file:
writer = csv.writer(file, delimiter=",", lineterminator="\r\n")
writer.writerow(["browser", "count"])
writer.writerows(count_list)
download_file_from_gcs(bucket_name=BUCKET+"/"+USER+"/"+DB_NAME, file_name=DB_NAME)
count_browser()
upload_file_to_gcs(bucket_name=BUCKET+"/"+USER+"/"+RESULT_BROWSER, file_name=RESULT_BROWSER)
print("Count result is successfully uploaded to GCS bucket " + BUCKET + "/" + USER)
# Helper functions to count distinct response statuses per day.
def get_lines_status(time_obj):
conn = sqlite3.connect(DB_NAME)
cur = conn.cursor()
cur.execute("SELECT time_local,status FROM logs WHERE created > ?", [time_obj])
resp = cur.fetchall()
return resp
def parse_response_status(user_status):
"""
Retrieve the HTTP request response status
200: OK
401: Bad Request
403: Forbidden
404: Not Found
500: Internal Server Error
"""
statuses = [200, 401, 403, 404, 500]
for status in statuses:
if status == user_status :
return status
return "Other"
def get_time_and_ip_status(lines):
statuses = []
times = []
for line in lines:
times.append(parse_time(line[0]))
statuses.append(parse_response_status(line[1]))
return statuses, times
def count_status():
statuses_counts = {}
start_time = datetime(year=YEAR, month=MONTH, day=DAY)
lines = get_lines_status(start_time)
statuses, times = get_time_and_ip_status(lines)
if len(times) > 0:
start_time = times[-1]
for status, time_obj in zip(statuses, times):
if status not in statuses_counts:
statuses_counts[status] = 0
statuses_counts[status] += 1
count_list = sorted(statuses_counts.items(), key=lambda x: x[0])
with open(RESULT_STATUS,'w') as file:
writer = csv.writer(file, delimiter=",", lineterminator="\r\n")
writer.writerow(["status", "count"])
writer.writerows(count_list)
download_file_from_gcs(bucket_name=BUCKET+"/"+USER+"/"+DB_NAME, file_name=DB_NAME)
count_status()
upload_file_to_gcs(bucket_name=BUCKET+"/"+USER+"/"+RESULT_STATUS, file_name=RESULT_STATUS)
print("Count result is successfully uploaded to GCS bucket " + BUCKET + "/" + USER)
###Output
_____no_output_____ |
Cancer_image_classifier.ipynb | ###Markdown
Get image data, count types, construct training data
###Code
# imports needed for building the training data below
import glob
import numpy as np
import pandas as pd
import cv2
from tqdm import tqdm

REBUILD_DATA = True
class IMGproc():
#we can reduce the size of the image to try and reduce the amount of data per image
img_size = 100
#dictionary of labels and values
labels = {'akiec': 0, 'bcc': 1, 'bkl': 2, 'df': 3, 'nv': 4, 'mel': 5, 'vasc': 6}
training_data = []
akiec_count = 0
bcc_count = 0
bkl_count = 0
df_count = 0
nv_count = 0
mel_count = 0
vasc_count = 0
#look up and load image files and labels based on metadata
def make_training_data(self):
ham_meta = pd.read_csv('HAM10000_metadata.csv')
image_files_1 = glob.glob('HAM10000_images_part_1/*')
image_files_2 = glob.glob('HAM10000_images_part_2/*')
image_files = np.concatenate((np.array(image_files_1), np.array(image_files_2)), axis=0)
#if we want to limit the number of images so that all images have the same amount
count_limit = 99999
#loop through image files
for i in tqdm(range(len(image_files))):
try:
#get image name
img_name = image_files[i].split('/')[1][:-4]
#check image name with image_id
img_dex = np.where(ham_meta['image_id'].values == img_name)[0][0]
#check cancer type
dx_type = ham_meta['dx'][img_dex]
#check if cancer type has reached limit
if dx_type == 'akiec':
if self.akiec_count >= count_limit:
continue
if dx_type == 'bcc':
if self.bcc_count >= count_limit:
continue
if dx_type == 'bkl':
if self.bkl_count >= count_limit:
continue
if dx_type == 'df':
if self.df_count >= count_limit:
continue
if dx_type == 'nv':
if self.nv_count >= count_limit:
continue
if dx_type == 'mel':
if self.mel_count >= count_limit:
continue
if dx_type == 'vasc':
if self.vasc_count >= count_limit:
continue
#Load image in grayscale
img = cv2.imread(image_files[i], cv2.IMREAD_GRAYSCALE)
#Reduce image size
img = cv2.resize(img, (self.img_size, self.img_size))
#add blur
#img = gaussian_filter(img, sigma = 1)
#Add to training data
self.training_data.append([np.array(img), np.eye(7)[self.labels[dx_type]]])
if dx_type == 'akiec':
self.akiec_count += 1
elif dx_type == 'bcc':
self.bcc_count += 1
elif dx_type == 'bkl':
self.bkl_count += 1
elif dx_type == 'df':
self.df_count += 1
elif dx_type == 'nv':
self.nv_count += 1
elif dx_type == 'mel':
self.mel_count += 1
elif dx_type == 'vasc':
self.vasc_count += 1
except Exception as e:
print(str(e))
#Shuffle training data
np.random.shuffle(self.training_data)
np.save('training_data.npy', self.training_data)
print('akiec:', self.akiec_count)
print('bcc:', self.bcc_count)
print('bkl:', self.bkl_count)
print('df:', self.df_count)
print('nv:', self.nv_count)
print('mel:', self.mel_count)
print('vasc:', self.vasc_count)
if REBUILD_DATA:
img_proc = IMGproc()
img_proc.make_training_data()
###Output
100%|██████████| 10015/10015 [01:55<00:00, 86.51it/s]
/Users/georgevejar/opt/anaconda3/lib/python3.8/site-packages/numpy/core/_asarray.py:171: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
return array(a, dtype, copy=False, order=order, subok=True)
###Markdown
Check data
###Code
training_data = np.load('training_data.npy', allow_pickle=True)
print(len(training_data))
print(len(training_data[0][0]))
print(training_data[1])
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter
img = 36
plt.imshow(training_data[img][0], cmap = 'gray')
plt.show()
plt.hist(training_data[img][0])
plt.show()
# result = gaussian_filter(training_data[img][0], sigma=1)
# #training_data[img][0][np.where(training_data[img][0] >= 160)] = 0
# plt.imshow(result, cmap = 'gray')
# plt.show()
# plt.hist(result)
# plt.show()
###Output
_____no_output_____
###Markdown
Create Neural Net
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
#Convolutional layers
self.conv1 = nn.Conv2d(1,32, 5) # input is 1 image, 32 output channels, 5x5 kernel / window
self.conv2 = nn.Conv2d(32,64, 5) # input is 32, bc the first layer output 32. Then we say the output will be 64 channels, 5x5 conv
self.conv3 = nn.Conv2d(64,128, 5)
#test data
x = torch.randn(100,100).view(-1,1,100,100)
self._to_linear = None
self.convs(x)
#linear fully connected layers
self.fc1 = nn.Linear(self._to_linear, 512) #flattening
self.fc2 = nn.Linear(512, 7) # 512 in, 7 out bc we're doing 7 classes
def convs(self, x):
#max pooling and relu
x = F.max_pool2d(F.relu(self.conv1(x)), (2,2))
x = F.max_pool2d(F.relu(self.conv2(x)), (2,2))
x = F.max_pool2d(F.relu(self.conv3(x)), (2,2))
print(x[0].shape)
if self._to_linear is None:
self._to_linear = x[0].shape[0]*x[0].shape[1]*x[0].shape[2]
return x
def forward(self, x):
x = self.convs(x)
x = x.view(-1, self._to_linear) # .view is reshape ... this flattens X before
x = F.relu(self.fc1(x))
x = self.fc2(x) # bc this is our output layer. No activation here.
return F.softmax(x, dim=1)
net = Net()
print(net)
import torch.optim as optim
optimizer = optim.Adam(net.parameters(), lr = 0.001)
loss_function = nn.MSELoss()
X = torch.Tensor([i[0] for i in training_data]).view(-1, 100,100)
X = X/255.0
y = torch.Tensor([i[1] for i in training_data])
VAL_PCT = 0.2 # lets reserve 20% of our data for validation
val_size = int(len(X)*VAL_PCT)
print(val_size)
train_X = X[:-val_size]
train_y = y[:-val_size]
test_X = X[-val_size:]
test_y = y[-val_size:]
print(len(train_X))
print(len(test_X))
BATCH_SIZE = 10
EPOCHS = 1
for epoch in range(EPOCHS):
for i in tqdm(range(0, len(train_X), BATCH_SIZE)): # from 0, to the len of x, stepping BATCH_SIZE at a time.
#print(i, i+BATCH_SIZE)
batch_X = train_X[i:i+BATCH_SIZE].view(-1, 1, 100,100)
batch_y = train_y[i:i+BATCH_SIZE]
net.zero_grad()
outputs = net(batch_X)
#print(len(outputs), len(batch_y))
loss = loss_function(outputs, batch_y)
loss.backward()
optimizer.step() # Does the update
print(loss)
correct = 0
total = 0
with torch.no_grad():
for i in tqdm(range(len(test_X))):
real_class = torch.argmax(test_y[i])
net_out = net(test_X[i].view(-1,1,100,100))[0] # returns a list
predicted_class = torch.argmax(net_out)
if predicted_class == real_class:
correct += 1
total += 1
print("Accuracy:", round(correct/total,3))
###Output
0%| | 0/2003 [00:00<?, ?it/s] |
Numpy 3.ipynb | ###Markdown
 Derivatives can be calculated numerically with the finite difference method as: f'(x) ≈ [f(x+Δx) - f(x-Δx)] / (2Δx). Construct a 1D Numpy array containing the values of x in the interval [0,π/2] with spacing Δx=0.1. Evaluate numerically the derivative of sin in this interval (excluding the end points) using the above formula. Try to avoid for loops. Compare the result to the function cos in the same interval
###Code
import numpy as np

dx = 0.1
# grid on [0, pi/2] with spacing dx
val = np.arange(0, np.pi/2, dx)
# central difference at the interior points val[1:-1]: [f(x+dx) - f(x-dx)] / (2*dx)
sinans = (np.sin(val[2:]) - np.sin(val[:-2]))/(2*dx)
cosans = np.cos(val[1:-1])   # analytic derivative of sin, for comparison
print("sin'", sinans)
print("cos ", cosans)
import numpy as np
from matplotlib import pyplot as plt

dx = 0.1
val = np.arange(0, np.pi/2, dx)
# numerical derivative of sin at the interior points
sinans = (np.sin(val[2:]) - np.sin(val[:-2]))/(2*dx)
cosans = np.cos(val[1:-1])   # analytic cos for comparison
plt.plot(val[1:-1], sinans, label="numerical d(sin)/dx")
plt.plot(val[1:-1], cosans, "--", label="cos(x)")
plt.legend()
plt.show()
###Output
_____no_output_____ |
week3/lavitra.kshitij/Q4 - 3/Attempt1_filesubmission_Copy_of_pure_pursuit.ipynb | ###Markdown
Configurable parameters for pure pursuit+ How fast do you want the robot to move? It is fixed at $v_{max}$ in this exercise+ When can we declare the goal has been reached?+ What is the lookahead distance? Determines the next position on the reference path that we want the vehicle to catch up to
###Code
import numpy as np
import matplotlib.pyplot as plt

vmax = 0.75
goal_threshold = 0.05
lookahead = 3.0
#You know what to do!
def simulate_unicycle(pose, v,w, dt=0.1):
x, y, t = pose
return x + v*np.cos(t)*dt, y + v*np.sin(t)*dt, t+w*dt
class PurePursuitTracker(object):
def __init__(self, x, y, v, lookahead = 3.0):
"""
Tracks the path defined by x, y at velocity v
x and y must be numpy arrays
v and lookahead are floats
"""
self.length = len(x)
self.ref_idx = 0 #index on the path that tracker is to track
self.lookahead = lookahead
self.x, self.y = x, y
self.v, self.w = v, 0
def update(self, xc, yc, theta):
"""
Input: xc, yc, theta - current pose of the robot
Update v, w based on current pose
Returns True if trajectory is over.
"""
#Calculate ref_x, ref_y using current ref_idx
#Check if we reached the end of path, then return TRUE
#Two conditions must satisfy
#1. ref_idx exceeds length of traj
#2. ref_x, ref_y must be within goal_threshold
# Write your code to check end condition
        # Check the end condition before indexing, so ref_idx never runs past the arrays.
        # ref_idx only advances past the last waypoint once the robot has come within
        # `lookahead` of the goal point, which we treat as having arrived.
        if self.ref_idx >= self.length:
            return True
        ref_x, ref_y = self.x[self.ref_idx], self.y[self.ref_idx]
        #End of path has not been reached
        #update ref_idx using np.hypot([ref_x-xc, ref_y-yc]) < lookahead
        if np.hypot(ref_x-xc, ref_y-yc) < self.lookahead:
            self.ref_idx += 1
#Find the anchor point
# this is the line we drew between (0, 0) and (x, y)
anchor = np.asarray([ref_x - xc, ref_y - yc])
#Remember right now this is drawn from current robot pose
#we have to rotate the anchor to (0, 0, pi/2)
#code is given below for this
theta = np.pi/2 - theta
rot = np.asarray([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
anchor = np.dot(rot, anchor)
L = (anchor[0] ** 2 + anchor[1] **2)**0.5 # dist to reference path
X = anchor[0] #cross-track error
#from the derivation in notes, plug in the formula for omega
self.w = -2 * self.v / self.lookahead**2 * X
return False
###Output
_____no_output_____
###Markdown
Visualize given trajectory
###Code
x = np.arange(0, 50, 0.5)
y = [np.sin(idx / 5.0) * idx / 2.0 for idx in x]
#write code here
plt.figure()
plt.plot(x,y)
plt.grid()
###Output
_____no_output_____
###Markdown
Run the tracker simulation1. Instantiate the tracker class2. Initialize some starting pose3. Simulate robot motion 1 step at a time - get $v$, $\omega$ from tracker, predict new pose using $v$, $\omega$, current pose in simulate_unicycle()4. Stop simulation if tracker declares that end-of-path is reached5. Record all parameters
###Code
#write code to instantiate the tracker class
tracker = PurePursuitTracker(np.asarray(x), np.asarray(y), vmax, lookahead)
pose = -1, 0, np.pi/2 #arbitrary initial pose
x0,y0,t0 = pose # record it for plotting
traj =[]
while True:
#write the usual code to obtain successive poses
pose = simulate_unicycle(pose, tracker.v, tracker.w)
if tracker.update(*pose):
print("ARRIVED!!")
break
traj.append([*pose, tracker.w, tracker.ref_idx])
xs,ys,ts,ws,ids = zip(*traj)
plt.figure()
plt.plot(x,y,label='Reference')
plt.quiver(x0,y0, np.cos(t0), np.sin(t0),scale=12)
plt.plot(xs,ys,label='Tracked')
x0,y0,t0 = pose
plt.quiver(x0,y0, np.cos(t0), np.sin(t0),scale=12)
plt.title('Pure Pursuit trajectory')
plt.legend()
plt.grid()
###Output
_____no_output_____
###Markdown
Visualize curvature
###Code
plt.figure()
plt.title('Curvature')
plt.plot(np.abs(ws))
plt.grid()
###Output
_____no_output_____
###Markdown
AnimateMake a video to plot the current pose of the robot and reference pose it is trying to track. You can use funcAnimation in matplotlib
###Code
from matplotlib.animation import FuncAnimation
from matplotlib import rc

fig, ax = plt.subplots()
ax.set_xlim(-10,50)
ax.set_ylim(-22,22)
ax.plot(x, y, '--', label='Reference')
line, = ax.plot([], [], label='Tracked')
ax.legend()

def animation_frame(i):
    # draw the tracked trajectory up to the i-th simulated pose
    line.set_data(xs[:i], ys[:i])
    return line,

animation = FuncAnimation(fig, func=animation_frame, frames=len(xs), interval=10)
rc('animation', html='jshtml')
rc('animation', embed_limit=4096)
plt.show()
animation
###Output
_____no_output_____
###Markdown
Effect of noise in simulationsWhat happens if you add a bit of Gaussian noise to the simulate_unicycle() output? Is the tracker still robust?The noise signifies that $v$, $\omega$ commands did not get realized exactly
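One possible sketch (an assumption about how to model the noise, not a definitive answer): add zero-mean Gaussian noise to the realized velocities inside the simulation step and re-run the tracking loop above with this function in place of `simulate_unicycle`. The noise levels `sigma_v` and `sigma_w` below are arbitrary.

```python
def simulate_unicycle_noisy(pose, v, w, dt=0.1, sigma_v=0.02, sigma_w=0.02):
    # the commanded v, w are realized with small Gaussian errors
    x, y, t = pose
    v_actual = v + np.random.normal(0, sigma_v)
    w_actual = w + np.random.normal(0, sigma_w)
    return x + v_actual*np.cos(t)*dt, y + v_actual*np.sin(t)*dt, t + w_actual*dt
```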
###Code
###Output
_____no_output_____ |
Coursera/algorithm_trading_course/Analysing Returns/Returns.ipynb | ###Markdown
###Code
prices_a = [8.70, 8.91, 8.71]
8.91/8.70 -1
8.71/8.91-1
print(prices_a[1:]) #all except the first one
prices_a[:-1] #all from the beginning except for the last one
#prices_a[1:]/prices_a[:-1]-1 --> won't work: plain Python lists don't support element-wise division
import numpy as np
prices_a = np.array([8.70,8.91,8.71])
prices_a
prices_a[1:]/prices_a[:-1]-1
import pandas as pd
prices = pd.DataFrame({'Blue':[8.70,8.91,8.71, 8.43, 8.73]
,
'Orange':[10.66,11.08,10.71,11.59,12.11]})
prices
prices.iloc[1:] #starting from 1 all the way to the end
prices.iloc[:-1]
prices.iloc[1:]/prices.iloc[:-1]
#because of the serial numbers also called as unwanted index (0,1,2,3) ....because of pandas alignment (covered in pandas crash course)
###Output
_____no_output_____
###Markdown
```We can get rid of this issue by removing the index (extracting the values) from any one of the rows. Extracting values from both returns a numpy matrix```
###Code
prices.iloc[1:].values/prices.iloc[:-1] -1
# as long as one of them does not have an index, there will be no index to align with and the output will be perfect
prices.iloc[1:].values/prices.iloc[:-1].values -1
#divide values, gets rid of the error above .... does pure positional division and returns a numpy matrix
prices.iloc[1:]/prices.iloc[:-1].values -1
# as long as one of them does not have an index, there will be no index to align with and the output will be perfect
prices
prices/prices.shift(1) -1 #no returns for the first day, we don't have prices for the day before day 1
prices.pct_change() #calculates percentage change by using a function
prices = pd.read_csv('sample_prices.csv')
prices
returns = prices.pct_change()
returns
###Output
_____no_output_____
###Markdown
 Type ```%matplotlib inline``` if you are using an older version
###Code
prices.plot()
returns.plot.bar()
returns.head() #look at first few rows of return
returns.std() #calculate standard deviation
returns.mean() #calculates the average
returns+1 #does vector addition, adds 1 element wise
np.prod(returns+1)-1 #takes returns, then adds 1 to it and multiplies each column
###Output
_____no_output_____
###Markdown
 ```Blue gives us a compounded return of 1.23%, Orange gives us a compounded return of 8.71%```
###Code
(returns+1).prod()-1
#since returns+1 is itself a dataframe, we can call the prod method on it
((returns+1).prod()-1)*100 #gives percentage return
(((returns+1).prod()-1)*100).round(2) #rounds it off to 2 decimal places
###Output
_____no_output_____
###Markdown
Annualization
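```The idea used in the cells below: if r is the return over one period and there are m such periods in a year, the annualized return is (1+r)**m - 1, with m=12 for monthly, m=4 for quarterly and m=252 (trading days) for daily returns```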
###Code
rm = 0.01
(1+rm)**12 - 1
rq = 0.04
(1+rq)**4 -1
rd = 0.0001
(1+rd)**252 -1
###Output
_____no_output_____ |
step_by_step_development.ipynb | ###Markdown
Binomial Mixture Model with Expectation-Maximization (EM) Algorithm Generating DataWe first generate some data points which are randomly drawn from a Binomial Mixture Model with two Binomial Distributions. Given $N_i$, the probability of $n_i$ is$P(n_i | N_i, \Theta) = \sum_{k=1}^{2}\pi_k \mathrm{Bino}(n_i|N_i, \theta_k)$, where the Binomial Distribution is$\mathrm{Bino}(n_i|N_i, \theta) = {N_i!\over n_i!(N_i-n_i)!} \theta^{n_i} (1-\theta)^{N_i-n_i}$,and the sum of $\pi$'s is unity, i.e.$\sum_{k=1}^{2} \pi_k = 1$
###Code
import numpy as np
import torch
from torch.distributions.binomial import Binomial
if torch.cuda.is_available():
print("cuda is available")
import torch.cuda as t
else:
print("cuda is unavailable")
import torch as t
S = int(1e3)
# the theta's the two Binomial Distributions
theta_1 = 0.5
theta_2 = 0.3
# the probabilities, pi's, of the two Binomial Distributions
pi_1 = 0.7
pi_2 = 1.0 - pi_1
# the list of (Ni| i =1, 2, ..., S), uniformly drawn between low and high
N_ls_all = t.randint(low=10, high=20, size=(S,))
N_ls_all = N_ls_all.type(t.FloatTensor)
# the list of theta, each theta is either theta_1 or theta_2. The probability of theta_i is pi_i
theta_ls = t.FloatTensor(np.random.choice([theta_1,theta_2], size=S, p=[pi_1,pi_2]))
# the list of (ni | i=1,2 ...,S)
n_ls_all = Binomial(N_ls_all, theta_ls).sample()
###Output
_____no_output_____
###Markdown
Make some figures to get some visual impression of the dataset.
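As a rough sanity check on the parameters chosen above, the mixture mean of $n_i/N_i$ is $\pi_1\theta_1 + \pi_2\theta_2 = 0.7\times0.5 + 0.3\times0.3 = 0.44$, so the histogram of $n/N$ should be centred around that value, with most of the mass coming from the dominant component ($\theta_1=0.5$).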
###Code
import matplotlib.pyplot as plt
%matplotlib inline
fig, axes = plt.subplots(1,2,figsize=(14,6))
axes[0].scatter(N_ls_all-n_ls_all, n_ls_all)
axes[0].set_xlabel("N-n",size=16)
axes[0].set_ylabel("n",size=16)
axes[0].tick_params(labelsize=14)
axes[1].hist(n_ls_all/N_ls_all, bins=20)
axes[1].set_xlabel("n/N", size=16)
axes[1].tick_params(labelsize=14)
axes[1].set_title("Histogram of n/N", size=16)
plt.show()
###Output
_____no_output_____
###Markdown
We split the dataset into train and validation.
###Code
# Split into train and validation sets
S = len(N_ls_all)
shuffled_indice = torch.randperm(S)
N_ls_shuffled = N_ls_all[shuffled_indice]
n_ls_shuffled = n_ls_all[shuffled_indice]
# percentage of train set.
train_frac = 0.7
train_index = int(0.7*S)
N_ls_train = N_ls_shuffled[0:train_index]
N_ls_valid = N_ls_shuffled[train_index:]
n_ls_train = n_ls_shuffled[0:train_index]
n_ls_valid = n_ls_shuffled[train_index:]
###Output
_____no_output_____
###Markdown
Calculating the `log_likelihood` The `log_likelihood` is the log of the probability of the parameters, $\Theta$, given the observed data, `N_ls` and `n_ls`. It is defined below,`log_likelihood` $= \ln(L(\Theta, {ni})) =\ln( P({ni} | \Theta)) = \sum_{i=1}^{S} \ln(\sum_{k=1}^{K} \pi_k * \mathrm{Binom}(n_i|N_i, \theta_k) )$
###Code
# calculate log_likelihood using a constant-shift trick that avoids underflow or overflow in
# log_sum_exp = log(sum_{i=1}^{S} exp(bi)). When bi >> 1, overflow leads to log_sum_exp = infty;
# when bi << -1, underflow leads to log_sum_exp = -infty.
def calc_logL(N_ls, n_ls, pi_list, theta_list):
'''
Input: N_ls is a [S] shape tensor = [N1, N2, ..., NS]
n_ls is a [S] shape tensor = [n1, n2, ..., nS]
pi_list is a [K] shape tensor = [pi_1, .., pi_K]
theta_list is a [K] shape tensor = [theta_1, ..., theta_K]
Output: log_likelihood of the parameters (pi and theta) given the observed data (N, n).
'''
S = len(N_ls)
K = len(pi_list)
# log_binom_mat has shape (S,K), element_{i,l} = log_Binomial(ni|Ni, theta_l)
# log with natural base.
log_binom_mat = Binomial(N_ls.reshape(S,1), theta_list.reshape(1,K)).log_prob(n_ls.reshape(S,1))
# mean_log_binom, the mean value of all elements in log_binom_mat.
c = torch.mean(log_binom_mat)
# binom_mat has shape (S,K), element_{i,l} = Binomial(ni|Ni, theta_l)
binom_mat = torch.exp(log_binom_mat - c)
# log_likelihood = sum_{i=1}^{S} log(prob_i), this is a real number
log_likelihood = S*c + torch.sum(torch.log(torch.matmul(binom_mat, pi_list)))
return log_likelihood
###Output
_____no_output_____
###Markdown
Calculating $P(z_i=m| n_i, \Theta_\mathrm{old})$

$P(z_i=m| n_i, \Theta_\mathrm{old}) = \left[\sum_{l=1}^{K} {\pi_{l,old}\over \pi_{m,old}}\left(\theta_{l,old}\over\theta_{m,old}\right)^{n_i}\left({1-\theta_{l,old}}\over{1-\theta_{m,old}}\right)^{N_i-n_i}\right]^{-1}$

We take advantage of the [broadcasting](https://pytorch.org/docs/stable/notes/broadcasting.html) behavior of torch tensors: most torch math operations broadcast automatically as long as the tensors are broadcastable. Broadcasting does not copy data in memory, so it is very efficient and much preferred over an explicit `for` loop.
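As a tiny standalone illustration of this broadcasting pattern (separate from the model code, using a made-up 2-element `theta`):

```python
import torch

theta = torch.tensor([0.5, 0.3])               # shape (K,) with K = 2
K = len(theta)
# dividing a (1,K) tensor by a (K,1) tensor broadcasts to a (K,K) tensor
# whose element_{m,l} equals theta_l / theta_m (no data is copied)
theta_ratio = torch.div(theta.reshape(1, K), theta.reshape(K, 1))
print(theta_ratio)   # tensor([[1.0000, 0.6000], [1.6667, 1.0000]])
```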
###Code
def calc_Posterior(N_ls, n_ls, pi_list, theta_list):
'''
Input: N_ls is a [S] shape tensor = [N1, N2, ..., NS]
n_ls is a [S] shape tensor = [n1, n2, ..., nS]
pi_list is a [K] shape tensor = [pi_1, .., pi_K]
theta_list is a [K] shape tensor = [theta_1, ..., theta_K]
Output: Posterior, a tensor with shape (K,S) and its element_{m,i} = P(zi=m|ni,Theta_old) which is
the posterior probability of the i-th sample belonging to the m-th Binomial distribution.
'''
# shape = (K,K) with theta_ratio_{m,l} = theta_l/theta_m, m-th row, l-th column
theta_ratio = torch.div(theta_list.reshape(1,K), theta_list.reshape(K,1))
# shape = (K,K), element_{ml} = (1-theta_l)/(1-theta_m)
unity_minus_theta_ratio = torch.div((1e0 - theta_list).reshape(1,K), (1e0 - theta_list).reshape(K,1))
# shape = (K,K), element_{m,l} = (theta_l/theta_m) * [(1-theta_l)/(1-theta_m)]
mixed_ratio = torch.mul(theta_ratio, unity_minus_theta_ratio)
# shape = (K,K,S) with element_{m,l,i} = [(theta_l/theta_m)*(1-theta_l)/(1-theta_m)]^ni
# its element won't be either 0 or infty no matther whether theta_l > or < theta_m
mixed_ratio_pow = torch.pow(theta_ratio.reshape(K,K,1), n_ls)
mixed_ratio_pow = torch.clamp(mixed_ratio_pow, min=0.0, max=1e15)
# shape = (K,K,S) with element_{m,l,i} = [ (1-theta_l)/(1-theta_m) ]^(Ni-2ni)
# its element may be infty if theta_l<<theta_m, or 0 if theta_l >> theta_m
unity_minus_theta_ratio_pow = torch.pow(unity_minus_theta_ratio.reshape(K,K,1), N_ls-2.0*n_ls)
unity_minus_theta_ratio_pow = torch.clamp(unity_minus_theta_ratio_pow, min=0.0, max=1e15)
# In below, we multiply the element of mixed_ratio_pow and the element of unity_minus_theta_ratio_pow,
# and there won't be nan caused by 0*infty or infty*0 because the element in mixed_ratio_pow won't be 0 or infty.
# Thus we make sure there won't be nan in Posterior.
# element-wise multiply, pow_tensor has shape(K,K,S), element_{m,l,i} = (theta_l/theta_m)^ni * [(1-theta_l)/(1-theta_m)]^(Ni-ni).
# Note that torch.mul(a, b) would broadcast if a and b are different in shape & they are
# broadcastable. If a and b are the same in shape, torch.mul(a,b) would operate element-wise multiplication.
pow_tensor = torch.mul(mixed_ratio_pow, unity_minus_theta_ratio_pow)
# pi_ratio has shape (K,K) with element_{m,l} = pi_l/pi_m
pi_ratio = torch.div(pi_list.reshape(1,K), pi_list.reshape(K,1))
# posterior probability tensor, Pzim = P(zi=m|ni,Theta_old)
# shape (K,S), element_{m,i} = P(zi=m|ni,Theta_old)
Posterior = torch.pow(torch.matmul(pi_ratio.reshape(K,1,K), pow_tensor), -1e0).reshape(K,S)
return Posterior
###Output
_____no_output_____
###Markdown
Update $\Theta \equiv \{(\pi_m, \theta_m)|m=1,2,...,K\}$ According to the EM Algorithm

The computational complexity of the EM Algorithm is $S\times K$ per iteration.

$\pi_m ={1\over S} \sum_{i=1}^{S} P(z_i=m| n_i, \Theta_{old})$

$\theta_m = {{\sum_{i=1}^{S} n_i P(z_i=m| n_i, \Theta_{old})}\over{\sum_{j=1}^{S} N_j P(z_j=m| n_j, \Theta_{old})}}$
###Code
def calc_params(N_ls, n_ls, Posterior):
'''
Input: N_ls, tensor of shape [S]
n_ls, tensor of shape [S]
Posterior, tensor of shape (K,S)
'''
# update pi_list
# torch.sum(tensor, n) sum over the n-th dimension of the tensor
# e.g. if tensor'shape is (K,S) and n=1, the resulting tensor has shape (K,)
# the m-th element is the sum_{i=1}^{S} tensor_{m,i}
pi_list = torch.sum(Posterior,1)/len(N_ls)
# update theta_list
theta_list = torch.div(torch.matmul(Posterior, n_ls), torch.matmul(Posterior, N_ls))
return pi_list, theta_list
###Output
_____no_output_____
###Markdown
We only want to train on the training set. So we make the following assignments.
###Code
N_ls = N_ls_train
n_ls = n_ls_train
###Output
_____no_output_____
###Markdown
Fitting the Data by a Binomial Mixture Model

The method fits the parameters $\Theta = \{ (\pi_k, \theta_k) | k=1, 2, ..., K\}$. We need to pre-set K. Here we set $K=2$. Of course, in reality we would not know the best $K$ to adopt. We will discuss how to choose $K$ after this section.

Step 1: Initializing the parameters $\Theta$

We denote $\Theta$ by `params` in this code.
###Code
# set K
K = 2
# choose a very small positive real number
small_value = 1e-6
# initialize pi's, make sure that the sum of all pi's is unity
# pi is drawn from a Uniform distribution bound by [small_value, 1)
from torch.distributions.uniform import Uniform
pi_list = Uniform(low=small_value, high=1e0).sample([K-1])
pi_K = t.FloatTensor([1e0]) - pi_list.sum()
pi_list = torch.cat([pi_list, pi_K], dim=0)
# initialize theta's, make sure that each theta satisfies 0<theta<1
from torch.distributions.normal import Normal
theta_list = torch.clamp(Normal(loc=0.5, scale=0.3).sample(t.IntTensor([K])), min=small_value, max=1e0-small_value)
# combine all pi and theta into a list of tuples called `params`, which is the capital Theta in my article
# params has the shape of K rows x 2 columns
params = torch.stack([pi_list, theta_list], dim=1)
###Output
_____no_output_____
###Markdown
Step 2: Setting Up Conditions for Stopping the Iteration

* Calculate the `log_likelihood`.
* Initialize the change of the `log_likelihood`, named `delta_log_likelihood`, and the iteration step `iter_step`.
* Set the lower bound for `delta_log_likelihood`, named `tolerance`, and the upper bound for `iter_step`, named `max_step`.
* Define conditions for the iteration to continue. If either condition fails, the iteration stops.
###Code
# calculate the initial log_likelihood
log_likelihood = calc_logL(N_ls, n_ls, pi_list, theta_list)
# initialize the change of log_likelihood named `delta_log_likelihood` and the iteration step called `iter_step`
delta_log_likelihood = torch.norm(log_likelihood)
iter_step = 0
# tolerance for the change of the log-likelihood
tolerance = 1e-6
# set the maximum steps for iteration, stop the iteration if the number of steps reaches `max_step`
max_step = int(1e2)
# The iteration stops when either of the following two conditions is broken first
cond_likelihood = (delta_log_likelihood > tolerance)
cond_step = t.BoolTensor([iter_step < max_step])
###Output
_____no_output_____
###Markdown
Step 3: Iteration using the EM Algorithm
###Code
import time
start_time = time.time()
while cond_step & cond_likelihood:
# posterior probability tensor, Pzim = P(zi=m|ni,Theta_old)
# shape (K,S), element_{m,i} = P(zi=m|ni,Theta_old)
Posterior = calc_Posterior(N_ls, n_ls, pi_list, theta_list)
# calculate the new pi_list and theta_list
pi_list_new, theta_list_new = calc_params(N_ls, n_ls, Posterior)
# calculate the new log_likelihood
log_likelihood_new = calc_logL(N_ls, n_ls, pi_list_new, theta_list_new)
# calculate the change of the log-likelihood
delta_log_likelihood = torch.norm(log_likelihood_new - log_likelihood)
# update params
pi_list = pi_list_new
theta_list = theta_list_new
# update log_likelihood
log_likelihood = log_likelihood_new
# increase iter_step by 1
iter_step += 1
# update the conditions for the while loop
# cond_params = (delta_params > epsilon)
cond_likelihood = (delta_log_likelihood > tolerance)
cond_step = t.BoolTensor([iter_step < max_step])
if iter_step % 1 == 0:
print(f"Iteration {iter_step}:")
print(f"delta_log_likelihood = {delta_log_likelihood}")
print(f"log_likelihood ={log_likelihood}")
params = torch.stack([pi_list, theta_list], dim=1)
print(f"{params}")
print(f"used {time.time()-start_time}")
###Output
_____no_output_____
###Markdown
Fitting the number of components, K

The EM Algorithm above fits a BMM model to the data with a preset K, the number of components. However, in reality, we usually do not know K. Therefore, we need to find an optimal K. We look at three metrics (see the short model-selection sketch at the end of this cell):

* `log_likelihood`: $\log L$, the log-likelihood of the parameters $\Theta$ given the observed data set $\{n_i, N_i|i=1,..,S\}$.
* Akaike Information Criterion (AIC): AIC $= -{2\over S}\log L + {2 (2K+1)\over S}$, where $(2K+1)$ is the number of parameters in a BMM model.
* Bayesian Information Criterion (BIC): BIC $= -2\log L + (2K+1)\log S$.

The more complicated the model is, the more parameters it has, and thus the greater $(2K+1)$. A more complicated model generally fits the data better and thus yields a greater `log_likelihood`, $\log L$. However, a complicated model may overfit the data, meaning that it fits the training data set well but generalizes poorly to new data. Therefore, one should penalize the complexity of the model, which is what both AIC and BIC do.

The following cell fits a number of BMM models on the training data set with various values of $K$. The three metrics above are calculated and stored in `logL_list`, `AIC_list`, `BIC_list`. Then the trained BMM models are applied to the validation set and the corresponding metrics are calculated and stored in `logL_val_list`, `AIC_val_list`, `BIC_val_list`.
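Once the loop below has filled these lists, the simplest selection rule is to take the $K$ with the lowest validation BIC (or AIC). A minimal sketch, assuming the lists have already been populated by the loop below:

```python
# hypothetical post-processing step, to be run after the fitting loop below
import numpy as np

bic_val = np.array([float(b) for b in BIC_val_list])   # convert 0-dim tensors to floats
best_K = list(K_list)[int(np.argmin(bic_val))]
print(f"K with the lowest validation BIC: {best_K}")
```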
###Code
# ------------------- Initialization -------------------------------------
# Sample size
S = len(N_ls)
S_val = len(N_ls_valid)
K_list = range(2, 9)
params_list = []
logL_list = []
AIC_list = []
BIC_list = []
logL_val_list = []
AIC_val_list = []
BIC_val_list = []
# Set K, the number of Binomial distributions in the to-be-fitted mixture model
for K in K_list:
'''
Initialize theta_list and pi_list
'''
    # convert the torch tensors to numpy arrays for the initialization step
    N_array = N_ls.cpu().numpy()
    n_array = n_ls.cpu().numpy()
    # list of n/N sorted in ascending order
    ratio_ls = np.sort(n_array/N_array)
# reproducibility
np.random.seed(seed=123)
# pick K random integer indice, [index_1, ..., index_K]
random_indice = np.sort(np.random.choice(len(ratio_ls),K))
# theta are the ratio at the random indice, { ratio_ls[index_k] | k=1,2,...,K }
theta_array = ratio_ls[random_indice]
# the proportion of the midpoint of each pair of consecutive indice
# (index_k + index_{k+1})/2/len(ratio_ls), for k=1,2,...,K-1
acc_portion_ls = (random_indice[1:]+random_indice[:-1])/2.0/len(ratio_ls)
acc_portion_ls = np.append(acc_portion_ls, 1.0)
# initialize pi_list using the portions of indice
pi_array = np.insert(acc_portion_ls[1:] - acc_portion_ls[:-1], obj=0, values=acc_portion_ls[0])
print(f"initial theta's are {theta_array}")
print(f"initial pi's are {pi_array}")
print(f"sum of all pi's is {pi_array.sum()}.")
# convert numpy arrays to torch tensors.
theta_list = t.FloatTensor(theta_array)
pi_list = t.FloatTensor(pi_array)
print(f"theta_list is on device {theta_list.get_device()}")
print(f"pi_list is on device {pi_list.get_device()}")
# combine all pi and theta into a list of tuples called `params`, which is referred to by Theta as well
# params has the shape of K rows x 2 columns
params = torch.stack([pi_list, theta_list], dim=1)
# ---------------Setting stop conditions for the EM-Algorithm iterations---------------
'''
Conditions for the EM-Algorithm Iteration to Continue.
'''
# calculate the initial log_likelihood
log_likelihood = calc_logL(N_ls, n_ls, pi_list, theta_list)
log_likelihood_init = log_likelihood
print(f"Initial log_likelihood is {log_likelihood}")
# initialize the change of log_likelihood named `delta_log_likelihood` and the iteration step called `iter_step`
delta_log_likelihood = torch.abs(log_likelihood)
iter_step = 0
delta_params = torch.norm(params.reshape(2*K,))
# tolerance for the change of the log-likelihood or the change of params, depending on
# which condition you decide to use
logL_eps = 1e-2
param_eps = 1e-5
# set the maximum steps for iteration, stop the iteration if the number of steps reaches `max_step`
max_step = int(1e4)
# we define 3 conditions:
# the condition below is that "the log_likelihood are still changing much"
if torch.isnan(delta_log_likelihood) :
cond_likelihood = True
else:
cond_likelihood = (torch.abs(delta_log_likelihood) > logL_eps)
# the condition below is that "the iteration steps have not exceeded max_step"
cond_step = t.BoolTensor([iter_step < max_step])
# the condition below is that "the params are still changing much"
cond_params = (delta_params >param_eps)
# -------------- Iteration Loop -------------------------------------------------------
'''
EM-Algorithm Iterations
'''
import time
start_time = time.time()
# the second condition below may be either `cond_likelihood` or `cond_params`
while cond_step & cond_params:
# posterior probability tensor, Pzim = P(zi=m|ni,Theta_old)
# shape (K,S), element_{m,i} = P(zi=m|ni,Theta_old)
Posterior = calc_Posterior(N_ls, n_ls, pi_list, theta_list)
# print(f"Does Posterior contain nan? {torch.isnan(Posterior).any()}")
# calculate the new pi_list and theta_list
pi_list_new, theta_list_new = calc_params(N_ls, n_ls, Posterior)
params_new = torch.stack([pi_list_new, theta_list_new], dim=1)
# calculate the new log_likelihood
log_likelihood_new = calc_logL(N_ls, n_ls, pi_list_new, theta_list_new)
# calculate the change of the log-likelihood
delta_log_likelihood = log_likelihood_new - log_likelihood
# calculate the change of params
delta_params = torch.norm(params_new.reshape(2*K,)-params.reshape(2*K,))
# update params
pi_list = pi_list_new
theta_list = theta_list_new
params = params_new
# update log_likelihood
log_likelihood = log_likelihood_new
# increase iter_step by 1
iter_step += 1
# update the conditions for the while loop
if torch.isnan(delta_log_likelihood) :
cond_likelihood = True
else:
cond_likelihood = (torch.abs(delta_log_likelihood) > logL_eps)
cond_step = t.BoolTensor([iter_step < max_step])
cond_params = (delta_params > param_eps)
# if iter_step % 5 == 0:
# print(f"Iteration {iter_step}:")
# print(f"logL = {log_likelihood:.6f}")
# # print(f"logL - logL_init = {log_likelihood - log_likelihood_init}")
# print(f"delta_logL = {delta_log_likelihood:.6f}")
# print(f"delta_params = {delta_params:.6f}")
# print(f"{params}")
# calculate Akaike Information Criterion (AIC)
AIC = -2.0/float(S)*log_likelihood + 2.0*(2.0*float(K)+1.0)/float(S)
# Bayesian Information Criterion
BIC = -2.0*log_likelihood + np.log(float(S))*(2.0*float(K)+1.0)
# calculate metrics for the validation sets
log_likelihood_val = calc_logL(N_ls_valid, n_ls_valid, pi_list, theta_list)
AIC_val = -2.0/float(S_val)*log_likelihood_val + 2.0*(2.0*float(K)+1.0)/float(S_val)
BIC_val = -2.0*log_likelihood_val + np.log(float(S_val))*(2.0*float(K)+1.0)
print(f"used {time.time()-start_time} seconds.")
print("Final Results:")
print(f"Iteration {iter_step}:")
print(f"logL = {log_likelihood:.6f}")
# print(f"logL - logL_init = {log_likelihood - log_likelihood_init}")
print(f"delta_log_likelihood = {delta_log_likelihood:.6f}")
print(f"delta_params = {delta_params:.6f}")
params = torch.stack([pi_list, theta_list], dim=1)
print(f"{params}")
    # print(f"log_binom_min, max, mean = {log_binom_minmaxmean(N_ls, n_ls, pi_list, theta_list)}")  # helper not defined in this notebook
print(f"Akaike Information Criterion (AIC) = {AIC:.6f}")
print(f"Bayesian Information Criterion (BIC) = {BIC:.6f}")
logL_list.append(log_likelihood)
AIC_list.append(AIC)
BIC_list.append(BIC)
params_list.append(params)
logL_val_list.append(log_likelihood_val)
AIC_val_list.append(AIC_val)
BIC_val_list.append(BIC_val)
###Output
_____no_output_____ |
answers/rl_workshop_cliff_world.ipynb | ###Markdown
Cliff World

In this notebook, we will see the difference between Q-learning and SARSA in solving the famous Cliff World problem.

First, we have to make sure we are connected to the right **Python 3 runtime and using the GPU** (click the 'Runtime' tab and choose 'Change runtime type'), then import the required packages (all are already installed in Google Colab).

The policy we're going to use is the epsilon-greedy policy, where the agent takes the optimal action with probability $(1-\epsilon)$ and otherwise samples an action at random. Note that the agent __can__ occasionally sample the optimal action during random sampling by pure chance.
###Code
#Setting up the environment, ignore the warning
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
!bash ../xvfb start
%env DISPLAY=:1
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
from collections import defaultdict
import random, math
import gym, gym.envs.toy_text
env = gym.envs.toy_text.CliffWalkingEnv()
n_actions = env.action_space.n
print(env.__doc__)
# Our cliffworld has one difference from what's on the image: there is no wall.
# Agent can choose to go as close to the cliff as it wishes. x:start, T:exit, C:cliff, o: flat ground
env.render()
###Output
_____no_output_____
###Markdown
Task 1: Imprement Q-learning Agent
###Code
class QLearningAgent:
def __init__(self, alpha, epsilon, discount, get_legal_actions):
"""
Q-Learning Agent
based on http://inst.eecs.berkeley.edu/~cs188/sp09/pacman.html
Instance variables you have access to
- self.epsilon (exploration prob)
- self.alpha (learning rate)
- self.discount (discount rate aka gamma)
Functions you should use
- self.get_legal_actions(state) {state, hashable -> list of actions, each is hashable}
which returns legal actions for a state
- self.get_qvalue(state,action)
which returns Q(state,action)
- self.set_qvalue(state,action,value)
which sets Q(state,action) := value
!!!Important!!!
Note: please avoid using self._qValues directly.
There's a special self.get_qvalue/set_qvalue for that.
"""
self.get_legal_actions = get_legal_actions
self._qvalues = defaultdict(lambda: defaultdict(lambda: 0))
self.alpha = alpha
self.epsilon = epsilon
self.discount = discount
def get_qvalue(self, state, action):
""" Returns Q(state,action) """
return self._qvalues[state][action]
def set_qvalue(self,state,action,value):
""" Sets the Qvalue for [state,action] to the given value """
self._qvalues[state][action] = value
#---------------------START OF YOUR CODE---------------------#
def get_value(self, state):
"""
Compute your agent's estimate of V(s) using current q-values
V(s) = max_over_action Q(state,action) over possible actions.
Note: please take into account that q-values can be negative.
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return 0.0
if len(possible_actions) == 0:
return 0.0
Q_allA = [self.get_qvalue(state, a) for a in possible_actions]
value = max(Q_allA)
return value
def update(self, state, action, reward, next_state):
"""
You should do your Q-Value update here:
Q(s,a) := (1 - alpha) * Q(s,a) + alpha * (r + gamma * V(s'))
"""
#agent parameters
gamma = self.discount
learning_rate = self.alpha
new_Q = (1 - learning_rate) * self.get_qvalue(state,action) + \
learning_rate * (reward + gamma * self.get_value(next_state))
self.set_qvalue(state, action, new_Q)
def get_best_action(self, state):
"""
Compute the best action to take in a state (using current q-values).
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
Q_allA = {a: self.get_qvalue(state, a) for a in possible_actions}
best_action = max(Q_allA, key = lambda a: Q_allA[a])
return best_action
def get_action(self, state):
"""
Compute the action to take in the current state, including exploration.
With probability self.epsilon, we should take a random action.
otherwise - the best policy action (self.getPolicy).
Note: To pick randomly from a list, use random.choice(list).
To pick True or False with a given probablity, generate uniform number in [0, 1]
and compare it with your probability
"""
# Pick Action
possible_actions = self.get_legal_actions(state)
action = None
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
#agent parameters:
epsilon = self.epsilon
if random.random() < epsilon:
chosen_action = random.choice(possible_actions)
else:
chosen_action = self.get_best_action(state)
return chosen_action
###Output
_____no_output_____
###Markdown
Task 2: Implement SARSA Agent
###Code
class EVSarsaAgent(QLearningAgent):
"""
An agent that changes some of q-learning functions to implement Expected Value SARSA.
Note: this demo assumes that your implementation of QLearningAgent.update uses get_value(next_state).
If it doesn't, please add
def update(self, state, action, reward, next_state):
and implement it for Expected Value SARSA's V(s')
"""
def get_value(self, state):
"""
Returns Vpi for current state under epsilon-greedy policy:
V_{pi}(s) = sum _{over a_i} {pi(a_i | s) * Q(s, a_i)}
Hint: all other methods from QLearningAgent are still accessible.
"""
epsilon = self.epsilon
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return 0.0
if len(possible_actions) == 0:
return 0.0
num_of_a = len(possible_actions)
piQ_allA = {a: epsilon * self.get_qvalue(state, a) / (num_of_a - 1)
for a in possible_actions}
piQ_allA[self.get_best_action(state)] = (1 - epsilon) * self.get_qvalue(state,
self.get_best_action(state))
state_value = sum(piQ_allA.values())
return state_value
###Output
_____no_output_____
###Markdown
Play and train and evaluate

Let's now see how our algorithm compares against Q-learning in the case where we force the agent to explore all the time.

*image by cs188*
###Code
def play_and_train(env,agent,t_max=10**4):
"""This function should
- run a full game, actions given by agent.getAction(s)
- train agent using agent.update(...) whenever possible
- return total reward"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
a = agent.get_action(s)
next_s,r,done,_ = env.step(a)
agent.update(s, a, r, next_s)
s = next_s
total_reward +=r
if done:break
return total_reward
agent_sarsa = EVSarsaAgent(alpha=0.25, epsilon=0.2, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
agent_ql = QLearningAgent(alpha=0.25, epsilon=0.2, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
from IPython.display import clear_output
from pandas import DataFrame
moving_average = lambda x, span=100: DataFrame({'x':np.asarray(x)}).x.ewm(span=span).mean().values
rewards_sarsa, rewards_ql = [], []
for i in range(5000):
rewards_sarsa.append(play_and_train(env, agent_sarsa))
rewards_ql.append(play_and_train(env, agent_ql))
#Note: agent.epsilon stays constant
if i %100 ==0:
clear_output(True)
print('EVSARSA mean reward =', np.mean(rewards_sarsa[-100:]))
print('QLEARNING mean reward =', np.mean(rewards_ql[-100:]))
plt.title("epsilon = %s" % agent_ql.epsilon)
plt.plot(moving_average(rewards_sarsa), label='ev_sarsa')
plt.plot(moving_average(rewards_ql), label='qlearning')
plt.grid()
plt.legend()
plt.ylim(-500, 0)
plt.show()
###Output
_____no_output_____
###Markdown
Let's now see what the algorithms learned by visualizing their actions at every state.
###Code
def draw_policy(env, agent):
""" Prints CliffWalkingEnv policy with arrows. Hard-coded. """
n_rows, n_cols = env._cliff.shape
actions = '^>v<'
for yi in range(n_rows):
for xi in range(n_cols):
if env._cliff[yi, xi]:
print(" C ", end='')
elif (yi * n_cols + xi) == env.start_state_index:
print(" X ", end='')
elif (yi * n_cols + xi) == n_rows * n_cols - 1:
print(" T ", end='')
else:
print(" %s " % actions[agent.get_best_action(yi * n_cols + xi)], end='')
print()
print("Q-Learning")
draw_policy(env, agent_ql)
print("SARSA")
draw_policy(env, agent_sarsa)
###Output
_____no_output_____ |
Workshop/.ipynb_checkpoints/ML_workshop-checkpoint.ipynb | ###Markdown
Machine Learning Workshop

Here we will walk through an example of a machine learning workflow following five steps. For more detailed information on the Shiu Lab's ML pipeline, including explanations of all output files, check out the [README](https://github.com/ShiuLab/ML-Pipeline).

***

Step 0. Set up Jupyter notebook & software

Check out this [**guide**](https://github.com/ShiuLab/ML-Pipeline/tree/master/Workshop) to learn how to set up Jupyter notebook and the software needed to run the Shiu Lab's ML pipeline.

***

**What do we want to predict?** If a gene is annotated as being involved in specialized or general metabolism.

**What are the labeled instances?** Tomato genes annotated as being involved in specialized or general metabolism by TomatoCyc.

**What are the predictive features?**
- duplication information (e.g. number of paralogs, gene family size)
- sequence conservation (e.g. nonsynonymous/synonymous substitution rates between homologs)
- gene expression (e.g. breadth, stress specific, co-expression)
- protein domain content (e.g. p450, Aldedh)
- epigenetic modification (e.g. H3K23ac histone marks)
- network properties (protein-protein interactions, network connectivity)

**What data do we have?**
- 532 tomato genes with specialized metabolism annotation by TomatoCyc
- 2,318 tomato genes with general metabolism annotation by TomatoCyc
- 4,197 features (we are only using a subset of **564** for this workshop)

***
###Code
## A. Let's look at the data (note, you can do this in Excel or R!)
import pandas as pd
d = pd.read_table('data.txt', sep='\t', index_col = 0)
print('Shape of data (rows, cols):')
print(d.shape)
print('\nSnapshot of data:')
print(d.iloc[:6,:5]) # prints first 6 rows and 5 columns
print('\nList of class labels')
print(d['Class'].value_counts())
###Output
_____no_output_____
###Markdown
**Things to notice:**
- Our data has NAs. ML algorithms cannot handle NAs. We either need to drop or impute NA values!
- We have binary, continuous, and categorical features in this dataset. A perk of ML models is that they can integrate multiple datatypes in a single model.
- However, before being used as input, a categorical feature needs to be converted into a set of binary features using an approach called [one-hot-encoding](https://www.kaggle.com/dansbecker/using-categorical-data-with-one-hot-encoding) (see the short pandas sketch at the end of this cell).

*Before One-Hot Encoding:*

| ID | Class | Weather |
|--- |--- |--- |
| instance_A | 1 | sunny |
| instance_B | 0 | overcast |
| instance_C | 0 | rain |
| instance_D | 1 | sunny |

*After One-Hot Encoding:*

| ID | Class | Weather_sunny | Weather_overcast | Weather_rain |
|--- |--- |--- |--- |--- |
| instance_A | 1 | 1 | 0 | 0 |
| instance_B | 0 | 0 | 1 | 0 |
| instance_C | 0 | 0 | 0 | 1 |
| instance_D | 1 | 1 | 0 | 0 |

***

Automated data cleaning: ML_preprocess.py

Input
```
-df: your data table
-na_method: how you want to impute NAs (options: drop, mean, median, mode)
-h: show more options
```
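As an aside (the `ML_preprocess.py` call below already takes care of this), one-hot encoding can also be done directly in pandas; a minimal sketch using a made-up `Weather` column mirroring the tables above:

```python
import pandas as pd

# toy frame mirroring the 'Before One-Hot Encoding' table above (hypothetical data)
toy = pd.DataFrame({"Class": [1, 0, 0, 1],
                    "Weather": ["sunny", "overcast", "rain", "sunny"]},
                   index=["instance_A", "instance_B", "instance_C", "instance_D"])
one_hot = pd.get_dummies(toy, columns=["Weather"])
print(one_hot)   # adds Weather_sunny, Weather_overcast, Weather_rain columns
```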
###Code
# B. Drop/Impute NAs and one-hot-encode categorical features
%run ../ML_preprocess.py -df data.txt -na_method median
###Output
_____no_output_____
###Markdown
***

Set aside instances for testing

We want to set aside a subset of our data to use to test how well our model performs. Note that this is done before feature engineering, parameter selection, or model training. This will ensure our performance metric is entirely independent from our modeling!

Automated selection of test set: test_set.py

Input:
```
-df: your data table
-use: what class labels to include in the test set (we don't want to include unknowns!)
-type: (c) classification or (r) regression
-p: What percent of instances from each class to select for test (0.1 = 10%)
-save: save name for test set
```
###Code
# C. Define test set
%run ../test_set.py -df data_mod.txt \
-use gen,special \
-type c \
-p 0.1 \
-save test_genes.txt
###Output
_____no_output_____
###Markdown
***

While one major advantage of ML approaches is that they are robust when the number of features is very large, there are cases where removing uninformative features or selecting only the best features may help you better answer your question. One common issue we see with using feature selection for machine learning is using the whole dataset to select the best features, which results in overfitting! **Be sure you specify your test set so that this data is not used for feature selection!**

Automated feature selection: Feature_Selection.py

Input
```
-df: your data table
-test: what instances to hold out (i.e. test instances!)
-cl_train: labels to include in training the feature selection algorithm
-type: (c) classification or (r) regression
-alg: what feature selection algorithm to use (e.g. lasso, elastic net, random forest)
-p: Parameter specific to different algorithms (use -h for more information)
-n: Number of features to select (unless the algorithm does this automatically)
-save: save name for list of selected features
```

Here we will use one of the most common feature selection algorithms: LASSO. LASSO requires the user to select the level of sparsity (-p) they want to induce during feature selection, where a larger value will result in more features being selected and a smaller value in fewer features being selected. You can play around with this value to see what it does for your data.
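The `Feature_Selection.py` call below handles this for you; purely to illustrate the idea (this is not the pipeline's exact implementation), an L1-penalized selection in scikit-learn on made-up data might look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel

# tiny synthetic stand-in for the real feature table (hypothetical data)
rng = np.random.RandomState(0)
X = rng.rand(100, 20)
y = (X[:, 0] > 0.5).astype(int)

l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
selector = SelectFromModel(l1_model).fit(X, y)
print(selector.get_support())   # boolean mask of the features kept by the L1 penalty
```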
###Code
%run ../Feature_Selection.py -df data_mod.txt \
-test test_genes.txt \
-cl_train special,gen \
-type c \
-alg lasso \
-p 0.01 \
-save top_feat_lasso.txt
%run ../Feature_Selection.py -df data_mod.txt \
-test test_genes.txt \
-cl_train special,gen \
-type c \
-alg random \
-n 11 \
-save rand_feat.txt
###Output
_____no_output_____
###Markdown
***

Next we want to determine which ML algorithm we should use and what combination of hyperparameters for those algorithms works best. Importantly, at this stage we **only assess our model performance on the validation data** in order to ensure we aren't just selecting the algorithm that works best on our held-out testing data. The pipeline will automatically withhold the testing data from the parameter selection (i.e. grid search) step.

Note, the pipeline **automatically "balances" your data**, meaning it pulls the same number of instances of each class for training. This avoids biasing the model to just predict everything as the more common class. This is a major reason why we want to run multiple replicates of the model!

Algorithm Selection

The machine learning algorithms in the ML_Pipeline are implemented with [SciKit-Learn](https://scikit-learn.org/stable/), which has excellent resources to learn more about the ins and outs of these algorithms.

**Why is algorithm selection useful?** ML models are able to learn patterns from data without being explicitly programmed to look for those patterns. ML algorithms differ in what patterns they excel at finding. For example, SVM is limited to linear relationships between features and labels, while RF, because of its hierarchical structure, is able to model interactive patterns between your features. Furthermore, algorithms vary in their complexity and the amount of training data that is needed to train them well.

[Hyper]-Parameter Selection

Most ML algorithms have internal parameters that need to be set by the user. There are two general strategies for parameter selection: the grid search (the default option) and the random search (use "-gs_type random").

*Image: Bergstra & Bengio 2012; used under CC-BY license*

Automated Training and Validation

Training and validation is done using a [cross-validation (CV)](https://towardsdatascience.com/cross-validation-70289113a072) scheme. CV is useful because it makes good use of our data (i.e. uses all non-test data for training at some point) but also makes sure we are selecting the best parameters/algorithms on models that aren't overfit to the training data (the original illustration used 10 CV folds). A minimal sketch of what a grid search over hyperparameters looks like under the hood is given at the end of this cell.

ML_classification.py (similar to ML_regression.py)

**Input:**
```
-df: your data table
-test: what instances to hold out (i.e. test instances)
-cl_train: labels to include in training the feature selection algorithm
-alg: what ML algorithm to use (e.g. SVM, RF, LogReg)
-cv: Number of cross-validation folds (default = 10, use fewer if data set is small)
-n: Number of replicates of the balanced cross-validation scheme to run (default = 100)
-save: Name to save output to (will over-write old files)
```

*There are many functions available within the pipeline that are not described in this workshop. For more options run:*
```
python ML_classification.py -h
```
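Purely as an illustration of what the grid-search step does under the hood (this is not the pipeline's code), a bare-bones scikit-learn grid search over SVM hyperparameters with cross-validation looks like:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# tiny synthetic stand-in data (hypothetical)
rng = np.random.RandomState(0)
X = rng.rand(200, 10)
y = (X[:, 0] > 0.5).astype(int)

param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```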
###Code
%run ../ML_classification.py -df data_mod.txt \
-test test_genes.txt \
-cl_train special,gen \
-alg SVM \
-cv 5 \
-n 10 \
-save metab_SVM
###Output
_____no_output_____
###Markdown
Results Breakdown

There are dozens of [performance metrics](https://scikit-learn.org/stable/modules/model_evaluation.html) that can be used to assess how well an ML model works. While the best metric for you depends on the type of question you are asking, some of the most generally useful metrics include the area under the Receiver Operating Characteristic curve (AUC-ROC), the area under the Precision-Recall curve (AUC-PRc), and the F-measure (F1), defined at the end of this cell.

Running the same script (only changing **-alg XXX**), average performance on the validation data using other algorithms:

| Alg | F1 | AUC-ROC |
|--- |--- |--- |
| RF | 0.787 | 0.824 |
| SVMpoly | 0.833 | 0.897 |
| SVMrbf | 0.855 | 0.905 |
| SVM | 0.856 | 0.911 |

***SVM performed best on the validation data so we will continue with that algorithm!***

Now that we have our best performing algorithm, we will run the pipeline one more time, but with more replicates (note, I still just use 10 here for time!) and we will use it to predict our unknown genes.

**Additional input:**
```
- apply: List of label names to apply trained model to (i.e. all, or 'unknown')
- plots: True/False if you want the pipeline to generate performance metric plots (default = F)
```
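For reference, the F-measure reported in the table above is the harmonic mean of precision and recall, $F_1 = 2\cdot\dfrac{\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}}$, and the AUC values are the areas under the ROC and precision-recall curves, with higher values indicating better performance.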
###Code
%run ../ML_classification.py -df data_mod.txt \
-test test_genes.txt \
-cl_train special,gen \
-alg SVM \
-cv 5 \
-n 10 \
-apply unknown \
-plots T \
-save metab_SVM
###Output
_____no_output_____
###Markdown
**Let's check out our results...**

Here are the files that are output from the model:
- **data.txt_results:** A detailed look at the model that was run and its performance.
- **data.txt_scores:** The probability score for each gene (i.e. how confidently it was predicted) and the final classification for each gene, including the unknowns the model was applied to.
- **data.txt_imp:** The importance of each feature in your model.
- **data.txt_GridSearch:** Detailed results from the parameter grid search.
- **data.txt_BalancedID:** A list of the genes that were included in each replicate after downsampling to balance the model.

*For a detailed description of the content of the pipeline output see the [README](../README.md)*

***

What if we use fewer features?

Additional input:
```
- feat: List of features to use.
```
###Code
%run ../ML_classification.py -df data_mod.txt \
-test test_genes.txt \
-cl_train special,gen \
-alg SVM \
-cv 5 \
-n 10 \
-feat top_feat_lasso.txt \
-save metab_SVM_lasso10
%run ../ML_classification.py -df data_mod.txt \
-test test_genes.txt \
-cl_train special,gen \
-alg SVM \
-cv 5 \
-n 10 \
-feat rand_feat.txt_11 \
-save metab_SVM_rand
###Output
_____no_output_____
###Markdown
Visualizing Your Results

There are a number of visualization tools available in the ML-Pipeline (see ML_Postprocessing). Here we will use ML_plots.

**ML_plots.py input:**
```
-save: Name to save output figures
-cl_train: positive and negative classes
-names: short names to call each model being included
-scores: path to name_scores.txt files to include
```
###Code
%run ../scripts_PostAnalysis/ML_plots.py -save compare_SVM \
-cl_train special gen \
-names All LASSO Random \
-scores metab_SVM_scores.txt metab_SVM_lasso10_scores.txt metab_SVM_rand_scores.txt
###Output
_____no_output_____ |
combinatorial_tracing/ImageAnalysis/.ipynb_checkpoints/Part2__FittingAndDecoding-checkpoint.ipynb | ###Markdown
__Author:__ Bogdan Bintu
__Email:__ [email protected]
__Date:__ 3/4/2020

Segment DAPI images
###Code
import sys,cv2,os,glob
import tifffile
sys.path.append(os.path.dirname(os.getcwd()))
from tqdm import tqdm_notebook as tqdm
import scipy.ndimage as ndi
from skimage.feature import peak_local_max
from skimage.morphology import watershed
from skimage.segmentation import random_walker
from scipy.spatial.distance import cdist
import matplotlib.pylab as plt
### useful functions
def get_frame(dax_fl,ind_z=1,sx=2048,sy=2048):
"returns single frame of a dax file at frame ind_z"
f = open(dax_fl, "rb")
bytes_frame = sx*sy*2
f.seek(bytes_frame*ind_z)
im_ = np.fromfile(f,dtype=np.uint16,count=sx*sy).reshape([sx,sy]).swapaxes(0,1)
f.close()
return im_
def im2d_to_infocus(im_,bs = 11,mbs=7,th_b = 1.6,th_s = 0.15,plt_val=False):
    """Takes a 2d image and thresholds it based on the level of the signal and the local standard deviation of the signal.
    This is used to threshold the image prior to cell segmentation.
    """
im_sq_blur = cv2.blur(im_*im_,(bs,bs))
im_blur_sq = cv2.blur(im_,(bs,bs))
im_blur_sq *=im_blur_sq
im_std = np.sqrt(im_sq_blur - im_blur_sq)
im__ = (im_std<th_s)&(im_<th_b)
im_in = np.array(1-im__,dtype=np.uint8)
im_in = cv2.medianBlur(im_in,mbs)
if plt_val:
plt.figure()
plt.plot(im_std.ravel(),im_.ravel(),'o',alpha=0.01)
plt.show()
plt.figure(figsize=(15,15))
plt.imshow(im_,cmap='gray')
plt.contour(im_in,[0.5],colors=['r'])
plt.show()
return im_in
master_folder=r'\\10.245.74.218\Raw_data\Bogdan\7_27_2019_IMR90RNA'
analysis_folder = master_folder+'-Analysis'
analysis_folder = analysis_folder+os.sep+'_CellSegm_Analysis'
if not os.path.exists(analysis_folder):
os.makedirs(analysis_folder)
H0_folder = glob.glob(master_folder+os.sep+'H*B,B')[0]
dax_files = glob.glob(H0_folder+os.sep+'*.dax')
###Output
_____no_output_____
###Markdown
Check a frame of a field of view of the nuclear signal
###Code
dapi_mean = np.load(analysis_folder+os.sep+'dapi_mean.npy')
dax_file = dax_files[60]
im = get_frame(dax_file,ind_z=5*45-1)
im=im/dapi_mean
plt.figure(figsize=(20,20))
plt.imshow(im,vmax=6)
plt.show()
###Output
_____no_output_____
###Markdown
Check parameters for initial thresholding
###Code
im_in = im2d_to_infocus(im[::2,::2],bs = 10,mbs=7,th_b = 3.5,th_s = 0.4,plt_val=True)
###Output
_____no_output_____
###Markdown
Core functions for cell segmentation
###Code
def cast_uint8(im,min_=None,max_=None):
im_ = np.array(im,dtype=np.float32)
if min_ is None: min_ = np.min(im)
if max_ is None: max_ = np.max(im)
delta = max_-min_
if delta==0: delta =1
im_ = (im-min_)/delta
im_ = (np.clip(im_,0,1)*255).astype(np.uint8)
return im_
def save_3dSegmentation_tif(im_base,imcells3d,imcells3d_lims,save_file,min_=0,max_=2):
im_overlay = cast_uint8(im_base,min_=min_,max_=max_)
im_overlay = np.array([im_overlay,im_overlay,im_overlay]).swapaxes(0,-1).swapaxes(0,1).swapaxes(1,2)
for index in range(len(imcells3d_lims)):
zm,zM,xm,xM,ym,yM = imcells3d_lims[index]
imcells3d_red = imcells3d[zm:zM,xm:xM,ym:yM]==index+1
for zmed in range(zM-zm):
if zmed<len(imcells3d_red):
im_2d = imcells3d_red[zmed]
cont_results = cv2.findContours(im_2d.astype(np.uint8), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contour = cont_results[0]#np.squeeze()
#print contour,cont_results
if len(contour)>0:
base_im = np.zeros([xM-xm,yM-ym,3],dtype = np.uint8)
cont_im = cv2.polylines(base_im, contour,1, (128,0,0),2)
xs,ys = np.where(im_2d)
cm = (int(np.mean(ys)),int(np.mean(xs)))
cv2.putText(cont_im,str(index+1),cm,
cv2.FONT_HERSHEY_SIMPLEX,0.33,(0,255,255),1,cv2.LINE_AA)
im_overlay_ = im_overlay[zm+zmed,xm:xM,ym:yM]
im_overlay[zm+zmed,xm:xM,ym:yM] = np.clip(im_overlay_+cont_im*1.,0,255).astype(np.uint8)
tifffile.imwrite(save_file,im_overlay,compress=0)
def segment_daxfl(dax_file,save_file):
print("Loading:",dax_file)
im = get_frame(dax_file,ind_z=5*45-1)
im=im/dapi_mean
im_mask_2d = im2d_to_infocus(im[::2,::2],bs = 10,mbs=7,th_b = 3.5,th_s = 0.4,plt_val=False)
#im_mask = np.swapaxes(im_mask,0,1)
#im_mask = np.array([cv2.medianBlur(im_,11) for im_ in im_mask ])
#im_mask_2d = np.max(im_mask,0)>0
im_mask_2d = ndi.binary_fill_holes(im_mask_2d>0)
dist = ndi.distance_transform_edt(im_mask_2d) #distance transformation for watershed
local_max = peak_local_max(dist, indices = False, min_distance=20)
x,y = np.where(local_max)
X=np.array([x,y]).T
distM = cdist(X,X)
distM[range(len(X)),range(len(X))]=np.inf
xp,yp = np.where(distM<30)
ik=np.setdiff1d(np.arange(len(x)),xp[xp>yp])
x,y = x[ik],y[ik]
local_max = 0*im_mask_2d
local_max[np.array(x,dtype=int),np.array(y,dtype=int)]=1
markers = ndi.label(local_max)[0]
markers[im_mask_2d==0] = -1
labeled_dapi = random_walker(im_mask_2d, markers)
labeled_dapi[labeled_dapi==-1]=0
im_temp = np.array(labeled_dapi)
xs,ys = np.where(im_temp>0)
delta = 1
bad = (xs<delta)|(xs>=im_temp.shape[0]-delta)|(ys<delta)|(ys>=im_temp.shape[1]-delta)
bad_inds = np.unique(im_temp[xs[bad],ys[bad]])
for ind in bad_inds:
im_temp[im_temp==ind]=0
#labeled_nuc, num_nuc = ndi.label(im)
inds = np.unique(im_temp)[1:]
im_ = np.array(im_temp)
ct = 0
for iind,ind in enumerate(inds):
kp = im_==ind
area = np.sum(kp)
if area>2000:
ct+=1
im_temp[kp]=ct
else:
im_temp[kp]=0
im = np.array([im])
imcells3d = np.array([im_temp]*len(im),dtype=np.uint16)
np.save(save_file,imcells3d)
imcells3d_lims = []
pad = 20
for celli in np.unique(im_temp)[1:]:
x,y = np.where(im_temp==celli)
xm,xM,ym,yM = np.min(x)-pad,np.max(x)+pad,np.min(y)-pad,np.max(y)+pad
zm,zM = 0,len(im)
if xM>im_temp.shape[0]:xM=im_temp.shape[0]-1
if yM>im_temp.shape[1]:yM=im_temp.shape[1]-1
if xm<0:xm=0
if ym<0:ym=0
imcells3d_lims.append([zm,zM,xm,xM,ym,yM])
imcells3d_lims = np.array(imcells3d_lims)
np.save(save_file.replace('__imcells2d.npy','__imcells2d_lims.npy'),imcells3d_lims)
save_3dSegmentation_tif(im[:,::2,::2],imcells3d,imcells3d_lims,save_file.replace('__imcells2d.npy','__2dshow.tif'),
min_=0,max_=8)
###Output
_____no_output_____
###Markdown
Run cell segmentation
###Code
dax_files = np.sort(dax_files)
for dax_file in tqdm(dax_files[:]):
save_file = analysis_folder+os.sep+os.path.basename(dax_file).replace('.dax','__imcells2d.npy')
segment_daxfl(dax_file,save_file)
#break
###Output
_____no_output_____
###Markdown
Perform manual correction to the automatic cell segmentation
###Code
import os,glob,sys
import numpy as np
import cPickle as pickle
from tqdm import tqdm_notebook as tqdm
import scipy.ndimage as ndi
from scipy.spatial.distance import cdist
import matplotlib.pylab as plt
import cv2
import IOTools as io
###Output
_____no_output_____
###Markdown
Make and write a png file with the automatic mask in one channel and the DAPI signal in another by concatenating across all fields of view
###Code
dapiSeg = r'masterAnalysisFolder\_CellSegm_Analysis'
seg_fls = np.sort(glob.glob(dapiSeg+os.sep+'*__imcells2d.npy'))
dapiFolder = r'dapi_folder'
im_comps = []
for seg_fl in tqdm(seg_fls):
dapi_fl = dapiFolder+os.sep+os.path.basename(seg_fl).replace('__imcells2d.npy','.dax')
im_mask = np.load(seg_fl)[0].T
nmax = np.max(im_mask)+1
im_edge = np.zeros_like(im_mask)
for iobj in range(1,nmax):
im_mask_ = (im_mask==iobj).astype(np.uint8)
kernel = np.ones([3,3],dtype=np.uint8)#cv2.getStructuringElement(cv2.MORPH_OPEN,(4,4))
im_erode = cv2.erode(im_mask_,kernel)
im_edge += im_mask_-im_erode
imf = (1-(im_edge>0))*(im_mask>0)
im = np.array(io.DaxReader(dapi_fl).loadMap()[5*45-1][::2,::2],dtype=np.float32)
im_dapi = im/np.max(im)
im_comp = np.dstack([imf*0,im_dapi,imf*0.5])
im_comps.append(im_comp)
#plt.figure()
#plt.imshow(im_comp)
#plt.show()
#break
cv2.imwrite(r'masterAnalysisFolder\_CellSegm_Analysis\all_masks.png',
(np.concatenate(im_comps,axis=0)*255).astype(np.uint8))
###Output
_____no_output_____
###Markdown
Refine the _allmask_ file in Photoshop to correct missegmentation and then load it in
###Code
im = cv2.imread(r'masterAnalysisFolder\_CellSegm_Analysis\all_masks.tif')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Restructure refined mask and save for each field of view
###Code
im_mask = im[:,:,-1].reshape([-1,1024,1024])
nlabtot=0
dapi_fls = np.sort(glob.glob(r'dapi_folder\*.dax'))
save_folder = r'masterAnalysisFolder\_CellSegm_Analysis\cnn_segmentation'
if not os.path.exists(save_folder):os.makedirs(save_folder)
for im_m,dax_file in tqdm(zip(im_mask,dapi_fls)):
#print("Loading:",dax_file)
save_file = save_folder+os.sep+os.path.basename(dax_file).replace('.dax','__imcells3d.npy')
im_dapi = io.DaxReader(dax_file).loadMap()
im_dapi = im_dapi[:-10][4::5][2:]
nz = len(im_dapi)
im_dapi = np.array(im_dapi[int(nz/2)])
im_ = np.array(im_m>100,dtype=np.uint8)
nlab,imlab,res,centers = cv2.connectedComponentsWithStats(im_)
im_temp = imlab.copy()
inds = np.unique(im_temp)[1:]
im_ = np.array(im_temp)
ct = 0
for iind,ind in enumerate(inds):
kp = im_==ind
area = np.sum(kp)
if area>100:
ct+=1
im_temp[kp]=ct
else:
im_temp[kp]=0
imlab = im_temp
imcells3d = np.array([imlab]*nz,dtype=np.uint16)
np.save(save_file,imcells3d)
imcells3d_lims = []
pad = 20
for celli in np.unique(imlab)[1:]:
x,y = np.where(imlab==celli)
xm,xM,ym,yM = np.min(x)-pad,np.max(x)+pad,np.min(y)-pad,np.max(y)+pad
zm,zM = 0,len(im)-1
if xM>imlab.shape[0]:xM=imlab.shape[0]-1
if yM>imlab.shape[1]:yM=imlab.shape[1]-1
if xm<0:xm=0
if ym<0:ym=0
imcells3d_lims.append([zm,zM,xm,xM,ym,yM])
imcells3d_lims = np.array(imcells3d_lims)
np.save(save_file.replace('__imcells3d.npy','__imcells3d_lims.npy'),imcells3d_lims)
fig = plt.figure(figsize=(10,10))
plt.contour(im_,[0.5],colors=['r'])
plt.imshow(im_dapi[::2,::2])
fig.savefig(save_file.replace('.npy','.png'))
plt.close()
#break
###Output
_____no_output_____
###Markdown
Fit the signal for each cell in each field of view. Also use the bead data to align.
###Code
#Turn on clusters
#Open terminal and run: ipcluster start -n 20
import ipyparallel as ipp
from ipyparallel import Client
rc = Client()
def f(index):
import sys
sys.path.append(r'path_to_current_code')
import workers_cells_v2 as wkc
reload(wkc)
try:
obj = wkc.cell_focus(index_fov=index,dataset_tag='tag_condition',
parent_dapiSeg=r'masterAnalysisFolder\_CellSegm_Analysis',
parent_fits_analysis=r'masterAnalysisFolder',
parent_save = r'saveFolder',
fl_750_647 = r'ChromaticAberation\dic_chr_150nm_IMR90_v2.pkl',#
fl_750_561 = r'ChromaticAberation\dic_chr_150nm_IMR90_v2.pkl',#
RNA_dapi_tag = 'dapitag',
overwrite=False)
if not obj.is_complete():
obj.z_cutoff=5
obj.pad=10
obj.master_folder=r'master_data_folder'
obj.apply_to_pfits()
obj.save()
success = True
except:
success = False
return success
res = rc[:].map_sync(f,range(70)) #70 indicates the number of field of views
###Output
_____no_output_____
###Markdown
Decoding Analysis
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load all the fitted data per cell across all cells
###Code
import glob,os,sys
import numpy as np
import workers_cells_v3 as wkc
import PostFitting as pf
reload(pf)
reload(wkc)
files = np.sort(glob.glob(r'saveFolder\cell_dics\*_cells.npy'))
cell_obj = wkc.cell_analysis(files[:])#,5,6,7,8
cell_obj.normalize_cell_dic()
###Output
_____no_output_____
###Markdown
Check drift errors across fields of view
###Code
### Check drift errors
fovs = np.sort(cell_obj.dic_noncells.keys())
for fov in fovs:
cell_obj.dic_noncell = cell_obj.dic_noncells[fov]
errors,drifts = [],[]
for i in range(100):
errors.append(cell_obj.dic_noncell['drift_errors']['R'+str(i+1)])
drifts.append(cell_obj.dic_noncell['dic_drift_final']['R'+str(i+1)][0][:,0])
errors,drifts =np.array(errors),np.array(drifts)
plt.figure()
plt.title(fov+'-Drift-error')
plt.plot(errors[:,0])
plt.plot(errors[:,1])
plt.plot(errors[:,2])
plt.ylim([0,1])
plt.figure()
plt.title(fov+'-Drift')
plt.plot(drifts[:,0])
plt.plot(drifts[:,1])
plt.plot(drifts[:,2])
plt.show()
###Output
_____no_output_____
###Markdown
Test decoding for a single cell
###Code
reload(wkc)
###Load a cell from the dataset i.e. cell 95
cell_obj.set_cell(95)
print cell_obj.cell
#decide whether to apply a different chromatic aberration correction
if False:
fl = r'ChromaticAberation\dic_chr_150nm_IMR90_v2.pkl'
dic_chr = np.load(fl)
ms_chr = [None,dic_chr['m_750_647'],dic_chr['m_750_561']]
cell_obj = wkc.update_color(cell_obj,ms_chr)
########## DNA ####################
#load the DNA code
cell_obj.load_DNA_code()
#get a list of candidates that are roughly in the correct ~200nm threshold
cell_obj.get_candidates(cutoff_candidate = [2.25,1.75,1.75],cap_invalid = 4000)
cell_obj.get_main_scores()
cell_obj.get_th_bleed(nkeep=2,plt_val=False)
cell_obj.get_chr_points()
cell_obj.get_chr_trees()
cell_obj.enhance_scores_1(nneigh=15)
cell_obj.get_th_bleed(nkeep=2,plt_val=False)
cell_obj.get_chr_points()
print cell_obj.bleed_score,cell_obj.coverage
#separate into homologs
#cell_obj.get_homolog_centers_single(ichr_ = 1,plt_val=True)
cell_obj.get_homolog_centers()
cell_obj.enhance_scores_2(ihom=0)
cell_obj.Distrib1,cell_obj.DsCC1 = cell_obj.DsCC_distr_valid[:],cell_obj.DsCC_valid
cell_obj.get_th_bleed(nkeep=1,plt_val=False)
cell_obj.th_score1,cell_obj.scores_valid1 = cell_obj.th_score,cell_obj.scores_valid[:]
cell_obj.enhance_scores_2(ihom=1,Distrib=cell_obj.Distrib1)
cell_obj.Distrib2,cell_obj.DsCC2 = cell_obj.DsCC_distr_valid[:],cell_obj.DsCC_valid
cell_obj.get_th_bleed(nkeep=1,plt_val=False)
cell_obj.th_score2,cell_obj.scores_valid2 = cell_obj.th_score,cell_obj.scores_valid[:]
cell_obj.get_chr_points_homologs()
def compare(pts1,pts2,nan=True):
good_num,all_num = 0,0
for pts1_chr,pts2_chr in zip(pts1,pts2):
pts1_chr,pts2_chr=np.array(pts1_chr),np.array(pts2_chr)
if nan:
pts1_chr[np.isnan(pts1_chr)]=np.inf
pts2_chr[np.isnan(pts2_chr)]=np.inf
good_num+=np.sum(np.all(pts1_chr==pts2_chr,axis=-1))
all_num+=len(pts1_chr)
return good_num/float(all_num)
print compare(cell_obj.points1+cell_obj.points2,cell_obj.points1+cell_obj.points2,nan=False)
for irep in range(2):
cell_obj.enhance_scores_2(ihom=0,Distrib=cell_obj.refined_Distr)
cell_obj.Distrib1,cell_obj.DsCC1 = cell_obj.DsCC_distr_valid[:],cell_obj.DsCC_valid
cell_obj.get_th_bleed(nkeep=1,plt_val=False)
cell_obj.th_score1,cell_obj.scores_valid1 = cell_obj.th_score,cell_obj.scores_valid[:]
cell_obj.enhance_scores_2(ihom=1,Distrib=cell_obj.refined_Distr)
cell_obj.Distrib2,cell_obj.DsCC2 = cell_obj.DsCC_distr_valid[:],cell_obj.DsCC_valid
cell_obj.get_th_bleed(nkeep=1,plt_val=False)
cell_obj.th_score2,cell_obj.scores_valid2 = cell_obj.th_score,cell_obj.scores_valid[:]
cell_obj.points1_prev,cell_obj.points2_prev = cell_obj.points1,cell_obj.points2
cell_obj.get_chr_points_homologs()
print compare(cell_obj.points1_prev,cell_obj.points1),compare(cell_obj.points2_prev,cell_obj.points2)
print compare(cell_obj.points1+cell_obj.points2,cell_obj.points1+cell_obj.points2,nan=False)
#plot a few positions
for chri in [0,1,2,3,5,20,21]:
fig = cell_obj.plot_chr(chri=chri,ax1=1,ax2=2)
#fig.savefig(save_base+'__chri'+str(chri)+'.png')
cell_obj.plot_cell_chromatic_difference(col_check = [1,0])
cell_obj.plot_cell_chromatic_difference(col_check = [0,1])
########## RNA ####################
cell_obj.clean_Qs(h_cutoff=4.,zxy_cutoff=1.5)
for key in cell_obj.dic_cell:
if 'Q' in key:
cell_obj.dic_cell[key] = cell_obj.dic_cell[key][:50]
cell_obj.load_RNA_code()
cell_obj.get_candidates(cutoff_candidate = [2.25,1.75,1.75],cap_invalid = 4000)
#rough RNA
dna_zxy,rna_zxy,ps_rna,dna_loc,rna_names,rna_iqs,fig = cell_obj.get_dna_rna_pts(drift=[0,0,0],cutoff_dist=15,plt_val=False)
#print(len(rna_zxy))
#fine RNA
drift = np.nanmedian(rna_zxy-dna_zxy,axis=0)
dna_zxy,rna_zxy,ps_rna,dna_loc,rna_names,rna_iqs,fig = cell_obj.get_dna_rna_pts(drift=drift,cutoff_dist=10.,plt_val=True)
print("Number of RNA:",len(rna_zxy))
#cell_obj.get_main_scores()
#cell_obj.get_th_bleed(nkeep=1,plt_val=True)
ps_pairs_final = [[[cell_obj.ps_pairs_valid_keep1[iv],cell_obj.ps_pairs_valid_keep2[iv]]
for iv in ichrs]
for ichrs in cell_obj.chr_ivs]
dic_save = {'dna_ps_pairs':ps_pairs_final,'dna_zxy':[cell_obj.points1,cell_obj.points2]}
dic_save.update({'dna_fromRNA_zxy':dna_zxy,'rna_zxy':rna_zxy,
'ps_rna':ps_rna,'dna_fromRNA_loc':dna_loc,'rna_names':rna_names,
'rna_dna_drift':drift,'cell':cell_obj.cell,'dna_refined_Distr':cell_obj.refined_Distr})
###Output
_____no_output_____
###Markdown
Run across all data
###Code
def analyze_cells(list_cells=None):
import glob,os,sys
import numpy as np
sys.path.append(r'analysis_code_path')
import workers_cells_v2 as wkc
import PostFitting as pf
reload(pf)
reload(wkc)
from tqdm import tqdm_notebook as tqdm
import matplotlib.pylab as plt
import cPickle as pickle
success=True
#Load all the data
files = np.sort(glob.glob(r'analysis_save_folder\cell_dics\*_cells.npy'))
cell_obj = wkc.cell_analysis(files[:])#,5,6,7,8
cell_obj.normalize_cell_dic()
master_folder = os.path.dirname(os.path.dirname(files[0]))
save_folder = master_folder+os.sep+'cell_decoding'
save_folder = r'new_save_folder'+os.sep+os.path.basename(master_folder)+os.sep+'cell_decoding_refitted2'
print save_folder
if not os.path.exists(save_folder): os.makedirs(save_folder)
overwrite = False
if list_cells is None: list_cells = range(len(cell_obj.cells))
#remap chromatic abberation
fl = r'ChromaticAberation\dic_chr_150nm_IMR90_v2.pkl'
dic_chr = np.load(fl)
ms_chr = [None,dic_chr['m_750_647'],dic_chr['m_750_561']]
for icell in tqdm(list_cells):
try:
#if True:
cell_obj.set_cell(icell)
save_base = save_folder+os.sep+cell_obj.cell
print save_base
if overwrite or not os.path.exists(save_base+'.pkl'):
#if True:
fid=open(save_base+'.new','w')
fid.close()
cell_obj = wkc.update_color(cell_obj,ms_chr)
cell_obj.load_DNA_code()
cell_obj.get_candidates(cutoff_candidate = [2.25,1.75,1.75],cap_invalid = 4000)
cell_obj.get_main_scores()
cell_obj.get_th_bleed(nkeep=2,plt_val=False)
cell_obj.get_chr_points()
cell_obj.get_chr_trees()
cell_obj.enhance_scores_1(nneigh=15)
cell_obj.get_th_bleed(nkeep=2,plt_val=False)
cell_obj.get_chr_points()
print cell_obj.bleed_score,cell_obj.coverage
#separate into homologs
#cell_obj.get_homolog_centers_single(ichr_ = 1,plt_val=True)
cell_obj.get_homolog_centers()
cell_obj.enhance_scores_2(ihom=0)
cell_obj.Distrib1,cell_obj.DsCC1 = cell_obj.DsCC_distr_valid[:],cell_obj.DsCC_valid
cell_obj.get_th_bleed(nkeep=1,plt_val=False)
cell_obj.th_score1,cell_obj.scores_valid1 = cell_obj.th_score,cell_obj.scores_valid[:]
cell_obj.enhance_scores_2(ihom=1,Distrib=cell_obj.Distrib1)
cell_obj.Distrib2,cell_obj.DsCC2 = cell_obj.DsCC_distr_valid[:],cell_obj.DsCC_valid
cell_obj.get_th_bleed(nkeep=1,plt_val=False)
cell_obj.th_score2,cell_obj.scores_valid2 = cell_obj.th_score,cell_obj.scores_valid[:]
cell_obj.get_chr_points_homologs()
def compare(pts1,pts2,nan=True):
good_num,all_num = 0,0
for pts1_chr,pts2_chr in zip(pts1,pts2):
pts1_chr,pts2_chr=np.array(pts1_chr),np.array(pts2_chr)
if nan:
pts1_chr[np.isnan(pts1_chr)]=np.inf
pts2_chr[np.isnan(pts2_chr)]=np.inf
good_num+=np.sum(np.all(pts1_chr==pts2_chr,axis=-1))
all_num+=len(pts1_chr)
return good_num/float(all_num)
print compare(cell_obj.points1+cell_obj.points2,cell_obj.points1+cell_obj.points2,nan=False)
for irep in range(2):
cell_obj.enhance_scores_2(ihom=0,Distrib=cell_obj.refined_Distr)
cell_obj.Distrib1,cell_obj.DsCC1 = cell_obj.DsCC_distr_valid[:],cell_obj.DsCC_valid
cell_obj.get_th_bleed(nkeep=1,plt_val=False)
cell_obj.th_score1,cell_obj.scores_valid1 = cell_obj.th_score,cell_obj.scores_valid[:]
cell_obj.enhance_scores_2(ihom=1,Distrib=cell_obj.refined_Distr)
cell_obj.Distrib2,cell_obj.DsCC2 = cell_obj.DsCC_distr_valid[:],cell_obj.DsCC_valid
cell_obj.get_th_bleed(nkeep=1,plt_val=False)
cell_obj.th_score2,cell_obj.scores_valid2 = cell_obj.th_score,cell_obj.scores_valid[:]
cell_obj.points1_prev,cell_obj.points2_prev = cell_obj.points1,cell_obj.points2
cell_obj.get_chr_points_homologs()
print compare(cell_obj.points1_prev,cell_obj.points1),compare(cell_obj.points2_prev,cell_obj.points2)
print compare(cell_obj.points1+cell_obj.points2,cell_obj.points1+cell_obj.points2,nan=False)
#plot a few positions
for chri in [0,1,2,3,5,20,21]:
fig = cell_obj.plot_chr(chri=chri,ax1=1,ax2=2)
fig.savefig(save_base+'__chri'+str(chri)+'.png')
cell_obj.plot_cell_chromatic_difference(col_check = [1,0])
cell_obj.plot_cell_chromatic_difference(col_check = [0,1])
########## RNA ####################
cell_obj.clean_Qs(ih=-7,h_cutoff=4.,zxy_cutoff=1.5)
for key in cell_obj.dic_cell:
if 'Q' in key:
cell_obj.dic_cell[key] = cell_obj.dic_cell[key][:50]
cell_obj.load_RNA_code()
cell_obj.get_candidates(cutoff_candidate = [1.75,1.75,1.75],cap_invalid = 4000)
#rough RNA
dna_zxy,rna_zxy,ps_rna,dna_loc,rna_names,rna_iqs,fig = cell_obj.get_dna_rna_pts(drift=[0,0,0],cutoff_dist=15,plt_val=False)
#print(len(rna_zxy))
#fine RNA
drift = np.nanmedian(rna_zxy-dna_zxy,axis=0)
dna_zxy,rna_zxy,ps_rna,dna_loc,rna_names,rna_iqs,fig = cell_obj.get_dna_rna_pts(drift=drift,cutoff_dist=10.,plt_val=True)
fig.savefig(save_base+'_RNA.png')
print("Number of RNA:",len(rna_zxy))
#cell_obj.get_main_scores()
#cell_obj.get_th_bleed(nkeep=1,plt_val=True)
ps_pairs_final = [[[cell_obj.ps_pairs_valid_keep1[iv],cell_obj.ps_pairs_valid_keep2[iv]]
for iv in ichrs]
for ichrs in cell_obj.chr_ivs]
dic_save = {'dna_ps_pairs':ps_pairs_final,'dna_zxy':[cell_obj.points1,cell_obj.points2]}
dic_save.update({'dna_fromRNA_zxy':dna_zxy,'rna_zxy':rna_zxy,
'ps_rna':ps_rna,'dna_fromRNA_loc':dna_loc,'rna_names':rna_names,
'rna_dna_drift':drift,'cell':cell_obj.cell,'dna_refined_Distr':cell_obj.refined_Distr})
plt.close('all')
pickle.dump(dic_save,open(save_base+'.pkl','wb'))
except:
print("Failed ",getattr(cell_obj,'cell','None'))
success=False
return success
#Turn on clusters
#Open terminal and run: ipcluster start -n 20
import ipyparallel as ipp
from ipyparallel import Client
rc = Client()
nrc = len(rc)
paramaters = range(2000)
paramaters=[paramaters[icl::nrc] for icl in range(nrc)]
res = rc[:].map_sync(analyze_cells,paramaters)
###Output
_____no_output_____
###Markdown
Chromatic aberration
###Code
def get_im(dax_fl,icols=[0],num_cols=3,sx=2048,sy=2048):
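    # Read a raw .dax movie as a uint16 stack of sx-by-sy frames (x/y swapped)
    # and return the frames of the requested colour channels, which are
    # interleaved every num_cols frames.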
im = np.fromfile(dax_fl,dtype=np.uint16).reshape([-1,sx,sy]).swapaxes(1,2)
return [im[icol::num_cols] for icol in icols]
def normalzie_im(im,sz=10):
im_ = np.array(im,dtype=np.float32)
im_blur = np.array([cv2.blur(im__,(sz,sz)) for im__ in im_])
im_ =im_/(im_blur)
return im_
def get_standard_fits(im,th_stds = 6,sz_blur=10,better_fit=False):
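    # Normalize the image by a local blur, keep voxels brighter than th_stds
    # standard deviations above the normalized background, retain only local
    # maxima within a 7x7x7 neighbourhood, and fit each remaining candidate
    # (ft.fast_fit_big_image) for sub-pixel localization.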
im_norm = normalzie_im(im,sz_blur)
hcutoff = 1+np.std(im_norm-1)*th_stds
#hcutoff=1.5
z,x,y = np.where(im_norm>hcutoff)
h_im = im_norm[z,x,y]
sz,sx,sy = im_norm.shape
keep = h_im>0
deltas = range(-3,4)
for deltax in deltas:
for deltay in deltas:
for deltaz in deltas:
keep &= (h_im>=im_norm[(z+deltaz)%sz,(x+deltax)%sx,(y+deltay)%sy])
zf,xf,yf = z[keep],x[keep],y[keep]
hf = im_norm[zf,xf,yf]
centers_zxy = np.array([zf,xf,yf]).T
pfits = ft.fast_fit_big_image(im.astype(np.float32),centers_zxy,radius_fit = 4,avoid_neigbors=True,
recenter=False,verbose = False,better_fit=better_fit,troubleshoot=False)
return pfits
def sort_pfits(pfits):
return pfits[np.argsort(pfits[:,0])[::-1],:]
folder_750 = r'folder_1'
dax_fls_750 = glob.glob(folder_750+os.sep+'*.dax')
folder_647 = r'folder_2_colorswapped'
dax_fls_647 = glob.glob(folder_647+os.sep+'*.dax')
zxys_750 = []
zxys_647 = []
for dax_fl1,dax_fl2 in tqdm(zip(dax_fls_750[1:],dax_fls_647[1:])):
#get image 1 and fit both image and beads 1
im1,im_beads1 = get_im(dax_fl1,icols = [0,2],num_cols=3)
#im1,im_beads1 = im1[4:-4],im_beads1[4:-4]
pfits1 = get_standard_fits(im1,th_stds = 10,sz_blur=10)
pfits_beads1 = get_standard_fits(im_beads1,th_stds = 6,sz_blur=10)
#get image 2 and fit both image and beads 2
im2,im_beads2 = get_im(dax_fl2,icols = [1,2],num_cols=3)
#im2,im_beads2 = im2[4:-4],im_beads2[4:-4]
pfits2 = get_standard_fits(im2,th_stds = 10,sz_blur=10)
pfits_beads2 = get_standard_fits(im_beads2,th_stds = 6,sz_blur=10)
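    # Coarsely align the two bead channels with a 3D FFT cross-correlation,
    # match bead fits across channels by nearest neighbour (within 5 px), and
    # take the median offset of the matched beads as the shift between the two
    # channels; this shift is then removed from the second channel's fits.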
tz,tx,ty = ft.fft3d_from2d(im_beads1,im_beads2)
pfits_beads2_ = pfits_beads2[:,1:4]+[tz,tx,ty]
pfits_beads1_ = pfits_beads1[:,1:4]
from scipy.spatial.distance import cdist
M = cdist(pfits_beads2_,pfits_beads1_)
iM = np.argmin(M,0)
jM = np.arange(len(iM))
keep = M[iM,jM]<5
tzxy = np.median(pfits_beads2[iM[keep],1:4]-pfits_beads1[jM[keep],1:4],axis=0)
pfits1_ = pfits1[:,1:4]
pfits2_ = pfits2[:,1:4]-tzxy
M = cdist(pfits2_,pfits1_)
iM = np.argmin(M,0)
jM = np.arange(len(iM))
keep = M[iM,jM]<5
zxy1,zxy2 = pfits1_[jM[keep]],pfits2_[iM[keep]]
zxys_750.append(zxy1)
zxys_647.append(zxy2)
zxys_750 = np.array(zxys_750)
zxys_647 = np.array(zxys_647)
plt.figure()
#keep = (zxys_750[:,2]>1024-100)&(zxys_750[:,2]<1024+100)
#plt.plot(zxys_750[keep,1],zxys_750[keep,0],'o')
#plt.plot(zxys_647[keep,1],zxys_647[keep,0],'o')
plt.plot(zxys_750[:,1],zxys_750[:,2],'o')
plt.plot(zxys_647[:,1],zxys_647[:,2],'o')
plt.show()
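# Fit affine colour-correction matrices relating the 750 nm channel to the
# 647 nm and 561 nm channels (z rescaled by 200/150 plus a small fixed z
# offset) and pickle them for later use.
# Note: dic_chr (the matched coordinate pairs) and dic_chr_ (the output
# dictionary) are assumed to be defined in an earlier cell.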
zxy_647,zxy_750 = dic_chr['zxy_647_750']*[200/150.,1,1]
dic_chr_['m_750_647'] = ft.calc_color_matrix(zxy_750,zxy_647+[0.5,0,0])
zxy_561,zxy_750 = dic_chr['zxy_561_750']*[200/150.,1,1]
dic_chr_['m_750_561'] = ft.calc_color_matrix(zxy_750,zxy_561+[0.75,0,0])
fl = r'ChromaticAberation\dic_chr_150nm.pkl'
pickle.dump(dic_chr_,open(fl,'wb'))
###Output
_____no_output_____ |
experiments/0xxx_notebooks/analysis-kf-vs-camaro.ipynb | ###Markdown
Ensemble (Valid)
###Code
valid_df = pd.concat([load_prediction(path) for path in tqdm(VALID_CSVs)], ignore_index=True)
valid_df = valid_df.drop_duplicates(["image_id", "model", "InChI"]).reset_index(drop=True)
shared_valid_ids = pd.read_csv("/work/input/kfujikawa/kf-bms-candidates/shared_valid_image_ids_kf_camaro.csv").image_id
common_valid_df = valid_df.query("image_id.isin(@shared_valid_ids)", engine="python")
display(valid_df.head(1))
with pd.option_context("display.float_format", '{:.4f}'.format):
display(valid_df.groupby("model").describe().T)
kf_valid_df = common_valid_df.query("user == 'kfujikawa'")
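# Average each (image_id, InChI) candidate's scores across KF's models, then
# rank candidates with the sort keys below and keep the top one per image.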
sort_keys = dict(
image_id=True,
is_valid=False,
normed_score=True,
)
kf_valid_ensembled_df = kf_valid_df.groupby(["image_id", "InChI"]).mean().reset_index()
kf_valid_ensembled_df = kf_valid_ensembled_df.sort_values(
by=list(sort_keys.keys()),
ascending=list(sort_keys.values()),
).groupby("image_id").first()
kf_valid_ensembled_df.levenshtein.mean()
camaro_valid_df = common_valid_df.query("user == 'camaro'")
sort_keys = dict(
image_id=True,
is_valid=False,
focal_score=True,
)
camaro_valid_ensembled_df = camaro_valid_df.groupby(["image_id", "InChI"]).mean().reset_index()
camaro_valid_ensembled_df = camaro_valid_ensembled_df.sort_values(
by=list(sort_keys.keys()),
ascending=list(sort_keys.values()),
).groupby("image_id").first()
camaro_valid_ensembled_df.levenshtein.mean()
###Output
_____no_output_____
###Markdown
Levenshtein distance between the predictions selected by KF and by camaro
###Code
merged_df = kf_valid_ensembled_df.merge(camaro_valid_ensembled_df, on="image_id")
np.mean([
Levenshtein.distance(x, y)
for x, y in merged_df[["InChI_x", "InChI_y"]].values
])
###Output
_____no_output_____
###Markdown
Ensemble (Test)
###Code
test_df = pd.concat([load_prediction(path) for path in tqdm(TEST_CSVs)], ignore_index=True)
display(test_df.head(1))
with pd.option_context("display.float_format", '{:.4f}'.format):
display(test_df.groupby("model").describe().T)
kf_test_df = test_df.query("user == 'kfujikawa'")
sort_keys = dict(
image_id=True,
is_valid=False,
normed_score=True,
)
kf_test_ensembled_df = kf_test_df.groupby(["image_id", "InChI"]).mean().reset_index()
kf_test_ensembled_df = kf_test_ensembled_df.sort_values(
by=list(sort_keys.keys()),
ascending=list(sort_keys.values()),
).groupby("image_id").first()
camaro_test_df = test_df.query("user == 'camaro'")
sort_keys = dict(
image_id=True,
is_valid=False,
focal_score=True,
)
camaro_test_ensembled_df = camaro_test_df.groupby(["image_id", "InChI"]).mean().reset_index()
camaro_test_ensembled_df = camaro_test_ensembled_df.sort_values(
by=list(sort_keys.keys()),
ascending=list(sort_keys.values()),
).groupby("image_id").first()
###Output
_____no_output_____
###Markdown
Levenshtein distance between the predictions selected by KF and by camaro
###Code
merged_df = kf_test_ensembled_df.merge(camaro_test_ensembled_df, on="image_id")
np.mean([
Levenshtein.distance(x, y)
for x, y in merged_df[["InChI_x", "InChI_y"]].values
])
test_ensembled_df.normed_score.hist(log=True)
valid_ensembled_df.normed_score.hist(log=True)
submission_df = test_ensembled_df[["image_id", "InChI"]]
assert len(submission_df) == 1616107
submission_df.to_csv("submission.csv", index=False)
!head submission.csv
!wc submission.csv
###Output
_____no_output_____ |
gated_linear_networks/colabs/dendritic_gated_network.ipynb | ###Markdown
Simple Dendritic Gated Networks in numpyThis colab implements a Dendritic Gated Network (DGN) solving a regression (using quadratic loss) or a binary classification problem (using Bernoulli log loss).See our paper titled ["A rapid and efficient learning rule for biological neural circuits"](https://www.biorxiv.org/content/10.1101/2021.03.10.434756v1) for details of the DGN model.Some implementation details:- We utilize `sklearn.datasets.load_breast_cancer` for binary classification and `sklearn.datasets.load_diabetes` for regression.- This code is meant for educational purposes only. It is not optimized for high-performance, both in terms of computational efficiency and quality of fit.- Network is trained on 80% of the dataset and tested on the rest. For classification, we report log loss (negative log likelihood) and accuracy (percentage of correctly identified labels). For regression, we report MSE expressed in units of target variance.
###Code
# Copyright 2021 DeepMind Technologies Limited. All rights reserved.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
from sklearn import datasets
from sklearn import preprocessing
from sklearn import model_selection
from typing import List, Optional
###Output
_____no_output_____
###Markdown
Choose classification or regression
###Code
do_classification = True # if False, does regression
###Output
_____no_output_____
###Markdown
Load dataset
###Code
if do_classification:
features, targets = datasets.load_breast_cancer(return_X_y=True)
else:
features, targets = datasets.load_diabetes(return_X_y=True)
x_train, x_test, y_train, y_test = model_selection.train_test_split(
features, targets, test_size=0.2, random_state=0)
n_features = x_train.shape[-1]
# Input features are centered and scaled to unit variance:
feature_encoder = preprocessing.StandardScaler()
x_train = feature_encoder.fit_transform(x_train)
x_test = feature_encoder.transform(x_test)
if not do_classification:
# Continuous targets are centered and scaled to unit variance:
target_encoder = preprocessing.StandardScaler()
y_train = np.squeeze(target_encoder.fit_transform(y_train[:, np.newaxis]))
y_test = np.squeeze(target_encoder.transform(y_test[:, np.newaxis]))
###Output
_____no_output_____
###Markdown
DGN inference/update
###Code
def step_square_loss(inputs: np.ndarray,
weights: List[np.ndarray],
hyperplanes: List[np.ndarray],
hyperplane_bias_magnitude: Optional[float] = 1.,
learning_rate: Optional[float] = 1e-5,
target: Optional[float] = None,
update: bool = False,
):
"""Implements a DGN inference/update using square loss."""
r_in = inputs
side_info = np.hstack([hyperplane_bias_magnitude, inputs])
for w, h in zip(weights, hyperplanes): # loop over layers
r_in = np.hstack([1., r_in]) # add biases
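    # Each dendritic branch's binary gate is determined by which side of its
    # fixed random hyperplane the side information falls on; the gated branch
    # weights are then combined into each neuron's effective weight vector.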
gate_values = np.heaviside(h.dot(side_info), 0).astype(bool)
effective_weights = gate_values.dot(w).sum(axis=1)
r_out = effective_weights.dot(r_in)
if update:
grad = (r_out[:, None] - target) * r_in[None]
w -= learning_rate * gate_values[:, :, None] * grad[:, None]
r_in = r_out
loss = (target - r_out)**2 / 2
return r_out, loss
def sigmoid(x): # numerically stable sigmoid
return np.exp(-np.logaddexp(0, -x))
def inverse_sigmoid(x):
return np.log(x/(1-x))
def step_bernoulli(inputs: np.ndarray,
weights: List[np.ndarray],
hyperplanes: List[np.ndarray],
hyperplane_bias_magnitude: Optional[float] = 1.,
learning_rate: Optional[float] = 1e-5,
epsilon: float = 0.01,
target: Optional[float] = None,
update: bool = False,
):
"""Implements a DGN inference/update using Bernoulli log loss."""
r_in = np.clip(sigmoid(inputs), epsilon, 1-epsilon)
side_info = np.hstack([hyperplane_bias_magnitude, inputs])
for w, h in zip(weights, hyperplanes): # loop over layers
r_in = np.hstack([sigmoid(1.), r_in]) # add biases
h_in = inverse_sigmoid(r_in)
gate_values = np.heaviside(h.dot(side_info), 0).astype(bool)
effective_weights = gate_values.dot(w).sum(axis=1)
h_out = effective_weights.dot(h_in)
r_out_unclipped = sigmoid(h_out)
r_out = np.clip(r_out_unclipped, epsilon, 1 - epsilon)
if update:
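      # Only branches whose gate is active receive an update, and examples the
      # network already predicts to within epsilon of the target are skipped
      # (update_indicator is False for them).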
update_indicator = np.abs(target - r_out_unclipped) > epsilon
grad = (r_out[:, None] - target) * h_in[None] * update_indicator[:, None]
w -= learning_rate * gate_values[:, :, None] * grad[:, None]
r_in = r_out
loss = - (target * np.log(r_out) + (1 - target) * np.log(1 - r_out))
return r_out, loss
def forward_pass(step_fn, x, y, weights, hyperplanes, learning_rate, update):
losses, outputs = np.zeros(len(y)), np.zeros(len(y))
for i, (x_i, y_i) in enumerate(zip(x, y)):
outputs[i], losses[i] = step_fn(x_i, weights, hyperplanes, target=y_i,
learning_rate=learning_rate, update=update)
return np.mean(losses), outputs
###Output
_____no_output_____
###Markdown
Define architecture
###Code
# number of neurons per layer, the last element must be 1
n_neurons = np.array([100, 10, 1])
n_branches = 20 # number of dendritic branches per neuron
###Output
_____no_output_____
###Markdown
Initialise weights and gating parameters
###Code
n_inputs = np.hstack([n_features + 1, n_neurons[:-1] + 1]) # 1 for the bias
dgn_weights = [np.zeros((n_neuron, n_branches, n_input))
for n_neuron, n_input in zip(n_neurons, n_inputs)]
# Fixing random seed for reproducibility:
np.random.seed(12345)
dgn_hyperplanes = [
np.random.normal(0, 1, size=(n_neuron, n_branches, n_features + 1))
for n_neuron in n_neurons]
# By default, the weight parameters are drawn from a normalised Gaussian:
dgn_hyperplanes = [
h_ / np.linalg.norm(h_[:, :, :-1], axis=(1, 2))[:, None, None]
for h_ in dgn_hyperplanes]
###Output
_____no_output_____
###Markdown
Train
###Code
if do_classification:
eta = 1e-4
n_epochs = 3
step = step_bernoulli
else:
eta = 1e-5
n_epochs = 10
step = step_square_loss
print('Training on {} problem for {} epochs with learning rate {}.'.format(
['regression', 'classification'][do_classification], n_epochs, eta))
print('This may take a minute. Please be patient...')
for epoch in range(0, n_epochs + 1):
train_loss, train_pred = forward_pass(
step, x_train, y_train, dgn_weights,
dgn_hyperplanes, eta, update=(epoch > 0))
test_loss, test_pred = forward_pass(
step, x_test, y_test, dgn_weights,
dgn_hyperplanes, eta, update=False)
to_print = 'epoch: {}, test loss: {:.3f} (train: {:.3f})'.format(
epoch, test_loss, train_loss)
if do_classification:
accuracy_train = np.mean(np.round(train_pred) == y_train)
accuracy = np.mean(np.round(test_pred) == y_test)
to_print += ', test accuracy: {:.3f} (train: {:.3f})'.format(
accuracy, accuracy_train)
print(to_print)
###Output
_____no_output_____
###Markdown
Simple Dendritic Gated Networks in numpyThis colab implements a Dendritic Gated Network (DGN) solving a regression (using square loss) or a binary classification problem (using Bernoulli log loss). See our paper titled "A rapid and efficient learning rule for biological neural circuits" for details of the DGN model.Some implementation details:- We utilize `sklearn.datasets.load_breast_cancer` for binary classification and `sklearn.datasets.load_diabetes` for regression.- This code is meant for educational purposes only. It is not optimized for high-performance, both in terms of computational efficiency and quality of fit. - Network is trained on 80% of the dataset and tested on the rest. Test MSE or log loss is reported at the end of each epoch.
###Code
# Copyright 2021 DeepMind Technologies Limited. All rights reserved.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
from sklearn import datasets
from sklearn import preprocessing
from sklearn import model_selection
from typing import List, Optional
###Output
_____no_output_____
###Markdown
Choose classification or regression
###Code
do_classification = True # if False, does regression
###Output
_____no_output_____
###Markdown
Load dataset
###Code
if do_classification:
features, targets = datasets.load_breast_cancer(return_X_y=True)
else:
features, targets = datasets.load_diabetes(return_X_y=True)
x_train, x_test, y_train, y_test = model_selection.train_test_split(
features, targets, test_size=0.2, random_state=0)
input_dim = x_train.shape[-1]
feature_encoder = preprocessing.StandardScaler()
x_train = feature_encoder.fit_transform(x_train)
x_test = feature_encoder.transform(x_test)
if not do_classification:
target_encoder = preprocessing.StandardScaler()
y_train = np.squeeze(target_encoder.fit_transform(y_train[:, np.newaxis]))
y_test = np.squeeze(target_encoder.transform(y_test[:, np.newaxis]))
###Output
_____no_output_____
###Markdown
DGN inference/update
###Code
def step_square_loss(inputs: np.ndarray,
weights: List[np.ndarray],
hyperplanes: List[np.ndarray],
hyperplane_bias_magnitude: Optional[float] = 1.,
learning_rate: Optional[float] = 1e-5,
target: Optional[float] = None,
update: bool = False,
):
"""Implements a DGN inference/update using square loss."""
r_in = inputs
side_info = np.hstack([hyperplane_bias_magnitude, inputs])
for w, h in zip(weights, hyperplanes): # loop over layers
r_in = np.hstack([1., r_in]) # add biases
gate_values = np.heaviside(h.dot(side_info), 0).astype(bool)
effective_weights = gate_values.dot(w).sum(axis=1)
r_out = effective_weights.dot(r_in)
if update:
grad = (r_out[:, None] - target) * r_in[None]
w -= learning_rate * gate_values[:, :, None] * grad[:, None]
r_in = r_out
r_out = r_out[0]
loss = (target - r_out)**2 / 2
return r_out, loss
def sigmoid(x): # numerically stable sigmoid
return np.exp(-np.logaddexp(0, -x))
def inverse_sigmoid(x):
return np.log(x/(1-x))
def step_bernoulli(inputs: np.ndarray,
weights: List[np.ndarray],
hyperplanes: List[np.ndarray],
hyperplane_bias_magnitude: Optional[float] = 1.,
learning_rate: Optional[float] = 1e-5,
epsilon: float = 0.01,
target: Optional[float] = None,
update: bool = False,
):
"""Implements a DGN inference/update using Bernoulli log loss."""
r_in = np.clip(sigmoid(inputs), epsilon, 1-epsilon)
side_info = np.hstack([hyperplane_bias_magnitude, inputs])
for w, h in zip(weights, hyperplanes): # loop over layers
r_in = np.hstack([sigmoid(1.), r_in]) # add biases
h_in = inverse_sigmoid(r_in)
gate_values = np.heaviside(h.dot(side_info), 0).astype(bool)
effective_weights = gate_values.dot(w).sum(axis=1)
h_out = effective_weights.dot(h_in)
r_out = np.clip(sigmoid(h_out), epsilon, 1 - epsilon)
if update:
update_indicator = np.logical_and(r_out < 1 - epsilon, r_out > epsilon)
grad = (r_out[:, None] - target) * h_in[None] * update_indicator[:, None]
w -= learning_rate * gate_values[:, :, None] * grad[:, None]
r_in = r_out
r_out = r_out[0]
loss = -(target * r_out + (1 - target) * (1 - r_out))
return r_out, loss
def forward_pass(step_fn, x, y, weights, hyperplanes, learning_rate, update):
losses, outputs = [], []
for x_i, y_i in zip(x, y):
y, l = step_fn(x_i, weights, hyperplanes, target=y_i,
learning_rate=learning_rate, update=update)
losses.append(l)
outputs.append(y)
return np.mean(losses), np.array(outputs)
###Output
_____no_output_____
###Markdown
Define architecture
###Code
# number of neurons per layer, the last element must be 1
num_neurons = np.array([100, 10, 1])
num_branches = 20 # number of dendritic branches per neuron
###Output
_____no_output_____
###Markdown
Initialise weights and gating parameters
###Code
num_inputs = np.hstack([input_dim + 1, num_neurons[:-1] + 1]) # 1 for the bias
weights_ = [np.zeros((num_neuron, num_branches, num_input))
for num_neuron, num_input in zip(num_neurons, num_inputs)]
hyperplanes_ = [np.random.normal(0, 1, size=(num_neuron, num_branches, input_dim + 1))
for num_neuron in num_neurons]
# By default, the weight parameters are drawn from a normalised Gaussian:
hyperplanes_ = [h_ / np.linalg.norm(h_[:, :, :-1], axis=(1, 2))[:, None, None]
for h_ in hyperplanes_]
###Output
_____no_output_____
###Markdown
Train
###Code
if do_classification:
n_epochs = 3
learning_rate_const = 1e-4
step = step_bernoulli
else:
n_epochs = 10
learning_rate_const = 1e-5
step = step_square_loss
for epoch in range(0, n_epochs):
train_loss, train_pred = forward_pass(
step, x_train, y_train, weights_,
hyperplanes_, learning_rate_const, update=True)
test_loss, test_pred = forward_pass(
step, x_test, y_test, weights_, hyperplanes_, learning_rate_const, update=False)
print('epoch: {:d}, test loss: {:.3f} (train_loss: {:.3f})'.format(
epoch, np.mean(test_loss), np.mean(train_loss)))
if do_classification:
accuracy = 1 - np.mean(np.logical_xor(np.round(test_pred), y_test))
print('test accuracy: {:.3f}'.format(accuracy))
###Output
_____no_output_____
Mathematics/Linear Algebra/0.0b Supplement of MIT Course - Concept Explanations with Problems_and_Questions to Solve/15-Singular-value-decomposition.ipynb | ###Markdown
Singular value decomposition  This work by Jephian Lin is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg as LA
###Output
_____no_output_____
###Markdown
Main idea Every $m\times n$ matrix $A$ can be written as $$ A = U\Sigma V^\top,$$where $U$ and $V$ are orthogonal matrices ($UU^\top = U^\top U = I$ and $VV^\top = V^\top V = I$) and $$\Sigma = \begin{bmatrix} \operatorname{diag}(\sigma_1,\ldots,\sigma_r) & O_{r,n-r} \\ O_{m-r, r} & O_{m-r,n-r} \end{bmatrix}.$$Here $r$ is the rank of $A$ and $$\sigma_1\geq\cdots\geq\sigma_r>0$$ are called the **singular values** of $A$. Equivalently, the action $$A{\bf x}\xleftarrow{A} {\bf x}$$ can be partitioned into three steps $$A{\bf x}\xleftarrow{U}[A{\bf x}]_\alpha\xleftarrow{\Sigma}[{\bf x}]_\beta\xleftarrow{U^\top}{\bf x}.$$ Here $\alpha$ is the columns of $U$ and $\beta$ is the columns of $V$. Side stories - image compression Experiments Exercise 1Let ```pythonA = np.ones((3,4))U,s,Vh = LA.svd(A)Sigma = np.zeros_like(A)Sigma[np.arange(3), np.arange(3)] = s``` 1(a)Check if $A = U\Sigma V^\top$. (Note that `Vh` is $V^\top$ but not $V$.)
###Code
### your answer here
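# One possible sketch (not part of the original notebook; assumes the arrays
# A, U, Sigma, Vh from the exercise setup above have been created):
print(np.allclose(A, U.dot(Sigma).dot(Vh)))  # True if A == U Sigma V^T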
###Output
_____no_output_____
###Markdown
1(b)Check if $UU^\top = U^\top U = I$ and $VV^\top = V^\top V = I$.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 2Let ```pythonU = np.array([[1,1,1], [-1,0,1], [0,-1,1]])V = np.array([[1,1,0,0], [-1,1,0,0], [0,0,1,1], [0,0,-1,1]])U = U / np.linalg.norm(U, axis=0)V = V / np.linalg.norm(V, axis=0)```Let $\alpha=\{{\bf u}_0,\ldots, {\bf u}_2\}$ be the columns of $U$ and $\beta=\{{\bf v}_0,\ldots, {\bf v}_3\}$ the columns of $V$. 2(a)Find a matrix such that $A{\bf v}_0 = 3{\bf u}_0$ $A{\bf v}_1 = 2{\bf u}_1$ $A{\bf v}_2 = 1{\bf u}_2$ $A{\bf v}_3 = {\bf 0}$.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
2(b)Find a matrix such that $A{\bf v}_0 = 1{\bf u}_0$ $A{\bf v}_1 = {\bf 0}$ $A{\bf v}_2 = {\bf 0}$ $A{\bf v}_3 = {\bf 0}$. Compare it with ${\bf u}_0{\bf v}_0^\top$.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercises Exercise 3Pick a random $3\times 4$ matrix $A$. Let $A = U\Sigma V^\top$ be its singular value decomposition. 3(a)Compare the following:1. the square of singular values of $A$.2. the eigenvalues of $AA^\top$. 3. the eigenvalues of $A^\top A$.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
3(b)Compare the following:1. the columns of $U$. 2. the eigenvectors of $AA^\top$.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
3(b)Compare the following:1. the columns of $V$. 2. the eigenvectors of $A^\top A$.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
3(c)Pick an eigenvector ${\bf v}$ of $A^\top A$ such that $A^\top A{\bf v} = \lambda{\bf v}$ with $\lambda\neq 0$. Check if $A{\bf v}$ is an eigenvector of $AA^\top$. Can you verify this property by algebra?
###Code
### your answer here
###Output
_____no_output_____
###Markdown
3(d)Pick any two eigenvectors ${\bf v}$ and ${\bf v}'$ of $A^\top A$ such that they are orthogonal to each other. Check if $A{\bf v}$ and $A{\bf v}'$ are orthogonal to each other. Can you verify this property by algebra?
###Code
### your answer here
###Output
_____no_output_____
###Markdown
RemarkHere are the steps for finding a singular value decomposition of an $m\times n$ matrix $A$ with rank $r$. 1. Find an orthonormal eigenbasis $\{{\bf v}_1,\ldots,{\bf v}_n\}$ of $A^\top A$ with respect to eigenvalues $\lambda_1\geq\cdots\geq\lambda_r>0=\lambda_{r+1}=\cdots\lambda_n$. 2. Find an orthonormal eigenbasis $\{{\bf u}_1,\ldots,{\bf u}_m\}$ of $AA^\top$ with respect to eigenvalues $\lambda_1\geq\cdots\geq\lambda_r>0=\lambda_{r+1}=\cdots\lambda_n$. 3. Let $V$ be the matrix whose columns are $\{{\bf v}_1,\ldots,{\bf v}_n\}$. 4. Let $U$ be the matrix whose columns are $\{A{\bf v}_1,\ldots,A{\bf v}_r,{\bf u}_{r+1},\ldots,{\bf u}_n\}$. 5. Let $\sigma_1 = \sqrt{\lambda_1},\ldots, \sigma_r = \sqrt{\lambda_r}$. Exercise 4Suppose $A$ is an $m\times n$ matrix of rank $r$ and $A = U\Sigma V^\top$ is its singular decomposition. Let ${\bf u}_0,\ldots,{\bf u}_{m-1}$ be the columns of $U$ and ${\bf v}_0, \ldots, {\bf v}_{n-1}$ the columns of $V$. Similar to the spectral decomposition, the singular value decomposition can also be written as $$A = \sum_{i=0}^{r-1} \sigma_i{\bf u}_i{\bf v}_i^\top.$$ Therefore, $\sum_{i=0}^{k-1} \sigma_i{\bf u}_i{\bf v}_i^\top$, where $k\leq r$, is an approximation of $A$. 4(a)Let ```pythonarr = plt.imread('incrediville-side.jpg').mean(axis=-1)arr.shape```Show the image `arr` by `plt.imshow` with proper `vmin`, `vmax`, and `cmap` .
###Code
### your answer here
###Output
_____no_output_____
###Markdown
4(b)Let ```pythonU,s,Vh = LA.svd(arr)```Let ${\bf u}_0,\ldots,{\bf u}_m$ the columns of $U$, ${\bf v}_0,\ldots,{\bf v}_n$ the columns of $V$ (`Vh`), $\sigma_0,\ldots,\sigma_m$ the values in `s` . Pick $k = 10$. Calculate `approx =` $\sum_{i=0}^{k-1} \sigma_i{\bf u}_i{\bf v}_i^\top$ and show the image. Adjust $k$ to get a better quality if you wish.
###Code
### your answer here
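# One possible sketch (not part of the original notebook; assumes U, s, Vh
# were obtained from LA.svd(arr) as in the prompt):
k = 10
approx = (U[:, :k] * s[:k]).dot(Vh[:k])  # sum_{i<k} sigma_i u_i v_i^T
plt.imshow(approx, vmin=0, vmax=255, cmap='gray')
plt.show()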
###Output
_____no_output_____
###Markdown
4(c)Run ```pythonplt.plot(s)```and pick a reasonable $k$.
###Code
### your answer here
###Output
_____no_output_____ |
notebooks/chapter_04/chapter-04.ipynb | ###Markdown
Baseline strategies
###Code
def pure_exploitation(env, n_episodes=1000):
# Array storing the Q function values for each action in the action space:
Q = np.zeros((env.action_space.n), dtype=np.float64)
# Array storing the counts of how many times each action in the action space is performed:
N = np.zeros((env.action_space.n), dtype=np.int)
# Array storing the Q function values for each action, at each iteration of the training:
Qe = np.empty((n_episodes, env.action_space.n), dtype=np.float64)
# Array storing the rewards received at each iteration of the training:
returns = np.empty(n_episodes, dtype=np.float64)
# Array storing the actions taken at each iteration of the training:
actions = np.empty(n_episodes, dtype=np.int)
name = 'Pure exploitation'
for e in tqdm(range(n_episodes),
desc='Episodes for: ' + name,
leave=False):
# This is the heart of the algorithm; at each step perform the
# action with the highest current Q function value:
action = np.argmax(Q)
_, reward, _, _ = env.step(action)
N[action] += 1
# Add the difference between the reward at this step and the current Q value
# for the action. Weight it by the number of times the action has been
# performed to produce the average.
Q[action] = Q[action] + (reward - Q[action])/N[action]
Qe[e] = Q
returns[e] = reward
actions[e] = action
return name, returns, Qe, actions
env = gym.make('BanditTwoArmedUniform-v0', seed=42); env.reset()
name, returns, Qe, actions = pure_exploitation(env, n_episodes=100)
def pure_exploration(env, n_episodes=1000):
Q = np.zeros((env.action_space.n), dtype=np.float64)
N = np.zeros((env.action_space.n), dtype=np.int)
Qe = np.empty((n_episodes, env.action_space.n), dtype=np.float64)
returns = np.empty(n_episodes, dtype=np.float64)
actions = np.empty(n_episodes, dtype=np.int)
name = 'Pure exploration'
for e in tqdm(range(n_episodes),
desc='Episodes for: ' + name,
leave=False):
# The difference in this algorithm from pure exploitation is that the
# actions are chosen uniformly at random, with no consideration for
# the current Q-value estimate for that action.
action = np.random.randint(len(Q))
_, reward, _, _ = env.step(action)
N[action] += 1
Q[action] = Q[action] + (reward - Q[action])/N[action]
Qe[e] = Q
returns[e] = reward
actions[e] = action
return name, returns, Qe, actions
env = gym.make('BanditTwoArmedUniform-v0', seed=42); env.reset()
name, returns, Qe, actions = pure_exploration(env, n_episodes=100)
###Output
_____no_output_____
###Markdown
Simple strategies
###Code
def epsilon_greedy(env, epsilon=0.01, n_episodes=1000):
Q = np.zeros((env.action_space.n), dtype=np.float64)
N = np.zeros((env.action_space.n), dtype=np.int)
Qe = np.empty((n_episodes, env.action_space.n), dtype=np.float64)
returns = np.empty(n_episodes, dtype=np.float64)
actions = np.empty(n_episodes, dtype=np.int)
name = 'Epsilon-Greedy {}'.format(epsilon)
for e in tqdm(range(n_episodes),
desc='Episodes for: ' + name,
leave=False):
# Alternates between the pure exploitation and exploration strategies,
# choosing the exploitation strategy 1-epsilon times on average.
if np.random.uniform() > epsilon:
action = np.argmax(Q)
else:
action = np.random.randint(len(Q))
_, reward, _, _ = env.step(action)
N[action] += 1
Q[action] = Q[action] + (reward - Q[action])/N[action]
Qe[e] = Q
returns[e] = reward
actions[e] = action
return name, returns, Qe, actions
env = gym.make('BanditTwoArmedUniform-v0', seed=42); env.reset()
name, returns, Qe, actions = epsilon_greedy(env, n_episodes=100)
def lin_dec_epsilon_greedy(env, init_epsilon=1.0, min_epsilon=0.01,
decay_ratio=0.05, n_episodes=1000):
Q = np.zeros((env.action_space.n), dtype=np.float64)
N = np.zeros((env.action_space.n), dtype=np.int)
Qe = np.empty((n_episodes, env.action_space.n), dtype=np.float64)
returns = np.empty(n_episodes, dtype=np.float64)
actions = np.empty(n_episodes, dtype=np.int)
name = 'Lin Epsilon-Greedy {}, {}, {}'.format(init_epsilon,
min_epsilon,
decay_ratio)
# Calculate the number of episodes in the loop in which epsilon
# will decay, before it settles at min_epsilon:
decay_episodes = n_episodes * decay_ratio
for e in tqdm(range(n_episodes),
desc='Episodes for: ' + name,
leave=False):
# Calculcate the current episode as a fraction of the total decay episodes:
epsilon = 1 - e / decay_episodes
# Multiply the fraction by the total range of values epsilon can take:
epsilon *= (init_epsilon - min_epsilon)
# Add minimum value to epsilon so that first value is 1:
epsilon += min_epsilon
# Clip epsilon to the minimum/maximum allowed values:
epsilon = np.clip(epsilon, min_epsilon, init_epsilon)
# Proceed as with epsilon_greedy strategy:
if np.random.uniform() > epsilon:
action = np.argmax(Q)
else:
action = np.random.randint(len(Q))
_, reward, _, _ = env.step(action)
N[action] += 1
Q[action] = Q[action] + (reward - Q[action])/N[action]
Qe[e] = Q
returns[e] = reward
actions[e] = action
return name, returns, Qe, actions
env = gym.make('BanditTwoArmedUniform-v0', seed=42); env.reset()
name, returns, Qe, actions = lin_dec_epsilon_greedy(env, n_episodes=100)
def exp_dec_epsilon_greedy(env,
init_epsilon=1.0,
min_epsilon=0.01,
decay_ratio=0.1,
n_episodes=1000):
Q = np.zeros((env.action_space.n), dtype=np.float64)
N = np.zeros((env.action_space.n), dtype=np.int)
Qe = np.empty((n_episodes, env.action_space.n), dtype=np.float64)
returns = np.empty(n_episodes, dtype=np.float64)
actions = np.empty(n_episodes, dtype=np.int)
# Calculate the number of episodes in which epsilon
# will decay, before it settles at min_epsilon:
decay_episodes = int(n_episodes * decay_ratio)
# The remaining episodes in which epsilon will be min_epsilon:
rem_episodes = n_episodes - decay_episodes
# Create a logarithmically spaced array of epsilon values
# for all decay episodes, ranging from 1 to 0.01:
epsilons = 0.01
epsilons /= np.logspace(-2, 0, decay_episodes)
# Multiply the logarithmic array by the total range of epsilon values:
epsilons *= init_epsilon - min_epsilon
# Add the min_epsilon value to the array so that the first value is 1:
epsilons += min_epsilon
# Pad the array with the min_epsilon value for all
# episodes after decay finishes:
epsilons = np.pad(epsilons, (0, rem_episodes), 'edge')
name = 'Exp Epsilon-Greedy {}, {}, {}'.format(init_epsilon,
min_epsilon,
decay_ratio)
for e in tqdm(range(n_episodes),
desc='Episodes for: ' + name,
leave=False):
# Proceed as with epsilon_greedy strategy, but get
# the epsilon from the epsilon array:
if np.random.uniform() > epsilons[e]:
action = np.argmax(Q)
else:
action = np.random.randint(len(Q))
_, reward, _, _ = env.step(action)
N[action] += 1
Q[action] = Q[action] + (reward - Q[action])/N[action]
Qe[e] = Q
returns[e] = reward
actions[e] = action
return name, returns, Qe, actions
env = gym.make('BanditTwoArmedUniform-v0', seed=42); env.reset()
name, returns, Qe, actions = exp_dec_epsilon_greedy(env, n_episodes=100)
def optimistic_initialization(env,
optimistic_estimate=1.0,
initial_count=100,
n_episodes=1000):
# Initialize the Q values with the optimistic_estimate
# (this requires knowing the maximum possible reward ahead
# of time, which isn't always realistic).
Q = np.full((env.action_space.n), optimistic_estimate, dtype=np.float64)
# Counts must be initialized with values to prevent
# division by zero errors:
N = np.full((env.action_space.n), initial_count, dtype=np.int)
Qe = np.empty((n_episodes, env.action_space.n), dtype=np.float64)
returns = np.empty(n_episodes, dtype=np.float64)
actions = np.empty(n_episodes, dtype=np.int)
name = 'Optimistic {}, {}'.format(optimistic_estimate,
initial_count)
for e in tqdm(range(n_episodes),
desc='Episodes for: ' + name,
leave=False):
action = np.argmax(Q)
_, reward, _, _ = env.step(action)
N[action] += 1
# Exactly the same as the pure expoitation strategy, but
# effect this time will be to reduce Q values for each action:
Q[action] = Q[action] + (reward - Q[action])/N[action]
Qe[e] = Q
returns[e] = reward
actions[e] = action
return name, returns, Qe, actions
env = gym.make('BanditTwoArmedUniform-v0', seed=42); env.reset()
name, returns, Qe, actions = optimistic_initialization(env, n_episodes=100)
###Output
_____no_output_____
###Markdown
Two-Armed Bandit environments
###Code
b2_Vs = []
for seed in SEEDS:
env_name = 'BanditTwoArmedUniform-v0'
env = gym.make(env_name, seed=seed) ; env.reset()
b2_Q = np.array(env.env.p_dist * env.env.r_dist)
print('Two-Armed Bandit environment with seed', seed)
print('Probability of reward:', env.env.p_dist)
print('Reward:', env.env.r_dist)
print('Q(.):', b2_Q)
b2_Vs.append(np.max(b2_Q))
print('V*:', b2_Vs[-1])
print()
print('Mean V* across all seeds:', np.mean(b2_Vs))
###Output
Two-Armed Bandit environment with seed 12
Probability of reward: [0.41630234 0.5545003 ]
Reward: [1 1]
Q(.): [0.41630234 0.5545003 ]
V*: 0.5545003042316209
Two-Armed Bandit environment with seed 34
Probability of reward: [0.88039337 0.56881791]
Reward: [1 1]
Q(.): [0.88039337 0.56881791]
V*: 0.8803933660102791
Two-Armed Bandit environment with seed 56
Probability of reward: [0.44859284 0.9499771 ]
Reward: [1 1]
Q(.): [0.44859284 0.9499771 ]
V*: 0.9499771030206514
Two-Armed Bandit environment with seed 78
Probability of reward: [0.53235706 0.84511988]
Reward: [1 1]
Q(.): [0.53235706 0.84511988]
V*: 0.8451198776828125
Two-Armed Bandit environment with seed 90
Probability of reward: [0.56461729 0.91744039]
Reward: [1 1]
Q(.): [0.56461729 0.91744039]
V*: 0.9174403942290458
Mean V* across all seeds: 0.8294862090348818
###Markdown
Running simple strategies on Two-Armed Bandit environments
###Code
def b2_run_simple_strategies_experiment(env_name='BanditTwoArmedUniform-v0'):
results = {}
experiments = [
# baseline strategies
lambda env: pure_exploitation(env),
lambda env: pure_exploration(env),
# epsilon greedy
lambda env: epsilon_greedy(env, epsilon=0.07),
lambda env: epsilon_greedy(env, epsilon=0.1),
# epsilon greedy linearly decaying
lambda env: lin_dec_epsilon_greedy(env,
init_epsilon=1.0,
min_epsilon=0.0,
decay_ratio=0.1),
lambda env: lin_dec_epsilon_greedy(env,
init_epsilon=0.3,
min_epsilon=0.001,
decay_ratio=0.1),
# epsilon greedy exponentially decaying
lambda env: exp_dec_epsilon_greedy(env,
init_epsilon=1.0,
min_epsilon=0.0,
decay_ratio=0.1),
lambda env: exp_dec_epsilon_greedy(env,
init_epsilon=0.3,
min_epsilon=0.0,
decay_ratio=0.3),
# optimistic
lambda env: optimistic_initialization(env,
optimistic_estimate=1.0,
initial_count=10),
lambda env: optimistic_initialization(env,
optimistic_estimate=1.0,
initial_count=50),
]
for env_seed in tqdm(SEEDS, desc='All experiments'):
env = gym.make(env_name, seed=env_seed) ; env.reset()
true_Q = np.array(env.env.p_dist * env.env.r_dist)
opt_V = np.max(true_Q)
for seed in tqdm(SEEDS, desc='All environments', leave=False):
for experiment in tqdm(experiments,
desc='Experiments with seed {}'.format(seed),
leave=False):
env.seed(seed) ; np.random.seed(seed) ; random.seed(seed)
name, Re, Qe, Ae = experiment(env)
Ae = np.expand_dims(Ae, -1)
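                # Per-episode regret is the gap between the optimal arm's true
                # expected reward and the true expected reward of the arm that
                # was actually selected; it is accumulated over episodes below.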
episode_mean_rew = np.cumsum(Re) / (np.arange(len(Re)) + 1)
Q_selected = np.take_along_axis(
np.tile(true_Q, Ae.shape), Ae, axis=1).squeeze()
regret = opt_V - Q_selected
cum_regret = np.cumsum(regret)
if name not in results.keys(): results[name] = {}
if 'Re' not in results[name].keys(): results[name]['Re'] = []
if 'Qe' not in results[name].keys(): results[name]['Qe'] = []
if 'Ae' not in results[name].keys(): results[name]['Ae'] = []
if 'cum_regret' not in results[name].keys():
results[name]['cum_regret'] = []
if 'episode_mean_rew' not in results[name].keys():
results[name]['episode_mean_rew'] = []
results[name]['Re'].append(Re)
results[name]['Qe'].append(Qe)
results[name]['Ae'].append(Ae)
results[name]['cum_regret'].append(cum_regret)
results[name]['episode_mean_rew'].append(episode_mean_rew)
return results
b2_results_s = b2_run_simple_strategies_experiment()
###Output
_____no_output_____
###Markdown
Plotting results of simple strategies on Two-Armed Bandit environments
###Code
fig, axs = plt.subplots(5, 1, figsize=(28, 28), sharey=False, sharex=False)
lines = ["-","--",":","-."]
linecycler = cycle(lines)
min_reg, max_ret = float('inf'), float('-inf')
for label, result in b2_results_s.items():
color = next(linecycler)
# reward
episode_mean_rew = np.array(result['episode_mean_rew'])
mean_episode_mean_rew = np.mean(episode_mean_rew, axis=0)
axs[0].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
axs[1].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
axs[1].set_xscale('log')
axs[2].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
if max_ret < mean_episode_mean_rew[-1]: max_ret = mean_episode_mean_rew[-1]
axs[2].axis((mean_episode_mean_rew.shape[0]*0.989,
mean_episode_mean_rew.shape[0],
max_ret-0.005,
max_ret+0.0001))
# regret
cum_regret = np.array(result['cum_regret'])
mean_cum_regret = np.mean(cum_regret, axis=0)
axs[3].plot(mean_cum_regret, color, linewidth=2, label=label)
axs[4].plot(mean_cum_regret, color, linewidth=2, label=label)
if min_reg > mean_cum_regret[-1]: min_reg = mean_cum_regret[-1]
plt.axis((mean_cum_regret.shape[0]*0.989,
mean_cum_regret.shape[0],
min_reg-0.5,
min_reg+5))
# config plot
axs[0].set_title('Mean Episode Reward')
axs[1].set_title('Mean Episode Reward (Log scale)')
axs[2].set_title('Mean Episode Reward (Zoom on best)')
axs[3].set_title('Total Regret')
axs[4].set_title('Total Regret (Zoom on best)')
plt.xlabel('Episodes')
axs[0].legend(loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Advanced strategies
###Code
def softmax(env,
init_temp=float('inf'),
min_temp=0.0,
decay_ratio=0.04,
n_episodes=1000):
Q = np.zeros((env.action_space.n), dtype=np.float64)
N = np.zeros((env.action_space.n), dtype=np.int)
Qe = np.empty((n_episodes, env.action_space.n), dtype=np.float64)
returns = np.empty(n_episodes, dtype=np.float64)
actions = np.empty(n_episodes, dtype=np.int)
name = 'Lin SoftMax {}, {}, {}'.format(init_temp,
min_temp,
decay_ratio)
# can't really use infinity
init_temp = min(init_temp,
sys.float_info.max)
# can't really use zero
min_temp = max(min_temp,
np.nextafter(np.float32(0),
np.float32(1)))
for e in tqdm(range(n_episodes),
desc='Episodes for: ' + name,
leave=False):
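        # Linearly decay the temperature from init_temp to min_temp over the
        # first decay_ratio fraction of episodes, turn Q/temp into a softmax
        # distribution over actions (subtracting the max for numerical
        # stability), and sample the action from that distribution.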
decay_episodes = n_episodes * decay_ratio
temp = 1 - e / decay_episodes
temp *= init_temp - min_temp
temp += min_temp
temp = np.clip(temp, min_temp, init_temp)
scaled_Q = Q / temp
norm_Q = scaled_Q - np.max(scaled_Q)
exp_Q = np.exp(norm_Q)
probs = exp_Q / np.sum(exp_Q)
assert np.isclose(probs.sum(), 1.0)
action = np.random.choice(np.arange(len(probs)),
size=1,
p=probs)[0]
_, reward, _, _ = env.step(action)
N[action] += 1
Q[action] = Q[action] + (reward - Q[action])/N[action]
Qe[e] = Q
returns[e] = reward
actions[e] = action
return name, returns, Qe, actions
def upper_confidence_bound(env,
c=2,
n_episodes=1000):
Q = np.zeros((env.action_space.n), dtype=np.float64)
N = np.zeros((env.action_space.n), dtype=np.int)
Qe = np.empty((n_episodes, env.action_space.n), dtype=np.float64)
returns = np.empty(n_episodes, dtype=np.float64)
actions = np.empty(n_episodes, dtype=np.int)
name = 'UCB {}'.format(c)
for e in tqdm(range(n_episodes),
desc='Episodes for: ' + name,
leave=False):
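        # Play each arm once first; afterwards add an exploration bonus
        # U = sqrt(c * ln(e) / N) that shrinks as an arm is selected more
        # often, and act greedily with respect to Q + U.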
action = e
if e >= len(Q):
U = np.sqrt(c * np.log(e)/N)
action = np.argmax(Q + U)
_, reward, _, _ = env.step(action)
N[action] += 1
Q[action] = Q[action] + (reward - Q[action])/N[action]
Qe[e] = Q
returns[e] = reward
actions[e] = action
return name, returns, Qe, actions
def thompson_sampling(env,
alpha=1,
beta=0,
n_episodes=1000):
Q = np.zeros((env.action_space.n), dtype=np.float64)
N = np.zeros((env.action_space.n), dtype=np.int)
Qe = np.empty((n_episodes, env.action_space.n), dtype=np.float64)
returns = np.empty(n_episodes, dtype=np.float64)
actions = np.empty(n_episodes, dtype=np.int)
name = 'Thompson Sampling {}, {}'.format(alpha, beta)
for e in tqdm(range(n_episodes),
desc='Episodes for: ' + name,
leave=False):
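        # Sample one value per arm from a Gaussian centred on its current Q
        # estimate, with spread alpha / (sqrt(N) + beta) that shrinks as the
        # arm is played more, and act greedily on the samples.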
samples = np.random.normal(
loc=Q, scale=alpha/(np.sqrt(N) + beta))
action = np.argmax(samples)
_, reward, _, _ = env.step(action)
N[action] += 1
Q[action] = Q[action] + (reward - Q[action])/N[action]
Qe[e] = Q
returns[e] = reward
actions[e] = action
return name, returns, Qe, actions
###Output
_____no_output_____
###Markdown
Running advanced strategies on Two-Armed Bandit environments
###Code
def b2_run_advanced_strategies_experiment(env_name='BanditTwoArmedUniform-v0'):
results = {}
experiments = [
# baseline strategies
lambda env: pure_exploitation(env),
lambda env: pure_exploration(env),
# best from simple strategies
lambda env: optimistic_initialization(env,
optimistic_estimate=1.0,
initial_count=10),
lambda env: exp_dec_epsilon_greedy(env,
init_epsilon=0.3,
min_epsilon=0.0,
decay_ratio=0.3),
# softmax
lambda env: softmax(env,
init_temp=float('inf'),
min_temp=0.0,
decay_ratio=0.005),
lambda env: softmax(env,
init_temp=100,
min_temp=0.01,
decay_ratio=0.01),
# ucb
lambda env: upper_confidence_bound(env, c=0.2),
lambda env: upper_confidence_bound(env, c=0.5),
# thompson sampling
lambda env: thompson_sampling(env, alpha=1, beta=1),
lambda env: thompson_sampling(env, alpha=0.5, beta=0.5),
]
for env_seed in tqdm(SEEDS, desc='All experiments'):
env = gym.make(env_name, seed=env_seed) ; env.reset()
true_Q = np.array(env.env.p_dist * env.env.r_dist)
opt_V = np.max(true_Q)
for seed in tqdm(SEEDS, desc='All environments', leave=False):
for experiment in tqdm(experiments,
desc='Experiments with seed {}'.format(seed),
leave=False):
env.seed(seed) ; np.random.seed(seed) ; random.seed(seed)
name, Re, Qe, Ae = experiment(env)
Ae = np.expand_dims(Ae, -1)
episode_mean_rew = np.cumsum(Re) / (np.arange(len(Re)) + 1)
Q_selected = np.take_along_axis(
np.tile(true_Q, Ae.shape), Ae, axis=1).squeeze()
regret = opt_V - Q_selected
cum_regret = np.cumsum(regret)
if name not in results.keys(): results[name] = {}
if 'Re' not in results[name].keys(): results[name]['Re'] = []
if 'Qe' not in results[name].keys(): results[name]['Qe'] = []
if 'Ae' not in results[name].keys(): results[name]['Ae'] = []
if 'cum_regret' not in results[name].keys():
results[name]['cum_regret'] = []
if 'episode_mean_rew' not in results[name].keys():
results[name]['episode_mean_rew'] = []
results[name]['Re'].append(Re)
results[name]['Qe'].append(Qe)
results[name]['Ae'].append(Ae)
results[name]['cum_regret'].append(cum_regret)
results[name]['episode_mean_rew'].append(episode_mean_rew)
return results
b2_results_a = b2_run_advanced_strategies_experiment()
###Output
_____no_output_____
###Markdown
Plotting results of advanced strategies on Two-Armed Bandit environments
###Code
fig, axs = plt.subplots(5, 1, figsize=(28, 28), sharey=False, sharex=False)
lines = ["-","--",":","-."]
linecycler = cycle(lines)
min_reg, max_ret = float('inf'), float('-inf')
for label, result in b2_results_a.items():
color = next(linecycler)
# reward
episode_mean_rew = np.array(result['episode_mean_rew'])
mean_episode_mean_rew = np.mean(episode_mean_rew, axis=0)
axs[0].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
axs[1].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
axs[1].set_xscale('log')
axs[2].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
if max_ret < mean_episode_mean_rew[-1]: max_ret = mean_episode_mean_rew[-1]
axs[2].axis((mean_episode_mean_rew.shape[0]*0.989,
mean_episode_mean_rew.shape[0],
max_ret-0.004,
max_ret+0.0001))
# regret
cum_regret = np.array(result['cum_regret'])
mean_cum_regret = np.mean(cum_regret, axis=0)
axs[3].plot(mean_cum_regret, color, linewidth=2, label=label)
axs[4].plot(mean_cum_regret, color, linewidth=2, label=label)
if min_reg > mean_cum_regret[-1]: min_reg = mean_cum_regret[-1]
plt.axis((mean_cum_regret.shape[0]*0.989,
mean_cum_regret.shape[0],
min_reg-1,
min_reg+4))
# config plot
axs[0].set_title('Mean Episode Reward')
axs[1].set_title('Mean Episode Reward (Log scale)')
axs[2].set_title('Mean Episode Reward (Zoom on best)')
axs[3].set_title('Total Regret')
axs[4].set_title('Total Regret (Zoom on best)')
plt.xlabel('Episodes')
axs[0].legend(loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
10-Armed Gaussian Bandit environments
###Code
b10_Vs = []
for seed in SEEDS:
env_name = 'BanditTenArmedGaussian-v0'
env = gym.make(env_name, seed=seed) ; env.reset()
r_dist = np.array(env.env.r_dist)[:,0]
b10_Q = np.array(env.env.p_dist * r_dist)
print('10-Armed Bandit environment with seed', seed)
print('Probability of reward:', env.env.p_dist)
print('Reward:', r_dist)
print('Q(.):', b10_Q)
b10_Vs.append(np.max(b10_Q))
print('V*:', b10_Vs[-1])
print()
print('Mean V* across all seeds:', np.mean(b10_Vs))
###Output
_____no_output_____
###Markdown
Running simple strategies on 10-Armed Bandit environments
###Code
def b10_run_simple_strategies_experiment(env_name='BanditTenArmedGaussian-v0'):
results = {}
experiments = [
# baseline strategies
lambda env: pure_exploitation(env),
lambda env: pure_exploration(env),
# epsilon greedy
lambda env: epsilon_greedy(env, epsilon=0.07),
lambda env: epsilon_greedy(env, epsilon=0.1),
# epsilon greedy linearly decaying
lambda env: lin_dec_epsilon_greedy(env,
init_epsilon=1.0,
min_epsilon=0.0,
decay_ratio=0.1),
lambda env: lin_dec_epsilon_greedy(env,
init_epsilon=0.3,
min_epsilon=0.001,
decay_ratio=0.1),
# epsilon greedy exponentially decaying
lambda env: exp_dec_epsilon_greedy(env,
init_epsilon=1.0,
min_epsilon=0.0,
decay_ratio=0.1),
lambda env: exp_dec_epsilon_greedy(env,
init_epsilon=0.3,
min_epsilon=0.0,
decay_ratio=0.3),
# optimistic
lambda env: optimistic_initialization(env,
optimistic_estimate=1.0,
initial_count=10),
lambda env: optimistic_initialization(env,
optimistic_estimate=1.0,
initial_count=50),
]
for env_seed in tqdm(SEEDS, desc='All experiments'):
env = gym.make(env_name, seed=env_seed) ; env.reset()
r_dist = np.array(env.env.r_dist)[:,0]
true_Q = np.array(env.env.p_dist * r_dist)
opt_V = np.max(true_Q)
for seed in tqdm(SEEDS, desc='All environments', leave=False):
for experiment in tqdm(experiments,
desc='Experiments with seed {}'.format(seed),
leave=False):
env.seed(seed) ; np.random.seed(seed) ; random.seed(seed)
name, Re, Qe, Ae = experiment(env)
Ae = np.expand_dims(Ae, -1)
episode_mean_rew = np.cumsum(Re) / (np.arange(len(Re)) + 1)
Q_selected = np.take_along_axis(
np.tile(true_Q, Ae.shape), Ae, axis=1).squeeze()
regret = opt_V - Q_selected
cum_regret = np.cumsum(regret)
if name not in results.keys(): results[name] = {}
if 'Re' not in results[name].keys(): results[name]['Re'] = []
if 'Qe' not in results[name].keys(): results[name]['Qe'] = []
if 'Ae' not in results[name].keys(): results[name]['Ae'] = []
if 'cum_regret' not in results[name].keys():
results[name]['cum_regret'] = []
if 'episode_mean_rew' not in results[name].keys():
results[name]['episode_mean_rew'] = []
results[name]['Re'].append(Re)
results[name]['Qe'].append(Qe)
results[name]['Ae'].append(Ae)
results[name]['cum_regret'].append(cum_regret)
results[name]['episode_mean_rew'].append(episode_mean_rew)
return results
b10_results_s = b10_run_simple_strategies_experiment()
###Output
_____no_output_____
###Markdown
Plotting results of simple strategies on 10-Armed Bandit environments
###Code
fig, axs = plt.subplots(5, 1, figsize=(28, 28), sharey=False, sharex=False)
lines = ["-","--",":","-."]
linecycler = cycle(lines)
min_reg, max_ret = float('inf'), float('-inf')
for label, result in b10_results_s.items():
color = next(linecycler)
# reward
episode_mean_rew = np.array(result['episode_mean_rew'])
mean_episode_mean_rew = np.mean(episode_mean_rew, axis=0)
axs[0].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
axs[1].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
axs[1].set_xscale('log')
axs[2].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
if max_ret < mean_episode_mean_rew[-1]: max_ret = mean_episode_mean_rew[-1]
axs[2].axis((mean_episode_mean_rew.shape[0]*0.989,
mean_episode_mean_rew.shape[0],
max_ret-0.06,
max_ret+0.005))
# regret
cum_regret = np.array(result['cum_regret'])
mean_cum_regret = np.mean(cum_regret, axis=0)
axs[3].plot(mean_cum_regret, color, linewidth=2, label=label)
axs[4].plot(mean_cum_regret, color, linewidth=2, label=label)
if min_reg > mean_cum_regret[-1]: min_reg = mean_cum_regret[-1]
plt.axis((mean_cum_regret.shape[0]*0.989,
mean_cum_regret.shape[0],
min_reg-5,
min_reg+45))
# config plot
axs[0].set_title('Mean Episode Reward')
axs[1].set_title('Mean Episode Reward (Log scale)')
axs[2].set_title('Mean Episode Reward (Zoom on best)')
axs[3].set_title('Total Regret')
axs[4].set_title('Total Regret (Zoom on best)')
plt.xlabel('Episodes')
axs[0].legend(loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Running advanced strategies on 10-Armed Bandit environments
###Code
def b10_run_advanced_strategies_experiment(env_name='BanditTenArmedGaussian-v0'):
results = {}
experiments = [
# baseline strategies
lambda env: pure_exploitation(env),
lambda env: pure_exploration(env),
# best from simple strategies
lambda env: lin_dec_epsilon_greedy(env,
init_epsilon=1.0,
min_epsilon=0.0,
decay_ratio=0.1),
lambda env: exp_dec_epsilon_greedy(env,
init_epsilon=1.0,
min_epsilon=0.0,
decay_ratio=0.1),
# softmax
lambda env: softmax(env,
init_temp=float('inf'),
min_temp=0.0,
decay_ratio=0.005),
lambda env: softmax(env,
init_temp=100,
min_temp=0.01,
decay_ratio=0.01),
# ucb
lambda env: upper_confidence_bound(env, c=0.2),
lambda env: upper_confidence_bound(env, c=0.5),
# thompson sampling
lambda env: thompson_sampling(env, alpha=1, beta=1),
lambda env: thompson_sampling(env, alpha=0.5, beta=0.5),
]
for env_seed in tqdm(SEEDS, desc='All experiments'):
env = gym.make(env_name, seed=env_seed) ; env.reset()
r_dist = np.array(env.env.r_dist)[:,0]
true_Q = np.array(env.env.p_dist * r_dist)
opt_V = np.max(true_Q)
for seed in tqdm(SEEDS, desc='All environments', leave=False):
for experiment in tqdm(experiments,
desc='Experiments with seed {}'.format(seed),
leave=False):
env.seed(seed) ; np.random.seed(seed) ; random.seed(seed)
name, Re, Qe, Ae = experiment(env)
Ae = np.expand_dims(Ae, -1)
episode_mean_rew = np.cumsum(Re) / (np.arange(len(Re)) + 1)
Q_selected = np.take_along_axis(
np.tile(true_Q, Ae.shape), Ae, axis=1).squeeze()
regret = opt_V - Q_selected
cum_regret = np.cumsum(regret)
if name not in results.keys(): results[name] = {}
if 'Re' not in results[name].keys(): results[name]['Re'] = []
if 'Qe' not in results[name].keys(): results[name]['Qe'] = []
if 'Ae' not in results[name].keys(): results[name]['Ae'] = []
if 'cum_regret' not in results[name].keys():
results[name]['cum_regret'] = []
if 'episode_mean_rew' not in results[name].keys():
results[name]['episode_mean_rew'] = []
results[name]['Re'].append(Re)
results[name]['Qe'].append(Qe)
results[name]['Ae'].append(Ae)
results[name]['cum_regret'].append(cum_regret)
results[name]['episode_mean_rew'].append(episode_mean_rew)
return results
b10_results_a = b10_run_advanced_strategies_experiment()
###Output
_____no_output_____
###Markdown
Plotting results of advanced strategies on 10-Armed Bandit environments
###Code
fig, axs = plt.subplots(5, 1, figsize=(28, 28), sharey=False, sharex=False)
lines = ["-","--",":","-."]
linecycler = cycle(lines)
min_reg, max_ret = float('inf'), float('-inf')
for label, result in b10_results_a.items():
color = next(linecycler)
# reward
episode_mean_rew = np.array(result['episode_mean_rew'])
mean_episode_mean_rew = np.mean(episode_mean_rew, axis=0)
axs[0].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
axs[1].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
axs[1].set_xscale('log')
axs[2].plot(mean_episode_mean_rew, color, linewidth=2, label=label)
if max_ret < mean_episode_mean_rew[-1]: max_ret = mean_episode_mean_rew[-1]
axs[2].axis((mean_episode_mean_rew.shape[0]*0.989,
mean_episode_mean_rew.shape[0],
max_ret-0.01,
max_ret+0.005))
# regret
cum_regret = np.array(result['cum_regret'])
mean_cum_regret = np.mean(cum_regret, axis=0)
axs[3].plot(mean_cum_regret, color, linewidth=2, label=label)
axs[4].plot(mean_cum_regret, color, linewidth=2, label=label)
if min_reg > mean_cum_regret[-1]: min_reg = mean_cum_regret[-1]
plt.axis((mean_cum_regret.shape[0]*0.989,
mean_cum_regret.shape[0],
min_reg-5,
min_reg+12))
# config plot
axs[0].set_title('Mean Episode Reward')
axs[1].set_title('Mean Episode Reward (Log scale)')
axs[2].set_title('Mean Episode Reward (Zoom on best)')
axs[3].set_title('Total Regret')
axs[4].set_title('Total Regret (Zoom on best)')
plt.xlabel('Episodes')
axs[0].legend(loc='upper left')
plt.show()
###Output
_____no_output_____ |
cognitive class - deep learning tensorflow/ML0120EN-1.1-Exercise-TensorFlowHelloWorld.ipynb | ###Markdown
 "Hello World" in TensorFlow - Exercise Notebook Before everything, let's import the TensorFlow library
###Code
%matplotlib inline
import tensorflow as tf
###Output
_____no_output_____
###Markdown
First, try to add the two constants and print the result.
###Code
a = tf.constant([5])
b = tf.constant([2])
###Output
_____no_output_____
###Markdown
Create another TensorFlow object applying the sum (+) operation:
###Code
#Your code goes here
###Output
_____no_output_____
###Markdown
Click here for the solution 1Click here for the solution 2```c=a+b``````c=tf.add(a,b)```
###Code
with tf.Session() as session:
result = session.run(c)
print "The addition of this two constants is: {0}".format(result)
###Output
_____no_output_____
###Markdown
--- Now let's try to multiply them.
###Code
# Your code goes here. Use the multiplication operator.
###Output
_____no_output_____
###Markdown
Click here for the solution 1Click here for the solution 2```c=a*b``````c=tf.multiply(a,b)```
###Code
with tf.Session() as session:
result = session.run(c)
print "The Multiplication of this two constants is: {0}".format(result)
###Output
_____no_output_____
###Markdown
Multiplication: element-wise or matrix multiplicationLet's practice the different ways to multiply matrices:- **Element-wise** multiplication in the **first operation** ;- **Matrix multiplication** on the **second operation** ;
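As a quick hand check for the matrices used in the next cell (both equal to [[2, 3], [3, 4]]): element-wise multiplication gives [[4, 9], [9, 16]], while matrix multiplication gives [[13, 18], [18, 25]].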
###Code
matrixA = tf.constant([[2,3],[3,4]])
matrixB = tf.constant([[2,3],[3,4]])
# Your code goes here
###Output
_____no_output_____
###Markdown
Click here for the solution```first_operation=tf.multiply(matrixA, matrixB)second_operation=tf.matmul(matrixA,matrixB)```
###Code
with tf.Session() as session:
result = session.run(first_operation)
print "Element-wise multiplication: \n", result
result = session.run(second_operation)
print "Matrix Multiplication: \n", result
###Output
_____no_output_____
###Markdown
--- Modify the value of variable b to the value in constant a:
###Code
a=tf.constant(1000)
b=tf.Variable(0)
init_op = tf.global_variables_initializer()
# Your code goes here
###Output
_____no_output_____
###Markdown
Click here for the solution```a=tf.constant(1000)b=tf.Variable(0)init_op = tf.global_variables_initializer()update = tf.assign(b,a)with tf.Session() as session: session.run(init_op) session.run(update) print(session.run(b))``` --- Fibonacci sequenceNow try to do something more advanced. Try to create a __Fibonacci sequence__ and print the first few values using TensorFlow:If you don't know, the Fibonacci sequence is defined by the equation: $$F_{n} = F_{n-1} + F_{n-2}$$Resulting in a sequence like: 1,1,2,3,5,8,13,21...
###Code
###Output
_____no_output_____
###Markdown
Click here for the solution 1Click here for the solution 2```a=tf.Variable(0)b=tf.Variable(1)temp=tf.Variable(0)c=a+bupdate1=tf.assign(temp,c)update2=tf.assign(a,b)update3=tf.assign(b,temp)init_op = tf.initialize_all_variables()with tf.Session() as s: s.run(init_op) for _ in range(15): print(s.run(a)) s.run(update1) s.run(update2) s.run(update3)``````f = [tf.constant(1),tf.constant(1)]for i in range(2,10): temp = f[i-1] + f[i-2] f.append(temp)with tf.Session() as sess: result = sess.run(f) print(result)``` --- Now try to create your own placeholders and define any kind of operation between them:
###Code
# Your code goes here
###Output
_____no_output_____
###Markdown
Click here for the solution```a=tf.placeholder(tf.float32)b=tf.placeholder(tf.float32)c=2*a -bdictionary = {a:[2,2],b:[3,4]}with tf.Session() as session: print(session.run(c,feed_dict=dictionary))``` Try changing our example with some other operations and see the result.Some examples of functions: tf.multiply(x, y), tf.div(x, y), tf.square(x), tf.sqrt(x), tf.pow(x, y), tf.exp(x), tf.log(x), tf.cos(x), tf.sin(x). You can also take a look at [more operations]( https://www.tensorflow.org/versions/r0.9/api_docs/python/math_ops.html)
###Code
a = tf.constant(5.)
b = tf.constant(2.)
###Output
_____no_output_____
###Markdown
Create a variable named **`c`** to receive the result of an operation (of your choice):
###Code
#your code goes here
###Output
_____no_output_____
###Markdown
Click here for the solution```c=tf.sin(a)```
###Code
with tf.Session() as session:
result = session.run(c)
print "c =: {}".format(result)
###Output
_____no_output_____ |
phasepy-examples/13. Fitting Interaction Parameters for Mixtures.ipynb | ###Markdown
Fitting interaction parameters for mixtures---Let's call $\underline{\xi}$ the optimization parameters of a mixture. In order to optimize them, you need to provide experimental phase equilibria data. This can include VLE, LLE and VLLE data. The objective functions used for each equilibrium type are shown below: Vapor-Liquid Equilibria Data$$ OF_{VLE}(\underline{\xi}) = w_y \sum_{j=1}^{Np} \left[ \sum_{i=1}^c (y_{i,j}^{cal} - y_{i,j}^{exp})^2 \right] + w_P \sum_{j=1}^{Np} \left[ \frac{P_{j}^{cal} - P_{j}^{exp}}{P_{j}^{exp}} \right]^2$$Where $Np$ is the number of experimental data points, $y_i$ is the vapor molar fraction of component $i$ and $P$ is the bubble pressure. The superscripts $cal$ and $exp$ refer to the computed and experimental values, respectively. Finally, $w_y$ is the weight for the vapor composition error and $w_P$ is the weight for the bubble pressure error. Liquid-Liquid Equilibria Data$$ OF_{LLE}(\underline{\xi}) = w_x \sum_{j=1}^{Np} \sum_{i=1}^c \left[x_{i,j}^{cal} - x_{i,j}^{exp}\right]^2 + w_w \sum_{j=1}^{Np} \sum_{i=1}^c \left[ w_{i,j}^{cal} - w_{i,j}^{exp} \right]^2 $$Where $Np$ is the number of experimental data points, $x_i$ and $w_i$ are the molar fractions of component $i$ in the two liquid phases. The superscripts $cal$ and $exp$ refer to the computed and experimental values, respectively. Finally, $w_x$ and $w_w$ are the weights for the liquid 1 ($x$) and liquid 2 ($w$) composition errors. Vapor-Liquid-Liquid Equilibria Data$$ OF_{VLLE}(\underline{\xi}) = w_x \sum_{j=1}^{Np} \sum_{i=1}^c \left[x_{i,j}^{cal} - x_{i,j}^{exp}\right]^2 + w_w \sum_{j=1}^{Np} \sum_{i=1}^c \left[w_{i,j}^{cal} - w_{i,j}^{exp}\right]^2 + w_y \sum_{j=1}^{Np} \sum_{i=1}^c \left[y_{i,j}^{cal} - y_{i,j}^{exp}\right]^2 + w_P \sum_{j=1}^{Np} \left[ \frac{P_{j}^{cal}}{P_{j}^{exp}} - 1\right]^2 $$Where $Np$ is the number of experimental data points, $y_i$, $x_i$ and $w_i$ are the molar fractions of component $i$ in the vapor and liquid phases, respectively. The superscripts $cal$ and $exp$ refer to the computed and experimental values, respectively. Finally, $w_x$ and $w_w$ are the weights for the liquid 1 ($x$) and liquid 2 ($w$) composition errors, $w_y$ is the weight for the vapor composition error and $w_P$ is the weight for the three-phase equilibrium pressure error.If there is data for more than one equilibrium type, the errors can be added accordingly. So the objective function becomes:$$ OF(\underline{\xi}) = OF_{VLE}(\underline{\xi}) + OF_{LLE}(\underline{\xi}) + OF_{VLLE}(\underline{\xi})$$---This notebook has the purpose of showing examples of how to fit interaction parameters for binary mixtures using experimental equilibria data.
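To make the VLE objective concrete, here is a minimal NumPy sketch of $OF_{VLE}$ written directly from the formula above (an illustration only, not phasepy's internal implementation):
```python
import numpy as np

def of_vle(Y_cal, Y_exp, P_cal, P_exp, wy=1.0, wp=1.0):
    # Y arrays have shape (components, points); P arrays have shape (points,)
    return wy * np.sum((Y_cal - Y_exp)**2) + wp * np.sum(((P_cal - P_exp) / P_exp)**2)
```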
###Code
import numpy as np
from phasepy import component, mixture, prsveos
###Output
_____no_output_____
###Markdown
Now that the functions are available, it is necessary to create the mixture.
###Code
water = component(name = 'Water', Tc = 647.13, Pc = 220.55, Zc = 0.229, Vc = 55.948, w = 0.344861,
ksv = [ 0.87292043, -0.06844994],
Ant = [ 11.72091059, 3852.20302815, -44.10441047],
cii = [ 1.16776082e-25, -4.76738739e-23, 1.79640647e-20],
GC = {'H2O':1},
ri=0.92, qi=1.4)
ethanol = component(name = 'Ethanol', Tc = 514.0, Pc = 61.37, Zc = 0.241, Vc = 168.0, w = 0.643558,
ksv = [1.27092923, 0.0440421 ],
Ant = [ 12.26474221, 3851.89284329, -36.99114863],
cii = [ 2.35206942e-24, -1.32498074e-21, 2.31193555e-19],
GC = {'CH3':1, 'CH2':1, 'OH(P)':1},
ri=2.1055, qi=1.972)
mix = mixture(ethanol, water)
###Output
_____no_output_____
###Markdown
Now the experimental equilibria data is read and a tuple is created. It includes the experimental liquid composition, vapor composition, equilibrium temperature and pressure. This is done with ```datavle = (Xexp, Yexp, Texp, Pexp)```. If the mixture exhibits other equilibrium types you can supply that experimental data through the ``datalle`` or ``datavlle`` parameters.- ``datalle``: (Xexp, Wexp, Texp, Pexp)- ``datavlle``: (Xexp, Wexp, Yexp, Texp, Pexp)You can specify the weights for each objective function through the following parameters:- ``weights_vle``: list or array_like, weights for the VLE objective function. - weights_vle[0] = weight for Y composition error, defaults to 1. - weights_vle[1] = weight for bubble pressure error, defaults to 1.- ``weights_lle``: list or array_like, weights for the LLE objective function. - weights_lle[0] = weight for X (liquid 1) composition error, defaults to 1. - weights_lle[1] = weight for W (liquid 2) composition error, defaults to 1.- ``weights_vlle``: list or array_like, weights for the VLLE objective function. - weights_vlle[0] = weight for X (liquid 1) composition error, defaults to 1. - weights_vlle[1] = weight for W (liquid 2) composition error, defaults to 1. - weights_vlle[2] = weight for Y (vapor) composition error, defaults to 1. - weights_vlle[3] = weight for equilibrium pressure error, defaults to 1.Additionally, you can pass options to SciPy's ``minimize`` function using the ``minimize_options`` parameter.
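For instance, a fit call that changes the default weights and solver options might look like the sketch below (it reuses the keyword names listed above and the `nrtl0`/`mixnrtl` objects defined later in this notebook; the values are arbitrary):
```python
fit_nrtl(nrtl0, mixnrtl, datavle,
         weights_vle=[1., 2.],               # penalize bubble-pressure error twice as much
         minimize_options={'maxiter': 200})  # forwarded to SciPy's minimize
```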
###Code
#Vapor-Liquid equilibria data obtained from Rieder, Robert M. and A. Ralph Thompson (1949).
# «Vapor-Liquid Equilibria Measured by a Gillespie Still - Ethyl Alcohol - Water System».
#Ind. Eng. Chem. 41.12, 2905-2908.
#Saturation Pressure in bar
Pexp = np.array([1.013, 1.013, 1.013, 1.013, 1.013, 1.013, 1.013, 1.013, 1.013,
1.013, 1.013, 1.013, 1.013, 1.013, 1.013, 1.013, 1.013, 1.013,
1.013, 1.013, 1.013, 1.013, 1.013, 1.013, 1.013, 1.013, 1.013,
1.013, 1.013, 1.013, 1.013, 1.013, 1.013, 1.013])
#Saturation temperature in Kelvin
Texp = np.array([372.45, 370.05, 369.15, 369.15, 368.75, 367.95, 366.95, 366.65,
366.05, 363.65, 363.65, 362.55, 361.55, 361.75, 360.35, 358.55,
357.65, 357.15, 356.55, 356.15, 355.45, 355.15, 354.55, 354.65,
354.35, 354.05, 353.65, 353.35, 353.15, 352.65, 351.95, 351.65,
351.55, 351.45])
#Liquid mole fraction array
Xexp = np.array([[0.0028, 0.0118, 0.0137, 0.0144, 0.0176, 0.0222, 0.0246, 0.0302,
0.0331, 0.0519, 0.053 , 0.0625, 0.0673, 0.0715, 0.0871, 0.126 ,
0.143 , 0.172 , 0.206 , 0.21 , 0.255 , 0.284 , 0.321 , 0.324 ,
0.345 , 0.405 , 0.43 , 0.449 , 0.506 , 0.545 , 0.663 , 0.735 ,
0.804 , 0.917 ],
[0.9972, 0.9882, 0.9863, 0.9856, 0.9824, 0.9778, 0.9754, 0.9698,
0.9669, 0.9481, 0.947 , 0.9375, 0.9327, 0.9285, 0.9129, 0.874 ,
0.857 , 0.828 , 0.794 , 0.79 , 0.745 , 0.716 , 0.679 , 0.676 ,
0.655 , 0.595 , 0.57 , 0.551 , 0.494 , 0.455 , 0.337 , 0.265 ,
0.196 , 0.083 ]])
#Vapor mole fraction array
Yexp = np.array([[0.032, 0.113, 0.157, 0.135, 0.156, 0.186, 0.212, 0.231, 0.248,
0.318, 0.314, 0.339, 0.37 , 0.362, 0.406, 0.468, 0.487, 0.505,
0.53 , 0.527, 0.552, 0.567, 0.586, 0.586, 0.591, 0.614, 0.626,
0.633, 0.661, 0.673, 0.733, 0.776, 0.815, 0.906],
[0.968, 0.887, 0.843, 0.865, 0.844, 0.814, 0.788, 0.769, 0.752,
0.682, 0.686, 0.661, 0.63 , 0.638, 0.594, 0.532, 0.513, 0.495,
0.47 , 0.473, 0.448, 0.433, 0.414, 0.414, 0.409, 0.386, 0.374,
0.367, 0.339, 0.327, 0.267, 0.224, 0.185, 0.094]])
datavle = (Xexp, Yexp, Texp, Pexp)
###Output
_____no_output_____
###Markdown
Fitting QMR mixing rule As a scalar is being fitted, SciPy recommends giving an interval where the minimum can be found; the function ```fit_kij``` handles this optimization.
###Code
from phasepy.fit import fit_kij
mixkij = mix.copy()
fit_kij((-0.15, -0.05), prsveos, mixkij, datavle)
###Output
_____no_output_____
###Markdown
Fitting NRTL interaction parameters As an array is being fitted, multidimensional optimization algorithms are used; the function ```fit_nrtl``` handles this optimization with several options available. If a fixed value of the aleatory factor is used, the initial guess has the following form:nrtl0 = np.array([A12, A21])If the aleatory factor needs to be optimized, it can be included by setting alpha_fixed to False; in this case, the initial guess has the following form:nrtl0 = np.array([A12, A21, alpha])Temperature-dependent parameters can be fitted by setting the option Tdep = True in ```fit_nrtl```; when this option is used the parameters are computed as:$$A12 = A12_0 + A12_1 T \\A21 = A21_0 + A21_1 T$$The initial guess passed to the fit function has the following form:nrtl0 = np.array([A12_0, A21_0, A12_1, A21_1, alpha])or, if alpha is kept fixed:nrtl0 = np.array([A12_0, A21_0, A12_1, A21_1])
###Code
from phasepy.fit import fit_nrtl
mixnrtl = mix.copy()
#Initial guess of A12, A21
nrtl0 = np.array([-80., 650.])
fit_nrtl(nrtl0, mixnrtl, datavle, alpha_fixed = True)
#optimized values
#[-84.77530335, 648.78439102]
#Initial guess of A12, A21
nrtl0 = np.array([-80., 650., 0.2])
fit_nrtl(nrtl0, mixnrtl, datavle, alpha_fixed = False)
#optimized values for A12, A21, alpha
# [-5.53112687e+01, 6.72701992e+02, 3.19740734e-01]
###Output
_____no_output_____
###Markdown
By default the Tsonopoulos virial correlation is used for the vapor phase; if desired, the ideal gas or Abbott correlations can be used instead.
###Code
#Initial guess of A12, A21
nrtl0 = np.array([-80., 650.])
fit_nrtl(nrtl0, mixnrtl, datavle, alpha_fixed = True, virialmodel = 'ideal_gas')
#Initial guess of A12, A21
nrtl0 = np.array([-80., 650.])
fit_nrtl(nrtl0, mixnrtl, datavle, alpha_fixed = True, virialmodel = 'Abbott')
###Output
_____no_output_____
###Markdown
Fitting Wilson interaction parameters As an array is being fitted, multidimensional optimization algorithms are used; the function ```fit_wilson``` handles this optimization.
###Code
from phasepy.fit import fit_wilson
mixwilson = mix.copy()
#Initial guess of A12, A21
wilson0 = np.array([-80., 650.])
fit_wilson(wilson0, mixwilson, datavle)
###Output
_____no_output_____
###Markdown
As when fitting the NRTL parameters, the Tsonopoulos virial correlation is used by default; the ideal gas or Abbott correlations can be used instead.
###Code
fit_wilson(wilson0, mixwilson, datavle, virialmodel = 'ideal_gas')
###Output
_____no_output_____
###Markdown
Fitting Redlich-Kister interaction parameters As an array is being fitted, multidimensional optimization algorithms are used; the function ```fit_rk``` handles this optimization. The Redlich-Kister expansion is programmed for n terms, and the fitting function optimizes as many terms as the length of the array passed as an initial guess.If rk0 is a scalar the model reduces to the Porter model; if it is an array of size 2 it reduces to the Margules model.Temperature-dependent parameters can also be fitted, in which case the initial guess is split into two arrays:``c, c1 = np.split(rk0, 2) ``Finally, the parameters are computed as:G = c + c1/T
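As a small illustration of that split (with arbitrary numbers):
```python
import numpy as np

rk0 = np.array([0.1, -0.2, 30.0, -15.0])  # illustrative values only
c, c1 = np.split(rk0, 2)                  # c = [0.1, -0.2], c1 = [30.0, -15.0]
G = c + c1 / 350.0                        # expansion parameters evaluated at T = 350 K
```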
###Code
from phasepy.fit import fit_rk
mixrk = mix.copy()
rk0 = np.array([0, 0])
fit_rk(rk0, mixrk, datavle, Tdep = False)
fit_rk(rk0, mixrk, datavle, Tdep = False, virialmodel = 'ideal_gas')
###Output
_____no_output_____
###Markdown
Fitting UNIQUAC interaction parameters As an array is being fitted, multidimensional optimization algorithms are used; the function ```fit_uniquac``` handles this optimization. If constant interaction energies are being considered, the initial guess has the following form:A0 = np.array([A12, A21])Temperature-dependent parameters can be fitted by setting the option Tdep = True in ```fit_uniquac```; when this option is used the interaction energies are computed as:$$A12 = A12_0 + A12_1 T \\A21 = A21_0 + A21_1 T$$The initial guess passed to the fit function has the following form:A0 = np.array([A12_0, A21_0, A12_1, A21_1])**note:** you need to provide the molecular volume and surface area (``ri`` and ``qi``) of the components for this method to work.
###Code
from phasepy.fit import fit_uniquac
mixuniquac = mix.copy()
# initial guesses for the interaction energies (in K)
A0 = np.array([100., 200])
fit_uniquac(A0, mixuniquac, datavle)
###Output
_____no_output_____
###Markdown
After the optimizations have been carried out, fitted data can be compared against experimental data.
###Code
from phasepy import virialgamma
from phasepy.equilibrium import bubbleTy
prkij = prsveos(mixkij)
virialnrtl = virialgamma(mixnrtl, actmodel = 'nrtl')
virialwilson = virialgamma(mixwilson, actmodel = 'wilson')
virialrk = virialgamma(mixrk, actmodel = 'rk')
virialuniquac = virialgamma(mixuniquac, actmodel='uniquac')
Ykij = np.zeros_like(Yexp)
Tkij = np.zeros_like(Pexp)
Ynrtl = np.zeros_like(Yexp)
Tnrtl = np.zeros_like(Pexp)
Ywilson = np.zeros_like(Yexp)
Twilson = np.zeros_like(Pexp)
Yrk = np.zeros_like(Yexp)
Trk = np.zeros_like(Pexp)
Yuniquac = np.zeros_like(Yexp)
Tuniquac = np.zeros_like(Pexp)
n = len(Pexp)
for i in range(n):
Ykij[:,i],Tkij[i] = bubbleTy(Yexp[:,i],Texp[i],Xexp[:,i],Pexp[i],prkij)
Ynrtl[:,i],Tnrtl[i] = bubbleTy(Yexp[:,i],Texp[i],Xexp[:,i],Pexp[i],virialnrtl)
Ywilson[:,i],Twilson[i] = bubbleTy(Yexp[:,i],Texp[i],Xexp[:,i],Pexp[i],virialwilson)
Yrk[:,i],Trk[i] = bubbleTy(Yexp[:,i],Texp[i],Xexp[:,i],Pexp[i],virialrk)
Yuniquac[:,i],Tuniquac[i] = bubbleTy(Yexp[:,i],Texp[i],Xexp[:,i],Pexp[i],virialuniquac)
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(12,8))
fig.subplots_adjust(hspace=0.3, wspace=0.3)
ax=fig.add_subplot(231)
ax.plot(Xexp[0], Texp,'.', Yexp[0], Texp,'.')
ax.plot(Xexp[0], Tkij, Ykij[0], Tkij)
ax.set_xlabel('x,y')
ax.set_ylabel('T/K')
ax.text(0.5, 370, 'QMR')
ax2 = fig.add_subplot(232)
ax2.plot(Xexp[0], Texp,'.', Yexp[0], Texp,'.')
ax2.plot(Xexp[0], Tnrtl, Ynrtl[0], Tnrtl)
ax2.set_xlabel('x,y')
ax2.set_ylabel('T/K')
ax2.text(0.5, 370, 'NRTL')
ax3 = fig.add_subplot(233)
ax3.plot(Xexp[0], Texp,'.', Yexp[0], Texp,'.')
ax3.plot(Xexp[0], Trk, Yrk[0], Trk)
ax3.set_xlabel('x,y')
ax3.set_ylabel('T/K')
ax3.text(0.5, 370, 'Redlich-Kister')
ax4 = fig.add_subplot(234)
ax4.plot(Xexp[0], Texp,'.', Yexp[0], Texp,'.')
ax4.plot(Xexp[0], Twilson, Ywilson[0], Twilson)
ax4.set_xlabel('x,y')
ax4.set_ylabel('T/K')
ax4.text(0.5, 370, 'Wilson')
ax5 = fig.add_subplot(235)
ax5.plot(Xexp[0], Texp,'.', Yexp[0], Texp,'.')
ax5.plot(Xexp[0], Tuniquac, Yuniquac[0], Tuniquac)
ax5.set_xlabel('x,y')
ax5.set_ylabel('T/K')
ax5.text(0.5, 370, 'UNIQUAC')
fig.show()
###Output
<ipython-input-15-7000fe8ab98a>:40: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
fig.show()
|
docs/ALPN.ipynb | ###Markdown
Recreate Pedigo's result: GM, no prior, barycenter init, matching just intra
###Code
options = {'maximize':True, 'maxiter':100, 'shuffle_input':True,'tol':1e-5,}
res = max([quadratic_assignment(dataset1_intra_pad, dataset2_intra, options=options)
for k in range(50)], key=lambda x: x.fun)
gen_result(res)
###Output
objective function value: 6227.852163812509
Matching Accuracy ignoring unknowns 0.88
###Markdown
GOAT no prior, barycenter init, matching just intra
###Code
options = {'maximize':True, 'maxiter':100, 'shuffle_input':True,'tol':1e-6,'reg':300}
res = max([quadratic_assignment_ot(dataset1_intra_pad, dataset2_intra, options=options)
for k in range(50)], key=lambda x: x.fun)
gen_result(res)
###Output
objective function value: 6198.336414141535
Matching Accuracy ignoring unknowns 0.95
###Markdown
Still no prior, but using inter-hemisphere info to make a "better" init
###Code
from ot import sinkhorn
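# Turn the inter-hemisphere similarity into a doubly stochastic initialization P0 via Sinkhorn normalization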
sim = np.zeros((n,n))
sim[:len(dataset1_intra), :] = dataset1_to_dataset2 + dataset2_to_dataset1.T
power = -1
reg = 100
lamb = reg / np.max(np.abs(sim))
ones = np.ones(n)
sim = sinkhorn(ones, ones, sim, power/lamb, stopInnerThr=1e-4,numItermax=1000)
from scipy.optimize import linear_sum_assignment
s = np.zeros((n,n))
s[:len(dataset1_intra), :] = dataset1_to_dataset2 + dataset2_to_dataset1.T
row, col = linear_sum_assignment(s, maximize = True)
###Output
_____no_output_____
###Markdown
GM
###Code
options = {'maximize':True, 'maxiter':100, 'shuffle_input':False,'tol':1e-6,'P0':sim}
res = max([quadratic_assignment(dataset1_intra_pad, dataset2_intra, options=options)
for k in range(50)], key=lambda x: x.fun)
gen_result(res)
###Output
objective function value: 6235.083554444008
Matching Accuracy ignoring unknowns 0.97
###Markdown
GOAT
###Code
options = {'maximize':True, 'maxiter':100, 'shuffle_input':True,'tol':1e-6,'reg':300, 'P0':sim}
res = max([quadratic_assignment_ot(dataset1_intra_pad, dataset2_intra, options=options)
for k in range(50)], key=lambda x: x.fun)
gen_result(res)
###Output
objective function value: 6207.4674251068145
Matching Accuracy ignoring unknowns 0.97
###Markdown
GM augmented with inter-connection similarity matrix
###Code
from graspologic.match import GraphMatch
gmp = GraphMatch(n_init =50, max_iter = 100, eps=1e-5)
a = np.linalg.norm(dataset1_intra_pad) * np.linalg.norm(dataset2_intra)
b = np.linalg.norm(s)
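# Rescale the similarity term so its norm is comparable to the product of the adjacency norms before adding it to the matching objective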
gmp = gmp.fit(dataset1_intra_pad, dataset2_intra,S=s * (a/b))
perm_inds = gmp.perm_inds_
print(f'objective function value: {np.sum(dataset1_intra_pad * dataset2_intra[perm_inds][:, perm_inds])}')
dataset2_intra_matched = dataset2_intra[perm_inds][:, perm_inds][: len(dataset1_ids)]
dataset2_meta_matched = dataset2_meta.iloc[perm_inds][: len(dataset1_ids)]
labels1 = dataset1_meta["label"]
dataset1_vmax = labels1.value_counts()[1:].max()
labels2 = dataset2_meta_matched["label"]
dataset2_vmax = labels2.value_counts()[1:].max()
vmax = max(dataset1_vmax, dataset2_vmax)
unique_labels = np.unique(list(labels1) + list(labels2))
conf_mat = confusion_matrix(labels1, labels2, labels=unique_labels, normalize=None)
conf_mat = pd.DataFrame(data=conf_mat, index=unique_labels, columns=unique_labels)
conf_mat = conf_mat.iloc[:-5, :-5] # hack to ignore anything "unclear"
on_diag = np.trace(conf_mat.values) / np.sum(conf_mat.values)
print(f"Matching Accuracy ignoring unknowns {on_diag:.2f}")
confusionplot(
labels1,
labels2,
ylabel=datasets[0],
xlabel=datasets[1],
title="Label confusion matrix",
annot=False,
vmax=vmax,
xticklabels=False,
yticklabels=False,
)
###Output
objective function value: 6162.901828036416
Matching Accuracy ignoring unknowns 0.97
|
exercises/ch02.ipynb | ###Markdown
Load data
###Code
import os
import numpy as np
import pandas as pd

HOUSING_PATH = os.path.join("../datasets", "housing")
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
housing = load_housing_data()
###Output
_____no_output_____
###Markdown
Stratified Sampling by income category
###Code
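# Bin median income into 5 categories so the stratified split preserves the income distribution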
housing["income_cat"] = pd.cut(housing["median_income"],
bins=[0., 1.5, 3.0, 4.5, 6., np.inf],
labels=[1, 2, 3, 4, 5])
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
for set_ in (strat_train_set, strat_test_set, housing):
set_.drop("income_cat", axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Feature TransformersSteps:1. Cat pipeline - one hot encode for cat vars 2. Num pipeline - impute na - add some new features - standardize for num vars3. Combine with ColumnTransformer
###Code
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
num_cols = strat_train_set.select_dtypes(include='number').columns.difference(['median_house_value']).sort_values()
cat_cols = strat_train_set.select_dtypes(include='object').columns
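# The numeric pipeline sees the columns in this sorted order, so record their positional indices for AttributesAdder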
num_ix_mapper = pd.Series(index=num_cols, data=range(len(num_cols)))
rooms_ix, households_ix, pop_ix, bedrooms_ix = num_ix_mapper[['total_rooms', 'households', 'population', 'total_bedrooms']]
class AttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room = True):
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y = None):
return self
def transform(self, X):
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, pop_ix] / X[:, households_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy='median')),
('attribs_adder', AttributesAdder(add_bedrooms_per_room=True)),
('std_scaler', StandardScaler()),
])
full_pipeline = ColumnTransformer([
("num", num_pipeline, num_cols),
("cat", OneHotEncoder(), cat_cols),
])
train_X = full_pipeline.fit_transform(strat_train_set)
train_y = strat_train_set['median_house_value']
test_X = full_pipeline.transform(strat_test_set)
test_y = strat_test_set['median_house_value']
###Output
_____no_output_____
###Markdown
Exercise 1. Try SVM with GridSearchCV
###Code
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
param_grid = [
{'kernel': ['linear'], "C": [0.5, 1, 2, 4]},
{'kernel': ['rbf'], "C": [0.5, 1, 2, 4], "gamma": [0.05, 0.1, 0.5, 1]}
]
grid_search = GridSearchCV(SVR(), param_grid, cv = 5, n_jobs=4,
scoring='neg_mean_squared_error',
return_train_score=True, verbose=2)
grid_search.fit(train_X, train_y)
np.sqrt(-grid_search.best_score_)
grid_search.best_params_
cvres = grid_search.cv_results_
for mean_score, params in sorted(zip(cvres['mean_test_score'], cvres['params']), key=lambda x: -x[0]):
print(np.sqrt(-mean_score), params)
###Output
98407.97874755673 {'C': 4, 'kernel': 'linear'}
107136.26398576611 {'C': 2, 'kernel': 'linear'}
112571.9845974018 {'C': 1, 'kernel': 'linear'}
115595.79530238075 {'C': 0.5, 'kernel': 'linear'}
117797.57469769163 {'C': 4, 'gamma': 0.1, 'kernel': 'rbf'}
117854.46605174112 {'C': 4, 'gamma': 0.05, 'kernel': 'rbf'}
118361.99343356273 {'C': 2, 'gamma': 0.1, 'kernel': 'rbf'}
118379.00983337601 {'C': 2, 'gamma': 0.05, 'kernel': 'rbf'}
118573.69774338815 {'C': 4, 'gamma': 0.5, 'kernel': 'rbf'}
118638.14057023903 {'C': 1, 'gamma': 0.1, 'kernel': 'rbf'}
118646.48325535003 {'C': 1, 'gamma': 0.05, 'kernel': 'rbf'}
118750.7191426293 {'C': 2, 'gamma': 0.5, 'kernel': 'rbf'}
118776.0249577243 {'C': 0.5, 'gamma': 0.1, 'kernel': 'rbf'}
118782.55031155302 {'C': 0.5, 'gamma': 0.05, 'kernel': 'rbf'}
118796.03701146161 {'C': 4, 'gamma': 1, 'kernel': 'rbf'}
118832.52522524683 {'C': 1, 'gamma': 0.5, 'kernel': 'rbf'}
118857.90522178201 {'C': 2, 'gamma': 1, 'kernel': 'rbf'}
118874.89705492764 {'C': 0.5, 'gamma': 0.5, 'kernel': 'rbf'}
118886.58611084612 {'C': 1, 'gamma': 1, 'kernel': 'rbf'}
118903.05475528448 {'C': 0.5, 'gamma': 1, 'kernel': 'rbf'}
###Markdown
Exercise 2. Try GBM with RandomizedSearchCV
###Code
from scipy import stats
from sklearn.ensemble import GradientBoostingRegressor
param_distr = {
'loss': ['ls', 'huber'],
'learning_rate': stats.beta(a=2, b=3),
'n_estimators': stats.poisson(mu=400)
}
gbr = GradientBoostingRegressor()
rnd_search = RandomizedSearchCV(gbr, param_distr, n_jobs=4, cv = 5,
scoring = 'neg_mean_squared_error')
rnd_search.fit(train_X, train_y)
np.sqrt(-rnd_search.best_score_)
rnd_search.best_estimator_
rndres = rnd_search.cv_results_
for mean_score, params in sorted(zip(rndres['mean_test_score'], rndres['params']), key=lambda x: -x[0]):
print(np.sqrt(-mean_score), params)
feature_importances=rnd_search.best_estimator_.feature_importances_
extra_attribs = ["rooms_per_hhold", "pop_per_hhold", "bedrooms_per_room"]
cat_encoder = full_pipeline.named_transformers_["cat"]
cat_one_hot_attribs = list(cat_encoder.categories_[0])
attributes = num_cols.tolist() + extra_attribs + cat_one_hot_attribs
sorted(zip(feature_importances, attributes), reverse=True)
###Output
_____no_output_____
###Markdown
Exercise 3. Add a transformer to select only the most important attributes
###Code
class TopKAttributesPicker(BaseEstimator, TransformerMixin):
def __init__(self, k):
self.k = k
def fit(self, X, y):
gbr = GradientBoostingRegressor(loss='huber', n_estimators=500, learning_rate=0.2)
gbr.fit(X, y)
self.top_k_ix = np.argsort(-gbr.feature_importances_)[:self.k]
return self
def transform(self, X):
return X[:, self.top_k_ix]
topk_picker = TopKAttributesPicker(k=5)
tmp_topk_X = topk_picker.fit_transform(train_X[:100,:], train_y[:100])
tmp_topk_X.shape
###Output
_____no_output_____
###Markdown
Exercise 4. Create a single pipeline for both data prep plus prediction
###Code
train_X.shape
param_grid = {
"topk_picker__k": [5, 9, 13],
"gbr__learning_rate": [0.7, 0.5, 0.3],
"gbr__n_estimators": [300, 600, 900],
}
e2e_pipeline = Pipeline([
('data_preper', full_pipeline),
('topk_picker', TopKAttributesPicker(k=5)),
('gbr', GradientBoostingRegressor(loss='huber', random_state=42)),
])
cv_reg = GridSearchCV(e2e_pipeline, param_grid = param_grid, cv = 4, n_jobs=4,
scoring='neg_mean_squared_error', refit=True,
return_train_score=True)
cv_reg.fit(strat_train_set.drop('median_house_value', axis=1), strat_train_set['median_house_value'])
np.sqrt(-cv_reg.best_score_)
cv_reg.best_params_
###Output
_____no_output_____
###Markdown
Predict on test set
###Code
test_y_hat = cv_reg.predict(strat_test_set.drop('median_house_value', axis=1))
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
import arviz as az  # used below for the summary table and KDE plot of the prediction errors
test_rmse = np.sqrt(mean_squared_error(strat_test_set['median_house_value'], test_y_hat))
pred_diff = test_y_hat - strat_test_set['median_house_value'].to_numpy()
test_rmse
az.summary(pred_diff, kind="stats")
az.plot_kde(pred_diff)
pd.Series(pred_diff).describe()
strat_test_set.iloc[np.argmax(pred_diff), :]
test_y_hat[np.argmax(pred_diff)]
strat_test_set.iloc[np.argmin(pred_diff), :]
test_y_hat[np.argmin(pred_diff)]
plt.figure(figsize=(5,5))
plt.scatter(strat_test_set['median_house_value'], test_y_hat, alpha=0.1)
plt.plot(np.sort(test_y_hat)[:-10], np.sort(test_y_hat)[:-10], 'k--')
plt.xlabel("predicted house value")
plt.ylabel("actual house value")
pass
###Output
_____no_output_____ |
Strategies (Visualization)/Bear Spread.ipynb | ###Markdown
Put Payoff
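The helper below returns the long put payoff at expiry, $\max(K - S_T, 0) - \text{premium}$, where $K$ is the strike; the short put payoff used later is simply this value multiplied by $-1$.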
###Code
import numpy as np
import matplotlib.pyplot as plt

def put_payoff(sT, strike_price, premium):
return np.where(sT < strike_price, strike_price - sT, 0) - premium
# Assumed Sport Price
s0 = 160
# Long Put
strike_price_long_put =165
premium_long_put = 9.7
# Short Put
strike_price_short_put = 155
premium_short_put = 5.4
# Range of put option at expiry
sT = np.arange(100,220,1)
###Output
_____no_output_____
###Markdown
Long Put Payoff
###Code
long_put_payoff = put_payoff(sT, strike_price_long_put, premium_long_put)
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT, long_put_payoff, color ='g')
ax.set_title('Long 165 Strike Put')
plt.xlabel('Stock Price (sT)')
plt.ylabel('Profit & Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Short Put Payoff
###Code
short_put_payoff = put_payoff(sT, strike_price_short_put, premium_short_put) * -1.0
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT, short_put_payoff, color ='r')
ax.set_title('Short 155 Strike Put')
plt.xlabel('Stock Price (sT)')
plt.ylabel('Profit & Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Bear Put Payoff
###Code
Bear_Put_payoff = long_put_payoff + short_put_payoff
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT, Bear_Put_payoff, color ='b')
ax.set_title('Bear Put Spread Payoff')
plt.xlabel('Stock Price (sT)')
plt.ylabel('Profit & Loss')
plt.show()
profit = max (Bear_Put_payoff)
loss = min (Bear_Put_payoff)
print ("Max Profit %.2f" %profit)
print ("Max Loss %.2f" %loss)
fig, ax = plt.subplots(figsize=(10,5))
ax.spines['bottom'].set_position('zero')
ax.plot(sT, Bear_Put_payoff, color ='b', label = 'Bear Put Spread')
ax.plot(sT, long_put_payoff,'--', color ='g', label ='Long Put')
ax.plot(sT, short_put_payoff,'--', color ='r', label ='Short Put')
plt.legend()
plt.xlabel('Stock Price (sT)')
plt.ylabel('Profit & Loss')
plt.show()
###Output
_____no_output_____ |
Classification Problems/Drug Type Classification/drug-classification-data-analysis-modelling.ipynb | ###Markdown
Drug ClassificationIn this notebook we will be solving the problem of classifying the type of drug from the $5$ drug types given (i.e.):* drugX* drugY* drugC* drugA* drugBThis is a *multiclass classification* problem as we have five classes in the target to predict.**Data Attributes*** Age* Sex* Blood Pressure Levels* Cholesterol Levels* Na to Potassium Ratio**Target Feature*** Drug TypeRoughly, we will be following the below structure: * Load the data.* Display useful statistics.* Build generic functions to detect nulls and missing values.* Handle missing values.* Make Visualizations to understand data better.* Build Models Table of Contents* [Import Libraries](lib)* [Load Data](load_data)* [Summary Statistics](summary_stats)* [Identify Missing or Null Values](missing_values)* [EDA & Data Visualization](eda_data_vis)* [Encoding Categorical Features](encoding)* [Developing Classification Models](model)* [Evaluating Classification Models](evaluate) Import Libraries
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
import missingno as msno
import seaborn as sns
import pandas as pd
pd.options.mode.chained_assignment = None # default='warn'
from IPython.core.display import HTML, display
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Load Data
###Code
drugs_df = pd.read_csv('drug200.csv')
print(drugs_df.head(10))
###Output
Age Sex BP Cholesterol Na_to_K Drug
0 23 F HIGH HIGH 25.355 DrugY
1 47 M LOW HIGH 13.093 drugC
2 47 M LOW HIGH 10.114 drugC
3 28 F NORMAL HIGH 7.798 drugX
4 61 F LOW HIGH 18.043 DrugY
5 22 F NORMAL HIGH 8.607 drugX
6 49 F NORMAL HIGH 16.275 DrugY
7 41 M LOW HIGH 11.037 drugC
8 60 M NORMAL HIGH 15.171 DrugY
9 43 M LOW NORMAL 19.368 DrugY
###Markdown
Display summary statistics
###Code
# Display column names
print(drugs_df.columns)
print(drugs_df.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Age 200 non-null int64
1 Sex 200 non-null object
2 BP 200 non-null object
3 Cholesterol 200 non-null object
4 Na_to_K 200 non-null float64
5 Drug 200 non-null object
dtypes: float64(1), int64(1), object(4)
memory usage: 9.5+ KB
None
###Markdown
This dataset has more categorical features than numerical. So we may have to encode the categorical features.
###Code
print(drugs_df.describe())
###Output
Age Na_to_K
count 200.000000 200.000000
mean 44.315000 16.084485
std 16.544315 7.223956
min 15.000000 6.269000
25% 31.000000 10.445500
50% 45.000000 13.936500
75% 58.000000 19.380000
max 74.000000 38.247000
###Markdown
The maximum or the oldest age give is $74$ and the youngest being $15$ Investigating Missing Values
###Code
# Generic function to calculate missing values, zero values
def calcMissingValues(df: pd.DataFrame):
'''
    Function to calculate zero, missing and empty values in the dataframe
'''
# Calculate zero values
zero_values = (df == 0.0).astype(int).sum(axis = 0)
# Calculate missing values
missing_vals = df.isnull().sum()
missing_val_percent = round((missing_vals / len(df)) * 100.0 , 2)
df_missing_stat = pd.concat([zero_values , missing_vals , missing_val_percent] , axis = 1)
df_missing_stat = df_missing_stat.rename(columns = {0: 'zero_values' , 1: 'missing_vals' , 2: '%_missing_vals'})
df_missing_stat['data_types'] = df.dtypes
print(df_missing_stat)
calcMissingValues(drugs_df)
###Output
zero_values missing_vals %_missing_vals data_types
Age 0 0 0.0 int64
Sex 0 0 0.0 object
BP 0 0 0.0 object
Cholesterol 0 0 0.0 object
Na_to_K 0 0 0.0 float64
Drug 0 0 0.0 object
###Markdown
As seen, the dataset is clean without any missing values to impute. EDA & Data Visualization Visualize **Age** versus **Drug Type*** Stripplot* Boxplot **Stripplot**
###Code
# Visualize age and drug type using strip plot
plt.figure(figsize = (10 , 6))
# Plotting a stripplot to see the distribution of a numerical variable across the categories
sns.stripplot(x = 'Drug' , y = 'Age' , data = drugs_df)
plt.title('Distribution of Age & Drug')
plt.show()
###Output
_____no_output_____
###Markdown
The stripplot is used to visualize multiple data distributions; from the plot it looks like *DrugY* and *drugX* are more commonly prescribed or used by the populace. **Box Plot**
###Code
# Visualize age and drug type using Box plot
props = dict(boxes = "orange", whiskers="black", medians= "green", caps ="Gray")
drugs_df.boxplot(by = 'Drug' , column = ['Age'] , figsize = (10 , 8) , color = props )
plt.title('Distribution of Age & Drug')
plt.suptitle("")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
This confirms our assumption that *DrugY* and *drugX* are the most commonly used. Visualize target variable **Drug** We now visualize the distribution of the target variable to see if there are any imbalances in class distribution, as this is a multiclass classification problem and any imbalance might affect the outcome.There will be two plots:* Bar plot* Pie Chart **Bar Plot**
###Code
# Get unique class values
print(drugs_df['Drug'].unique())
# Plot a bar chart of the various classes
drugs_df['Drug'].value_counts().plot(kind = 'bar' , x = 'Drug Type' , y = 'Drug Type Count' , color = 'yellow' , figsize = (10 , 8))
plt.title('Drug Type Distribution')
plt.show()
###Output
['DrugY' 'drugC' 'drugX' 'drugA' 'drugB']
###Markdown
**Pie Chart**
###Code
print(drugs_df.groupby(['Drug']).size())
drug_type = drugs_df.groupby(['Drug']).size()
sizes = list(drugs_df.groupby(['Drug']).size())
pie_chart_drug = {'labels': list(drug_type.index.values) , 'vals': sizes}
# print(drug_type.index.values)
# print(sizes)
colors = ['#b79c4f', '#4fb772', '#eb7d26' , '#77e8c2' , '#99eff2']
#print(pie_chart_drug)
# colors = ['#ff9999','#1f70f0','#99ff99']
pie_explode = [0 , 0 , 0.3 , 0 , 0]
plt.figure(figsize = (10 , 8))
plt.pie(pie_chart_drug['vals'] , labels = pie_chart_drug['labels'] , explode = pie_explode , colors = colors , shadow = True, startangle = 90 , textprops={'fontsize': 14} , autopct = '%.1f%%')
plt.ylabel('')
plt.title('Drug Type distribution in the data' , fontsize = 20)
plt.tight_layout()
plt.show()
###Output
Drug
DrugY 91
drugA 23
drugB 16
drugC 16
drugX 54
dtype: int64
###Markdown
From the two plots we see the distribution of *drugA*, *drugB* and *drugC* is relatively lower. This may affect the prediction, and based on the accuracy metrics we can use **SMOTE** to oversample the classes having lower distributions (a short SMOTE sketch is included at the end of this notebook). Visualize **Gender** and **Drug**
###Code
gender_drug = drugs_df.groupby(['Sex' , 'Drug']).size().reset_index(name = 'value_count')
print(gender_drug)
gender_drug_pivot = pd.pivot_table(
gender_drug,
values = 'value_count',
index = 'Drug',
columns = 'Sex'
)
# aggfunc=np.mean
print(gender_drug_pivot)
gender_drug_pivot.plot(kind = 'bar' , figsize = (10 , 8) , fontsize = 12 , rot = 360)
plt.xlabel('Drug Type', fontsize = 14)
plt.ylabel('Value' , fontsize = 14)
plt.title('Gender vs Drug Type', fontsize = 16)
plt.tight_layout()
plt.show()
###Output
Sex Drug value_count
0 F DrugY 47
1 F drugA 9
2 F drugB 6
3 F drugC 7
4 F drugX 27
5 M DrugY 44
6 M drugA 14
7 M drugB 10
8 M drugC 9
9 M drugX 27
Sex F M
Drug
DrugY 47 44
drugA 9 14
drugB 6 10
drugC 7 9
drugX 27 27
###Markdown
Nothing substantial can be interpreted from plotting Gender vs Drug. There is no bias towards either gender for any specific type of Drug. Visualize **BP** and **Drug**Plotting to see if there is any relation between BP and Drug type. The chart will be a grouped bar chart.
###Code
print(drugs_df.groupby(['Drug']).mean())
print(drugs_df['BP'].unique())
print(drugs_df.groupby(['BP']).mean())
bp_drug = drugs_df.groupby(['BP' , 'Drug']).size().reset_index(name = 'value_count')
print(bp_drug)
gender_drug_pivot = pd.pivot_table(
gender_drug,
values = 'value_count',
index = 'Drug',
columns = 'Sex'
)
bp_drug_pivot = pd.pivot_table(bp_drug , values = 'value_count' , columns = 'BP' , index = 'Drug')
bp_drug_pivot.plot(kind = 'bar' , figsize = (10 , 8) , fontsize = 12 , rot = 360)
plt.xlabel('Drug Type', fontsize = 14)
plt.ylabel('Value' , fontsize = 14)
plt.title('BP vs Drug Type', fontsize = 16)
plt.tight_layout()
plt.show()
###Output
Age Na_to_K
Drug
DrugY 43.747253 22.374780
drugA 35.869565 10.918783
drugB 62.500000 11.524375
drugC 42.500000 10.633750
drugX 44.018519 10.650556
['HIGH' 'LOW' 'NORMAL']
Age Na_to_K
BP
HIGH 42.233766 17.040623
LOW 47.031250 16.539797
NORMAL 44.084746 14.342746
BP Drug value_count
0 HIGH DrugY 38
1 HIGH drugA 23
2 HIGH drugB 16
3 LOW DrugY 30
4 LOW drugC 16
5 LOW drugX 18
6 NORMAL DrugY 23
7 NORMAL drugX 36
###Markdown
A majority of normal BP take DrugX and those with a higher BP take predominantly DrugY with Drug A and Drug B being close contenders. Visualize **Na_to_K** and **Drug**
###Code
print(drugs_df[['Na_to_K' , 'Drug']])
drug_na_k = drugs_df.groupby(['Drug'])['Na_to_K'].mean()
print(drug_na_k)
drug_na_k.plot(kind = 'bar' , color = 'red' , alpha = 0.5 , rot = 360 , fontsize = 14 , figsize = (10 , 8))
plt.xlabel('Drug Type' , fontsize = 15)
plt.ylabel('Na_to_K Avg' , fontsize = 15)
plt.title('Distirbution of Drug type under Na_to_K' , fontsize = 15)
plt.tight_layout()
plt.show()
###Output
Na_to_K Drug
0 25.355 DrugY
1 13.093 drugC
2 10.114 drugC
3 7.798 drugX
4 18.043 DrugY
.. ... ...
195 11.567 drugC
196 12.006 drugC
197 9.894 drugX
198 14.020 drugX
199 11.349 drugX
[200 rows x 2 columns]
Drug
DrugY 22.374780
drugA 10.918783
drugB 11.524375
drugC 10.633750
drugX 10.650556
Name: Na_to_K, dtype: float64
###Markdown
The bar chart tells us that if the average Na_to_K value exceeds 15 then DrugY is preferred, so this feature also plays an important role in classification. We can view the joint distribution of the variables in a stripplot.
###Code
# Visualize Na_to_K and drug type using strip plot
plt.figure(figsize = (10 , 6))
# Plotting a stripplot to see the distribution of a numerical variable across the categories
sns.stripplot(x = 'Drug' , y = 'Na_to_K' , data = drugs_df)
plt.xlabel('Drug Type' , fontsize = 12)
plt.ylabel('Na_to_K Avg' , fontsize = 12)
plt.title('Distribution of Na_to_K & Drug')
plt.show()
###Output
_____no_output_____
###Markdown
Encoding Categorical Features
###Code
# Get all non-numerical columns
print(drugs_df.select_dtypes(exclude=["number","bool_"]))
###Output
Sex BP Cholesterol Drug
0 F HIGH HIGH DrugY
1 M LOW HIGH drugC
2 M LOW HIGH drugC
3 F NORMAL HIGH drugX
4 F LOW HIGH DrugY
.. .. ... ... ...
195 F LOW HIGH drugC
196 M LOW HIGH drugC
197 M NORMAL HIGH drugX
198 M NORMAL NORMAL drugX
199 F LOW NORMAL drugX
[200 rows x 4 columns]
###Markdown
Label EncodingWe can use label encoding for *Sex* as there is no problem of precedence or hierarchy.The target feature need not be encoded, as scikit-learn encodes it by default if the target values are strings.The following columns will be label encoded:* Sex
###Code
from sklearn.preprocessing import LabelEncoder
labelEncoder = LabelEncoder()
# Make a copy of the dataset
drugs_train_df = drugs_df.copy()
#drugs_temp = drugs_train_df.copy()
drugs_train_df['Sex'] = labelEncoder.fit_transform(drugs_train_df['Sex'])
#drugs_temp['BP'].sort_values() = labelEncoder.fit_transform(drugs_temp['BP'])
#print(drugs_temp.loc[0 : 5, 'BP'].sort_values(ascending = True))
print(drugs_train_df.loc[0 : 5, 'Sex'])
print(drugs_df.loc[0 : 5, 'Sex'])
###Output
0 0
1 1
2 1
3 0
4 0
5 0
Name: Sex, dtype: int32
0 F
1 M
2 M
3 F
4 F
5 F
Name: Sex, dtype: object
###Markdown
Ordinal EncodingColumns *BP* and *Cholesterol* are ordinal in nature as they have an order of sorts (i.e. LOW, NORMAL and HIGH), so we can use the pandas map function to ordinally encode these variables.The following columns will be ordinally encoded:* BP* Cholesterol
###Code
# Get the unique values
print('BP: ', drugs_train_df['BP'].unique())
print('Cholesterol: ', drugs_train_df['Cholesterol'].unique())
# Define a map function
ord_dict = {'LOW': 1 , 'NORMAL' : 2, 'HIGH' : 3}
#chol_dict = {}
drugs_train_df['BP'] = drugs_train_df['BP'].map(ord_dict)
drugs_train_df['Cholesterol'] = drugs_train_df['Cholesterol'].map(ord_dict)
print('BP: ', drugs_train_df['BP'].unique())
print('Cholesterol: ', drugs_train_df['Cholesterol'].unique())
print(drugs_train_df)
###Output
Age Sex BP Cholesterol Na_to_K Drug
0 23 0 3 3 25.355 DrugY
1 47 1 1 3 13.093 drugC
2 47 1 1 3 10.114 drugC
3 28 0 2 3 7.798 drugX
4 61 0 1 3 18.043 DrugY
.. ... ... .. ... ... ...
195 56 0 1 3 11.567 drugC
196 16 1 1 3 12.006 drugC
197 52 1 2 3 9.894 drugX
198 23 1 2 2 14.020 drugX
199 40 0 1 2 11.349 drugX
[200 rows x 6 columns]
###Markdown
Now the data does not lose its meaning since we have done ordinal encoding of the key feature columns. Building Classification models **Split into Trian and Test data**
###Code
# Number of records
print(drugs_train_df.shape)
def splitDataset(x_df: pd.DataFrame , y_df: pd.DataFrame)-> (pd.DataFrame, pd.DataFrame, pd.DataFrame, pd.DataFrame):
'''
Function to split a dataset into Train and test sets
'''
ratio = 0.8
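    # Each row goes to the training set with probability `ratio`, so the split is approximately (not exactly) 80/20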
mask = np.random.rand(len(x_df)) <= ratio
x_train = x_df[mask]
x_test = x_df[~mask]
y_train = y_df[mask]
y_test = y_df[~mask]
return x_train, y_train, x_test, y_test
np.random.seed(123)
y_df = drugs_train_df['Drug']
x_df = drugs_train_df.drop(['Drug'] , axis = 1)
x_train, y_train, x_test, y_test = splitDataset(x_df , y_df)
print('X Train Shape: ', x_train.shape)
print('X Test Shape: ', x_test.shape)
print('Y Train Shape: ', y_train.shape)
print('Y Test Shape: ', y_test.shape)
###Output
X Train Shape: (166, 5)
X Test Shape: (34, 5)
Y Train Shape: (166,)
Y Test Shape: (34,)
###Markdown
Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
# Define the model
logistic_regression = LogisticRegression(solver='liblinear')
logistic_regression.fit(x_train , y_train)
y_pred = logistic_regression.predict(x_test)
# Get scores
train_score = logistic_regression.score(x_train , y_train)
test_score = logistic_regression.score(x_test , y_test)
print('Train score: {:.2f}'.format(train_score))
print('Test score: {:.2f}'.format(test_score))
###Output
Train score: 0.86
Test score: 0.82
###Markdown
Evaluating Classification Models
###Code
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score , precision_score , recall_score , f1_score
from sklearn.metrics import classification_report
###Output
_____no_output_____
###Markdown
**Confusion Matrix**
###Code
conf_matrix = confusion_matrix(y_test , y_pred)
# print(conf_matrix)
plt.figure(figsize = (10, 8))
sns.heatmap(conf_matrix, annot = True, fmt = ".3f", linewidths =.5, square = True, cmap = 'Blues_r')
plt.ylabel('Actual label' , fontsize = 12)
plt.xlabel('Predicted label' , fontsize = 12)
plt.title('Confusion Matrix' , fontsize = 15)
plt.show()
###Output
_____no_output_____
###Markdown
**Precision, Recall and F1-Score**
###Code
# Classification Report
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
DrugY 0.89 1.00 0.94 17
drugA 1.00 1.00 1.00 1
drugB 1.00 0.60 0.75 5
drugC 1.00 0.33 0.50 3
drugX 0.60 0.75 0.67 8
accuracy 0.82 34
macro avg 0.90 0.74 0.77 34
weighted avg 0.85 0.82 0.81 34
###Markdown
The **Recall** scores for DrugY and drugA are high, which indicates the model captures most of the actual positives for those classes; recall for drugB and drugC is noticeably lower, consistent with their smaller share of the data. Recall tells us, out of all the actual positive cases, how many were predicted correctly:$recall = \frac{TP}{TP + FN} $Precision tells us, out of all the predicted positives, how many are actually positive, and the precision scores look good:$precision = \frac{TP}{TP + FP}$ **Classification Error or Misclassification Rate**This tells overall how often the classification is incorrect.$accuracy = \frac{TP + TN}{TP + TN + FP + FN}$$classification{\_}error = \frac{FP + FN}{TP + TN + FP + FN}$$classification{\_}error = 1 - accuracy$
###Code
# Get accuracy score
acc = accuracy_score(y_test , y_pred)
print('Accuracy: {:.2f}'.format(acc))
class_err = 1 - acc
print('Misclassification rate: {:.2f}'.format(class_err))
###Output
Accuracy: 0.82
Misclassification rate: 0.18
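###Markdown
A possible next step: the class-distribution plots and the lower recall for drugB and drugC suggest oversampling the minority classes with **SMOTE**, as mentioned earlier. A minimal sketch, assuming the `imbalanced-learn` package is installed (apply it only to the encoded training split, before fitting the model):
```python
from imblearn.over_sampling import SMOTE

x_train_res, y_train_res = SMOTE(random_state=42).fit_resample(x_train, y_train)
```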
|
.ipynb_checkpoints/Taller2_RegresionLogistica_Vacio-checkpoint.ipynb | ###Markdown
Logistic Regression from a Neural Network perspectiveIn this exercise you will build a logistic regression classifier. This notebook will guide you through doing it from a neural-network perspective, building an intuition for what machine learning and deep learning are.**Instructions:**- Do not use for/while loops in your code, unless you are explicitly asked to do so.**After this workshop you will be able to:**- Build the general architecture of a learning algorithm, including: - Initializing parameters - Computing the cost function and its gradient - Using an optimization algorithm (gradient descent, GD) - Putting all three functions together in a main model, in the right order.Let's get to work!! 1 - Packages First, we import the packages we will need throughout this workshop. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [h5py](http://www.h5py.org) is a package to interact with a dataset stored in an H5 file.- [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used to test the model with your own images at the end of the workshop
###Code
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
###Output
_____no_output_____
###Markdown
2 - Problem statement **Statement**: The following dataset is available ("data.h5") with this information: - a training set of m_train images labeled as cat (y=1) or non-cat (y=0) - a test set of m_test images labeled as cat or non-cat - each image has shape (num_px, num_px, 3) where 3 is for the RGB channels (note that each image is square: height = num_px and width = num_px).You must build a simple image-recognition algorithm that can correctly classify the images as cat or non-cat.First, let's examine the data. Load the file with the following code.
###Code
# Load the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
clases=["no-gato", "gato"]
###Output
_____no_output_____
###Markdown
Comprobamos las dimensiones de una observación
###Code
train_set_x_orig[0].shape
###Output
_____no_output_____
###Markdown
Añadimos "_orig" al final de los datos de entrenamiento y prueba porque vamos a pre-procesarlos. Luego de esto, vamos a obtener un train_set_x y un test_set_x (nótese que las etiquetas de train_set_y y test_set_y no necesitan ningún pre-procesamiento).Cada observación (línea) del train_set_x_orig y del test_set_x_orig es un arreglo representando una imagen. Se puede visualizar cada observación mediante el siguiente código. Puede cambiar el valor del `indice` para visualizar imagenes distintas.
###Code
# Ejemplo de una imagen
indice = 0
plt.imshow(train_set_x_orig[indice])
print ("La imagen #" + str(indice) + ", es un '" + str(clases[np.squeeze(train_set_y[:, indice])]) + "'" )
###Output
La imagen #0, es un 'no-gato'
###Markdown
Muchos fallos/bugs del código en deep learning ocurren por tener dimensiones de la matriz/vector que no encajan. Si puede mantener las dimensiones correctas podrá evitar tener que dedicar tiempo a corregir estos fallos. **Ejercicio:** Encuentre los valores para: - m_train (número de ejemplos de entrenamiento) - m_test (número de ejemplos de prueba) - num_px (= altura = ancho de la imagen)Recuerde que `train_set_x_orig` es un arreglo numpy de dimensiones (m_train, num_px, num_px, 3). De esta manera, puede acceder a `m_train` escribiendo `train_set_x_orig.shape[0]`.
###Code
### EMPIEZE EL CÓDIGO AQUÍ ### (≈ 3 líneas de código)
m_train =
m_test =
num_px =
### TERMINE EL CÓDIGO AQUÍ ###
print ("Número de ejemplos de entrenamiento: m_train = " + str(m_train))
print ("Número de ejemplos de prueba: m_test = " + str(m_test))
print ("Altura/Ancho de cada imagen: num_px = " + str(num_px))
print ("Cada imagen es de tamaño: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("Dimensión del train_set_x: " + str(train_set_x_orig.shape))
print ("Dimensión del train_set_y: " + str(train_set_y.shape))
print ("Dimensión del test_set_x: " + str(test_set_x_orig.shape))
print ("Dimensión del test_set_y: " + str(test_set_y.shape))
###Output
_____no_output_____
###Markdown
**Salida esperada para m_train, m_test y num_px**: **m_train** 209 **m_test** 50 **num_px** 64 Es recomendable ahora re-dimensionar las imagenes de tamaño (num_px, num_px, 3) en un arreglo numpy de dimensión (num_px $*$ num_px $*$ 3, 1). Luego, los conjuntos de entrenamiento y prueba serán un arreglo numpy donde cada columna representa una imagen (aplanada). Deberían haber m_train y m_test columnas.**Ejercicio:** Re-dimensione los conjuntos de datos de entrenamiento y prueba para que las imagenes de tamaño (num_px, num_px, 3) sean aplanadas en vectores de dimensión (num\_px $*$ num\_px $*$ 3, 1).Ayuda. Cuando se quiere aplanar una matriz X de dimensión (a,b,c,d) en una matriz X_flatten de dimensión (b$*$c$*$d, a) se puede usar: ```pythonX_flatten = X.reshape(X.shape[0], -1).T X.T es la transpuesta de X```
###Code
# Re-dimensione los ejemplos de entrenamiento y prueba
### EMPIEZE EL CÓDIGO AQUÍ ### (≈ 2 líneas de código)
train_set_x_flatten =
test_set_x_flatten =
### TERMINE EL CÓDIGO AQUÍ ###
print ("Dimensión del train_set_x_flatten: " + str(train_set_x_flatten.shape))
print ("Dimensión del train_set_y: " + str(train_set_y.shape))
print ("Dimensión del test_set_x_flatten: " + str(test_set_x_flatten.shape))
print ("Dimensión del test_set_y: " + str(test_set_y.shape))
print ("Chequeo luego del re-dimensionamiento: " + str(train_set_x_flatten[0:5,0]))
###Output
_____no_output_____
###Markdown
**Salida esperada**: **Dimensión train_set_x_flatten** (12288, 209) **Dimensión train_set_y** (1, 209) **Dimensión test_set_x_flatten** (12288, 50) **Dimensión test_set_y** (1, 50) **Chequeo tras el re-dimensionamiento** [17 31 56 22 33] Las imagenes a color son comúnmente representadas mediante los tres canales rojo, verde y azul (RGB) para cada pixel, de tal manera que a cada pixel le corresponde un vector de tres números en el rango de 0 a 255.Un paso muy común en el pre-procesamiento de datos en machine learning es el de estandarizar el conjunto de datos multivariado, es decir, restando la media de cada vector a cada ejemplo, y dividiendo por la desviación estandar del vector. En este caso del tratamiento de imagenes, es más simple y conveniente tan solo dividir todas las filas del conjunto de datos por 255 (el valor máximo de un canal RGB). Normalizemos los datos.
###Code
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
###Output
_____no_output_____
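###Markdown
 The cell above uses the simpler division by 255. If one wanted the per-feature standardization mentioned earlier, a minimal sketch could look like the following; it is purely illustrative and the rest of the notebook keeps the /255 version.
###Code
# Sketch only: per-feature standardization using training statistics (not used below)
mu = train_set_x_flatten.mean(axis=1, keepdims=True)
sigma = train_set_x_flatten.std(axis=1, keepdims=True) + 1e-8   # avoid division by zero
train_set_x_std = (train_set_x_flatten - mu) / sigma
test_set_x_std = (test_set_x_flatten - mu) / sigma              # reuse the training mean/std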
###Markdown
**Recapitulemos:**Pasos comunes para el pre-procesamiento de un nuevo conjunto de datos:- Examinar las dimensiones del problema (m_train, m_test, num_px, ...)- Re-dimensionar los conjuntos de datos para que cada ejemplo sea un vector de tamaño (num_px \* num_px \* 3, 1)- Normalizar o estandarizar los datos 3 - Arquitectura general de un algoritmo de aprendizaje Llegó el momento de diseñar un algoritmo simple para distinguir imagenes de gatos y de aquello que no son gatos.Debe constuir un modelos de regresión logística, desde una perspectiva de Redes Neuronales.**Formulación del algoritmo**:Para un ejemplo $x^{(i)}$:$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$$$\hat{y}^{(i)} = a^{(i)} = sigmoide(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$El coste se calcula sumando sobre todos los ejemplos de entrenamiento:$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$**Paso a paso**:En este ejercicio, se deben dar los siguientes pasos: - Inicializar los parámetros del modelo - Aprender los parámetros del modelo a partir de la minimización del coste - Utilizar los parámetros aprendidos para hacer predicciones (sobr el conjunto de prueba) - Analizar los resultados y concluir 4 - Construyendo las partes del algoritmo Los pasos principales para construir una red neuronal son: 1. Definir la estructura del modelo (tal como el número de patrones en el input) 2. Inicializar los parámetros del modelo3. Bucle: - Calcular la pérdida actual (propagación hacia delante) - Calcular el gradiente actual (retro-propagación) - Actualizar los parámetros (descenso en la dirección del gradiente)Se suele construir 1-3 de manera separada e integrarlos en una función que llamaremos `model()`. 4.1 - Funciones de ayuda**Ejercicio**: Utilizando su código del Taller_1 "IntroPython_numpy", implemente `sigmoid()`. Como se puede ver en la figura arriba, se debe computar $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ para predecir. Para ello puede utilizar np.exp().
###Code
# FUNCIÓN A CALIFICAR: sigmoid
def sigmoid(z):
"""
Calcule el sigmoide de z
Input:
z: Un escalar o arreglo numpy de cualquier tamaño
Output:
s: sigmoid(z)
"""
### EMPIEZE EL CÓDIGO AQUÍ ### (≈ 1 línea de código)
s =
### TERMINE EL CÓDIGO AQUÍ ###
return s
print ("sigmoide([0, 2]) = " + str(sigmoid(np.array([0,2]))))
###Output
_____no_output_____
###Markdown
**Salida esperada**: **sigmoid([0, 2])** [ 0.5 0.88079708] 4.2 - Inicialización de parámteros**Ejercicio:** Implemente la inicialización de parámetros. Se tiene un vector w de ceros. Si no sabe qué función de numpy puede utilizar, puede buscar np.zeros() en la documentación de la biblioteca Numpy.
###Code
# FUNCIÓN A CALIFICAR: initialize_with_zeros
def initialize_with_zeros(dim):
"""
Esta función crea un vector de ceros de dimensión (dim, 1) para w e inicializa b a 0.
Input:
dim: tamaño del vector w (número de parámetros para este caso)
Output:
w: vector inicializado de tamaño (dim, 1)
b: escalar inicializado (corresponde con el sesgo)
"""
### EMPIEZE EL CÓDIGO AQUÍ ### (≈ 1 línea de código)
w =
b =
### TERMINE EL CÓDIGO AQUÍ ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
###Output
_____no_output_____
###Markdown
**Salida esperada**: ** w ** [[ 0.] [ 0.]] ** b ** 0 Para inputs de imagen, w será de tamaño (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Propagación hacia delante y hacia atrásUna vez los parámetros están inicializados, se pueden implementar los pasos de propagación hacia "delante" y hacia "atrás" para el aprendizaje de los parámetros.**Ejercicio:** Implemente la función `propagate()` que calcula la función de coste y su gradiente.**Ayuda**:Propagación hacia delante:- Se tiene X- Se calcula $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$- Se calcula la función de coste/pérdida: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$Se pueden usar las siguientes fórmulas: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
###Code
# FUNCIÓN A CALIFICAR: propagate
def propagate(w, b, X, Y):
"""
Implemente la función de coste y su gradiente para la propagación
Input:
w: pesos, un arreglo numpy de tamaño (num_px * num_px * 3, 1)
b: sesgo, un escalar
X: datos de tamaño (num_px * num_px * 3, número de ejemplos)
Y: vector de etiquetas observadas (0 si es no-gato, 1 si es gato) de tamaño (1, número de ejemplos)
Output:
coste: coste negativo de log-verosimilitud para la regresión logística
dw: gradiente de la pérdida con respecto a w, con las mismas dimensiones que w
db: gradiente de la pérdida con respecto a b, con las mismas dimensiones que b
(Sugerencia: escriba su código paso a paso para la propagación. np.log(), np.dot()
"""
m = X.shape[1]
# PROPAGACIÓN HACIA DELANTE
### EMPIEZE EL CÓDIGO AQUÍ ### (≈ 2 líneas de código)
A = # compute la activación
cost = # compute el coste
### TERMINE EL CÓDIGO AQUÍ ###
# RETRO-PROPAGACIÓN (PROPAGACIÓN HACIA ATRÁS)
### EMPIEZE EL CÓDIGO AQUÍ ### (≈ 2 líneas de código)
dw =
db =
### TERMINE EL CÓDIGO AQUÍ ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("coste = " + str(cost))
###Output
_____no_output_____
###Markdown
**Salida esperada**: ** dw ** [[ 0.99845601] [ 2.39507239]] ** db ** 0.00145557813678 ** cost ** 5.801545319394553 4.4 - Optimización- Se tienen los parámetros inicializados.- También se tiene el código para calcular la función de coste y su gradiente.- Ahora se quieren actualizar los parámetros utilizando el descenso en la dirección del gradiente (GD).**Ejercicio:** Escriba la función de optimización. EL objetivo es el de aprender $w$ y $b$ minimizando la función de coste $J$. Para un parámetro $\theta$, la regla de actualización es $ \theta = \theta - \alpha \text{ } d\theta$, donde $\alpha$ es la tasa de aprendizaje.
###Code
# FUNCIÓN A CALIFICAR: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
Esta función optimiza w y b implementando el algoritmo de GD
Input:
w: pesos, un arreglo numpy de tamaño (num_px * num_px * 3, 1)
b: sesgo, un escalar
X: datos de tamaño (num_px * num_px * 3, número de ejemplos)
Y: vector de etiquetas observadas (0 si es no-gato, 1 si es gato) de tamaño (1, número de ejemplos)
num_iterations: número de iteracionespara el bucle de optimización
learning_rate: tasa de aprendizaje para la regla de actualización del GD
print_cost: True para imprimir la pérdida cada 100 iteraciones
Output:
params: diccionario con los pesos w y el sesgo b
grads: diccionario con los gradientes de los pesos y el sesgo con respecto a la función de pérdida
costs: lista de todos los costes calculados durante la optimización, usados para graficar la curva de aprendizaje.
Sugerencia: puede escribir dos pasos e iterar sobre ellos:
1) Calcule el coste y el gradiente de los parámetros actuales. Use propagate().
2) Actualize los parámetros usando la regla del GD para w y b.
"""
costs = []
for i in range(num_iterations):
# Computación del coste y el gradiente (≈ 1-4 líneas de código)
### EMPIEZE EL CÓDIGO AQUÍ ###
grads, cost =
### TERMINE EL CÓDIGO AQUÍ ###
# Recupere las derivadas de grads
dw = grads["dw"]
db = grads["db"]
# Actualize la regla (≈ 2 líneas de código)
### EMPIEZE EL CÓDIGO AQUÍ ###
w =
b =
### TERMINE EL CÓDIGO AQUÍ ###
# Guarde los costes
if i % 100 == 0:
costs.append(cost)
# Se muestra el coste cada 100 iteraciones de entrenamiento
if print_cost and i % 100 == 0:
print ("Coste tras la iteración %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
###Output
_____no_output_____
###Markdown
**Salida esperada**: **w** [[ 0.19033591] [ 0.12259159]] **b** 1.92535983008 **dw** [[ 0.67752042] [ 1.41625495]] **db** 0.219194504541 **Ejercicio:** La función anterior aprende los parámetros w y b, que se pueden usar para predecir las etiquetas para el conjunto de datos X. Ahora implemente la función `predict()`. Hay dos pasos para calcular las predicciones:1. Calcule $\hat{Y} = A = \sigma(w^T X + b)$2. Convierta a 0 las entradas de a (si la activación es 0.5), guarde las predicciones en un vector `Y_prediction`. Si lo desea, puede usar un `if`/`else` en un bucle `for`, aunque también hay una manera de vectorizarlo.
###Code
# FUNCIÓN A CALIFICAR: predict
def predict(w, b, X):
'''
Prediga si una etiqueta es 0 o 1 usando los parámetros de regresión logística aprendidos (w, b)
Input:
w: pesos, un arreglo numpy de tamaño (num_px * num_px * 3, 1)
b: sesgo, un escalar
X: datos de tamaño (num_px * num_px * 3, número de ejemplos)
Output:
Y_prediction: vector con todas las predicciones (0/1) para los ejemplos en X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute el vector "A" prediciendo las probabilidades de que la imagen contenga un gato
### EMPIEZE EL CÓDIGO AQUÍ ### (≈ 1 línea de código)
A =
### TERMINE EL CÓDIGO AQUÍ ###
for i in range(A.shape[1]):
# Convierta las probabilidades A[0,i] a predicciones p[0,i]
### EMPIEZE EL CÓDIGO AQUÍ ### (≈ 1-4 líneas de código)
Y_prediction[0,i] =
### TERMINE EL CÓDIGO AQUÍ ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predicciones = " + str(predict(w, b, X)))
###Output
_____no_output_____
###Markdown
**Salida esperada**: **predicciones** [[ 1. 1. 0.]] **Recapitulemos:**Se han implementado varias funciones:- Inicialización de (w,b)- Optimización iterativa de la pérdida para aprender los parametros (w,b): - computando el coste y su gradiente - actualizando los parametros usando el GD- Se utilizan los parámetros aprendidos (w,b) para predecir las etiquetas para un conjunto dado de ejemplos 5 - Fusione todas las funciones Ahora debe construir el modelo global, estructurando todos los bloques que ha programado arriba.**Ejercicio:** Implemente la función madre. Use la siguiente notación: - Y_prediction_test para las predicciones sobr el conjunto de prueba - Y_prediction_train para las predicciones sobre el conjunto de entrenamiento - w, costs, grads para las salidas de optimize()
###Code
# FUNCIÓN A CALIFICAR: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Construye el modelo de regresión logística llamando las funciones implementadas anteriormente
Input:
X_train: conjunto de entrenamiento con dimensiones (num_px * num_px * 3, m_train)
Y_train: vector con las etiquetas de entrenamiento con dimensiones (1, m_train)
X_test: conjunto de prueba con dimensiones (num_px * num_px * 3, m_test)
Y_test: vector con las etiquetas de prueba con dimensiones (1, m_test)
num_iterations: (hiper-parámetro) número de iteracionespara para optimizar los parámetros
learning_rate: (hiper-parámetro) tasa de aprendizaje para la regla de optimize()
print_cost: True para imprimir la pérdida cada 100 iteraciones
Output:
d: diccionario con la información sobre el modelo.
"""
### EMPIEZE EL CÓDIGO AQUÍ ###
# Inicialize los parametros con ceros (≈ 1 línea de código)
w, b =
# Descenso en la dirección del gradiente (GD) (≈ 1 línea de código)
parameters, grads, costs =
# Recupere los parámetros w y b del diccionario "parameters"
w = parameters["w"]
b = parameters["b"]
# Prediga los ejemplos de prueba y entrenamiento (≈ 2 líneas de código)
Y_prediction_test =
Y_prediction_train =
### TERMINE EL CÓDIGO AQUÍ ###
# Imprima los errores de entrenamiento y prueba
print("Precisión de entrenamiento: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("Precisión de prueba: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"Costes": costs,
"Prediccion_prueba": Y_prediction_test,
"Prediccion_entrenamiento" : Y_prediction_train,
"w" : w,
"b" : b,
"Tasa de aprendizaje" : learning_rate,
"Num_iteraciones": num_iterations}
return d
###Output
_____no_output_____
###Markdown
Run the following cell to train your model.
###Code
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
###Output
_____no_output_____
###Markdown
**Salida esperada**: **Coste tras la iteración 0 ** 0.693147 $\vdots$ $\vdots$ **Precisión de entrenamiento** 99.04306220095694 % **Precisión de prueba** 70.0 % **Nota**: La precisión de entrenamiento es cercana al 100%. Esto es una buena señal de que el modelo está aprendiendo, ya que muestra capacidad suficiente para ajustarse a los datos de entrenamiento. Por el otro lado, el error de prueba es del 70%. Este resultado no está mal tomando en cuenta que es un modelo bastante simple, dado el conjunto de datos que se ha usado, el cual es relativamente pequeño, y que el modelo de regresión logística es un calsificador lineal. La próxima semana veremos un clasificador más complejo, y que permitirá obtener mejores resultados. Nótese también que el modelo se está sobre-ajustando a los datos de entrenamiento. Más adelante veremos cómo se peude reducir este sobre-ajuste ("overfitting"), por ejemplo mediante regularización. A continuación puede examinar las predicciones de las imagenes de prueba.
###Code
# Ejemplo de una imagen mal clasificada.
index = 6
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
clase=clases[int(d["Prediccion_prueba"][0,index])]
print ("Para y = " + str(test_set_y[0,index]) + ", el modelo dice que es una imagen de un \"" + clase + "\".")
###Output
_____no_output_____
###Markdown
Grafiquemos la función de pérdida y los gradientes.
###Code
# Gráfica de la curva de aprendizaje (con costes)
costes = np.squeeze(d['Costes'])
plt.plot(costes)
plt.ylabel('coste')
plt.xlabel('iteraciones (en cientos)')
plt.title("Tasa de aprendizaje =" + str(d["Tasa de aprendizaje"]))
plt.show()
###Output
_____no_output_____
###Markdown
**Interpretación**:Se puede ver el coste decreciendo, demostrando que los parámetros están siendo aprendidos. Sin embargo, el modelo se podría entrenar aun más sobre el conjunto de entrenamiento. Intente aumentar el número de iteraciones arriba y vuelva a ejecutar el código. Podrá ver precisión del conjunto de entrnamiento aumenta, pero la del conjunto de prueba decrece. Este es evidencia del sobre-ajuste. 6 - Profundizando en el análisis Ya tienes un primer modelo de clasificación de imagenes. Analizémoslo un poco más, como por ejemplo examinando distintos valores para la tasa de aprendizaje $\alpha$. Selección de la tasa de aprendizaje **Nota**:Para que el método del GD funcione de manera adecuada, se debe elegir la tasa de aprendiazaje de manera acertada. Esta tasa $\alpha$ determina qué tan rápido se actualizan los parámetros. Si la tasa es muy grande se puede "sobrepasar" el valor óptimo. Y de manera similar, si es muy pequeña se van a necesitar muchas iteraciones para converger a los mejores valores. Por ello la importancia de tener una tase de aprensizaje bien afinada. Ahora, comparemos la curva de aprendizaje de nuestro modelo con distintas elecciones para $\alpha$. Ejecute el código abajo. También puede intentar con valores distintos a los tres que estamos utilizando abajo para `learning_rates` y analize los resultados.
###Code
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("La tasa de aprendizaje es: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["Costes"]), label= str(models[str(i)]["Tasa de aprendizaje"]))
plt.ylabel('coste')
plt.xlabel('iteraciones (en cientos)')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
###Output
_____no_output_____
###Markdown
**Discusión**: - Tasas diferentes obtienen costes diferentes y por lo tanto, predicciones diferentes.- Si la tasa es muy grande (0.01), el coste puede oscilar arriba y abajo. Hasta puede diverger, aunque en este ejemplo $\alpha=0.01$ aun consigue un buen valor para el coste. - Un coste más bajo no implica un mejor modelo. Se debe chequear si hay una posibilidad de sobre-ajuste. Esto ocurre cuando la precisión de entrenamiento es mucho mayor que la precisión de prueba.- En deep learning, es recomendable que se elija la tasa de aprendizaje que minimize la función de coste. Y si el modelo sobre-ajusta, se pueden probar otras técnicas (que veremos más adelante) para reducir dicho sobre-ajuste. 7 - Pruebe con otras imagenes Puede utilizar imagenes propias para ver los resultados de su modelo. Para ello, agregue su(s) imagen(es) al directorio de este cuaderno en la carpeta "images", cambie el nombre de la(s) imagen(es) en el siguiente código, y compruebe si el algoritmo acierta (1=gato, 0=no-gato).
###Code
### EMPIEZE EL CÓDIGO AQUÍ ### (INTRODUZCA EL NOMBRE DE SU IMAGEN)
my_image = " " # el nombre debe coincidir con el de su imagen
### TERMINE EL CÓDIGO AQUÍ ###
# Pre-procesamos la imagen
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(int(np.squeeze(my_predicted_image))) + ", el algoritmo predice que es una imagen de un \"" + clases[int(np.squeeze(my_predicted_image))]+ "\".")
###Output
_____no_output_____
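###Markdown
 Note: `scipy.ndimage.imread` and `scipy.misc.imresize` were removed from recent SciPy releases, so the cell above may fail on a modern environment. Below is a minimal alternative sketch using Pillow; it assumes `my_image` holds the file name set in the cell above and that `num_px`, `predict` and `d` are defined as earlier in the notebook.
###Code
from PIL import Image
import numpy as np

# Sketch only: load and resize with Pillow instead of the removed SciPy helpers
fname = "images/" + my_image
img = Image.open(fname).convert("RGB").resize((num_px, num_px))
arr = np.asarray(img).reshape((1, num_px * num_px * 3)).T / 255.
my_predicted_image = predict(d["w"], d["b"], arr)
print(int(np.squeeze(my_predicted_image)))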
course/project_build_tf_sentiment_model/03_load_and_predict.ipynb | ###Markdown
Load and PredictWe've now built our model, trained it, and saved it to file - now we can begin applying it to making predictions. First, we load the model with `tf.keras.models.load_model`.
###Code
import tensorflow as tf
model = tf.keras.models.load_model('sentiment_model')
# view model architecture to confirm we have save and loaded correctly
model.summary()
###Output
_____no_output_____
###Markdown
Before making our predictions we need to format our data, which requires two steps:* Tokenizing the data using the `bert-base-cased` tokenizer.* Transforming the data into a dictionary containing *'input_ids'* and *'attention_mask'* tensors.
###Code
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
def prep_data(text):
tokens = tokenizer.encode_plus(text, max_length=512,
truncation=True, padding='max_length',
add_special_tokens=True, return_token_type_ids=False,
return_tensors='tf')
# tokenizer returns int32 tensors, we need to return float64, so we use tf.cast
return {'input_ids': tf.cast(tokens['input_ids'], tf.float64),
'attention_mask': tf.cast(tokens['attention_mask'], tf.float64)}
probs = model.predict(prep_data("hello world"))[0]
probs
import numpy as np
np.argmax(probs)
###Output
_____no_output_____
###Markdown
So we have made a test prediction, but we want to apply this to real phrases from *test.tsv*. We will load the data into a dataframe, remove duplicate fragments based on *SentenceId*, then iterate through the rows and create a new sentiment column.
###Code
import pandas as pd
# so we can see full phrase
pd.set_option('display.max_colwidth', None)
df = pd.read_csv('test.tsv', sep='\t')
df.head()
df = df.drop_duplicates(subset=['SentenceId'], keep='first')
df.head()
###Output
_____no_output_____
###Markdown
Now we initialize our new sentiment column, and begin making predictions.
###Code
df['Sentiment'] = None
for i, row in df.iterrows():
# get token tensors
tokens = prep_data(row['Phrase'])
# get probabilities
probs = model.predict(tokens)
# find argmax for winning class
pred = np.argmax(probs)
# add to dataframe
df.at[i, 'Sentiment'] = pred
df.head()
df.tail()
###Output
_____no_output_____
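###Markdown
 Calling `model.predict` once per row works but is slow. A hedged alternative sketch that encodes all phrases in one call and predicts in batches is shown below; it assumes the tokenizer's batch `__call__` API from `transformers` and enough memory for the padded batch.
###Code
# Sketch only: batch the tokenization and the prediction instead of looping row by row
phrases = df['Phrase'].tolist()
batch_tokens = tokenizer(phrases, max_length=512, truncation=True,
                         padding='max_length', return_tensors='tf')
batch = {'input_ids': tf.cast(batch_tokens['input_ids'], tf.float64),
         'attention_mask': tf.cast(batch_tokens['attention_mask'], tf.float64)}
probs = model.predict(batch, batch_size=16)
df['Sentiment'] = np.argmax(probs, axis=1)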
###Markdown
Load and PredictWe've now built our model, trained it, and saved it to file - now we can begin applying it to making predictions. First, we load the model with `tf.keras.models.load_model`.
###Code
import tensorflow as tf
model = tf.keras.models.load_model("sentiment_model")
# view model architecture to confirm we have save and loaded correctly
model.summary()
###Output
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_ids (InputLayer) [(None, 512)] 0
__________________________________________________________________________________________________
attention_mask (InputLayer) [(None, 512)] 0
__________________________________________________________________________________________________
bert (Custom>TFBertMainLayer) {'last_hidden_state' 108310272 input_ids[0][0]
attention_mask[0][0]
__________________________________________________________________________________________________
dropout_37 (Dropout) (None, 512, 768) 0 bert[0][0]
__________________________________________________________________________________________________
global_max_pooling1d (GlobalMax (None, 768) 0 dropout_37[0][0]
__________________________________________________________________________________________________
outputs (Dense) (None, 5) 3845 global_max_pooling1d[0][0]
==================================================================================================
Total params: 108,314,117
Trainable params: 108,314,117
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
Before making our predictions we need to format our data, which requires two steps:* Tokenizing the data using the `bert-base-cased` tokenizer.* Transforming the data into a dictionary containing *'input_ids'* and *'attention_mask'* tensors.
###Code
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
def prep_data(text):
tokens = tokenizer.encode_plus(
text,
max_length=512,
truncation=True,
padding="max_length",
add_special_tokens=True,
return_token_type_ids=False,
return_tensors="tf",
)
# tokenizer returns int32 tensors, we need to return float64, so we use tf.cast
return {
"input_ids": tf.cast(tokens["input_ids"], tf.float64),
"attention_mask": tf.cast(tokens["attention_mask"], tf.float64),
}
probs = model.predict(prep_data("hello world"))[0]
probs
import numpy as np
np.argmax(probs)
###Output
_____no_output_____
###Markdown
So we have made a test prediction, but we want to apply this to real phrases from *test.tsv*. We will load the data into a dataframe, remove duplicate fragments based on *SentenceId*, then iterate through the rows and create a new sentiment column.
###Code
import pandas as pd
# so we can see full phrase
pd.set_option("display.max_colwidth", None)
df = pd.read_csv("test.tsv", sep="\t")
df.head()
df = df.drop_duplicates(subset=["SentenceId"], keep="first")
df.head()
###Output
_____no_output_____
###Markdown
Now we initialize our new sentiment column, and begin making predictions.
###Code
df["Sentiment"] = None
for i, row in df.iterrows():
# get token tensors
tokens = prep_data(row["Phrase"])
# get probabilities
probs = model.predict(tokens)
# find argmax for winning class
pred = np.argmax(probs)
# add to dataframe
df.at[i, "Sentiment"] = pred
df.head()
df.tail()
###Output
_____no_output_____
###Markdown
Load and PredictWe've now built our model, trained it, and saved it to file - now we can begin applying it to making predictions. First, we load the model with `tf.keras.models.load_model`.
###Code
import tensorflow as tf
model = tf.keras.models.load_model('sentiment_model')
# view model architecture to confirm we have save and loaded correctly
model.summary()
###Output
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_ids (InputLayer) [(None, 512)] 0
__________________________________________________________________________________________________
attention_mask (InputLayer) [(None, 512)] 0
__________________________________________________________________________________________________
bert (Custom>TFBertMainLayer) {'last_hidden_state' 108310272 input_ids[0][0]
attention_mask[0][0]
__________________________________________________________________________________________________
dropout_37 (Dropout) (None, 512, 768) 0 bert[0][0]
__________________________________________________________________________________________________
global_max_pooling1d (GlobalMax (None, 768) 0 dropout_37[0][0]
__________________________________________________________________________________________________
outputs (Dense) (None, 5) 3845 global_max_pooling1d[0][0]
==================================================================================================
Total params: 108,314,117
Trainable params: 108,314,117
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
Before making our predictions we need to format our data, which requires two steps:* Tokenizing the data using the `bert-base-cased` tokenizer.* Transforming the data into a dictionary containing *'input_ids'* and *'attention_mask'* tensors.
###Code
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
def prep_data(text):
tokens = tokenizer.encode_plus(text, max_length=512,
truncation=True, padding='max_length',
add_special_tokens=True, return_token_type_ids=False,
return_tensors='tf')
# tokenizer returns int32 tensors, we need to return float64, so we use tf.cast
return {'input_ids': tf.cast(tokens['input_ids'], tf.float64),
'attention_mask': tf.cast(tokens['attention_mask'], tf.float64)}
probs = model.predict(prep_data("hello world"))[0]
probs
import numpy as np
np.argmax(probs)
###Output
_____no_output_____
###Markdown
So we have made a test prediction, but we want to apply this to real phrases from *test.tsv*. We will load the data into a dataframe, remove duplicate fragments based on *SentenceId*, then iterate through the rows and create a new sentiment column.
###Code
import pandas as pd
# so we can see full phrase
pd.set_option('display.max_colwidth', None)
df = pd.read_csv('test.tsv', sep='\t')
df.head()
df = df.drop_duplicates(subset=['SentenceId'], keep='first')
df.head()
###Output
_____no_output_____
###Markdown
Now we initialize our new sentiment column, and begin making predictions.
###Code
df['Sentiment'] = None
for i, row in df.iterrows():
# get token tensors
tokens = prep_data(row['Phrase'])
# get probabilities
probs = model.predict(tokens)
# find argmax for winning class
pred = np.argmax(probs)
# add to dataframe
df.at[i, 'Sentiment'] = pred
df.head()
df.tail()
###Output
_____no_output_____
Models/Pair4/ASHOKLEY_ARIMA_Model.ipynb | ###Markdown
Data Analytics Project - Models Pair 4 - ASHOKLEY ARIMA Model--- 1. Import required modules
###Code
import numpy as np
import pandas as pd
from fastai.tabular.core import add_datepart
from pmdarima.arima import auto_arima
from sklearn import metrics
###Output
/home/varun487/.local/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
###Markdown
--- 2. Get Pair 4 Orders Dataset 2.1. Get the orders
###Code
orders_df = pd.read_csv('../../Preprocess/Pair4/Pair4_orders.csv')
orders_df.head()
orders_df.tail()
###Output
_____no_output_____
###Markdown
2.2. Visualize the orders
###Code
# Plotting the zscore of the Spread
orders_plt = orders_df.plot(x='Date', y='zscore', figsize=(30,15))
# Plotting the lines at mean, 1 and 2 std. dev.
orders_plt.axhline(0, c='black')
orders_plt.axhline(1, c='red', ls = "--")
orders_plt.axhline(-1, c='red', ls = "--")
# Extracting orders
Orders = orders_df['Orders']
# Plot vertical lines where orders are placed
for order in range(len(Orders)):
if Orders[order] != "FLAT":
# GREEN line for a long position
if Orders[order] == "LONG":
orders_plt.axvline(x=order, c = "green")
# RED line for a short position
elif Orders[order] == "SHORT":
orders_plt.axvline(x=order, c = "red")
# BLACK line for getting out of all positions at that point
else:
orders_plt.axvline(x=order, c = "black")
orders_plt.set_ylabel("zscore")
###Output
_____no_output_____
###Markdown
__In the figure above:__- __Blue line__ - zscore of the Spread- __Black horizontal line__ at 0 - Mean- __Red dotted horizontal lines__ - at +1 and -1 standard deviations- __Green vertical line__ - represents long position taken on that day- __Red vertical line__ - represents short position taken on that day- __Black vertical line__ - represents getting out of all open positions till that point 2.3 Visualize the close prices of both stocks
###Code
orders_df_plt = orders_df.plot(x='Date', y=['TATAMTRDVR_Close', 'ASHOKLEY_Close'], figsize=(30,15))
orders_df_plt.set_xlabel("Date")
orders_df_plt.set_ylabel("Price")
###Output
_____no_output_____
###Markdown
--- 3. ASHOKLEY Linear Regression Model 3.1. Get the Complete ASHOKLEY dataset
###Code
ashokley_df = pd.read_csv("../../Storage/Companies_with_names_exchange/ASHOKLEYNSE.csv")
ashokley_df.head()
###Output
_____no_output_____
###Markdown
- We can see that we have data from 2017-01-02 3.2. Get ASHOKLEY training data 3.2.1 Get complete ashokley dataset
###Code
ashokley_df = ashokley_df.drop(columns=['High', 'Low', 'Open', 'Volume', 'Adj Close', 'Company', 'Exchange'])
ashokley_df.head()
###Output
_____no_output_____
###Markdown
- We can see that the period where the stocks are correlated and co-integrated starts on 2018-09-04.- Thus the test data for which we need to make predictions runs from 2018-09-04 until the period ends on 2018-12-03.- We take 1 year's worth of training data for our model, which means the time period of our training data is from 2017-09-03 to 2018-09-04. 3.2.2. Crop dataset within training range
###Code
ashokley_df_train = ashokley_df[ashokley_df['Date'] >= '2017-09-03']
ashokley_df_train.head()
ashokley_df_train = ashokley_df_train[ashokley_df_train['Date'] <= '2018-09-04']
ashokley_df_train.tail()
###Output
_____no_output_____
###Markdown
3.2.3 Get the training data and labels
###Code
ashokley_train = ashokley_df_train.copy()
ashokley_train
ashokley_train = ashokley_train.reset_index(drop=True)
ashokley_train = ashokley_train['Close']
ashokley_train
len(ashokley_train)
###Output
_____no_output_____
###Markdown
3.3. Get ASHOKLEY Test Data
###Code
ashokley_test_df = ashokley_df[(ashokley_df['Date'] >= '2018-09-04') & (ashokley_df['Date'] <= '2018-12-03')].copy()
ashokley_test_df.head()
ashokley_test_df.tail()
ashokley_test = ashokley_test_df.copy()
ashokley_test.reset_index(drop=True, inplace=True)
ashokley_test.index += 251
ashokley_test.head()
ashokley_test.tail()
ashokley_test = ashokley_test['Close']
len(ashokley_test)
###Output
_____no_output_____
###Markdown
3.4 Create and Train ASHOKLEY Model
###Code
model = auto_arima(ashokley_train, start_p=1, start_q=1,max_p=3, max_q=3, m=12,start_P=0, seasonal=True,d=1, D=1, trace=True, error_action='ignore', suppress_warnings=True)
model.fit(ashokley_train)
###Output
Performing stepwise search to minimize aic
ARIMA(1,1,1)(0,1,1)[12] : AIC=inf, Time=3.88 sec
ARIMA(0,1,0)(0,1,0)[12] : AIC=1362.389, Time=0.06 sec
ARIMA(1,1,0)(1,1,0)[12] : AIC=1303.580, Time=0.40 sec
ARIMA(0,1,1)(0,1,1)[12] : AIC=inf, Time=2.64 sec
ARIMA(1,1,0)(0,1,0)[12] : AIC=1363.763, Time=0.08 sec
ARIMA(1,1,0)(2,1,0)[12] : AIC=1280.241, Time=0.96 sec
ARIMA(1,1,0)(2,1,1)[12] : AIC=inf, Time=5.44 sec
ARIMA(1,1,0)(1,1,1)[12] : AIC=inf, Time=3.05 sec
ARIMA(0,1,0)(2,1,0)[12] : AIC=1278.245, Time=0.66 sec
ARIMA(0,1,0)(1,1,0)[12] : AIC=1301.718, Time=0.20 sec
ARIMA(0,1,0)(2,1,1)[12] : AIC=inf, Time=3.50 sec
ARIMA(0,1,0)(1,1,1)[12] : AIC=inf, Time=2.68 sec
ARIMA(0,1,1)(2,1,0)[12] : AIC=1280.241, Time=0.89 sec
ARIMA(1,1,1)(2,1,0)[12] : AIC=inf, Time=7.80 sec
ARIMA(0,1,0)(2,1,0)[12] intercept : AIC=1280.220, Time=2.35 sec
Best model: ARIMA(0,1,0)(2,1,0)[12]
Total fit time: 34.614 seconds
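###Markdown
 Before forecasting, it can help to confirm what `auto_arima` actually selected. This is a small sketch; the attribute names (`order`, `seasonal_order`) follow pmdarima's fitted-model API.
###Code
# Sketch only: inspect the model chosen by the stepwise search above
print('Selected non-seasonal order:', model.order)
print('Selected seasonal order:', model.seasonal_order)
print(model.summary())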
###Markdown
3.5. Get predictions
###Code
forecast = model.predict(n_periods=len(ashokley_test))
forecast = pd.DataFrame(forecast, index = ashokley_test.index, columns=['Prediction'])
forecast
predictions = forecast['Prediction']
print('Mean Absolute Error:', metrics.mean_absolute_error(ashokley_test, predictions))
print('Mean Squared Error:', metrics.mean_squared_error(ashokley_test, predictions))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(ashokley_test, predictions)))
###Output
Mean Absolute Error: 14.596826363809276
Mean Squared Error: 283.7682122228967
Root Mean Squared Error: 16.84542110553775
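###Markdown
 RMSE and MAE are in price units, so they are hard to compare across stocks trading at different levels. A small additional sketch computing the mean absolute percentage error follows; it assumes there are no zero prices in the test window.
###Code
# Sketch only: scale-free error for easier comparison across stocks
mape = np.mean(np.abs((ashokley_test - predictions) / ashokley_test)) * 100
print('Mean Absolute Percentage Error: {:.2f}%'.format(mape))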
###Markdown
3.6. Visualize the predicitons vs test data
###Code
ashokley_model_plt = forecast.plot(y=['Prediction'], figsize=(30,15), c='green')
ashokley_model_plt.plot(ashokley_train, c='blue')
ashokley_model_plt.plot(ashokley_test, c='orange')
###Output
_____no_output_____
###Markdown
__In the graph above:__- We can see the training data in blue- The test data in orange- The predictions made by the models in green 4. Put the results in a file
###Code
ashokley_predictions_df = pd.read_csv('Ashokley_predicitions.csv')
ashokley_predictions_df.head()
forecast = forecast.reset_index()
forecast = forecast.drop(columns='index')
ashokley_predictions_df['ARIMA_Close'] = forecast['Prediction']
ashokley_predictions_df.head()
ashokley_predictions_df.to_csv('Ashokley_predicitions.csv', index=False)
###Output
_____no_output_____
.ipynb_checkpoints/modelo_regresion-checkpoint.ipynb | ###Markdown
Regression Importando librerías
###Code
import pandas as pd
import numpy as np
from keras.datasets import boston_housing
from keras import models, layers, optimizers
###Output
_____no_output_____
###Markdown
Descargando datos
###Code
(train_data , train_targets) ,(test_data,test_targets) = boston_housing.load_data()
train_data[1]
train_targets[1]
###Output
_____no_output_____
###Markdown
Normalización
###Code
mean = train_data.mean(axis=0)
train_data = train_data - mean
std = train_data.std(axis=0)
train_data = train_data / std
test_data = test_data - mean
test_data = test_data / std
###Output
_____no_output_____
###Markdown
Definiendo nuestra red
###Code
def build_model_regression(input_data):
model = models.Sequential()
model.add(layers.Dense(64,activation='relu',input_shape=(input_data,)))
model.add(layers.Dense(64,activation='relu'))
model.add(layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse',metrics=['mae'])
return model
###Output
_____no_output_____
###Markdown
K - fold validation
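The manual index slicing in the next cell is equivalent to iterating over the splits produced by scikit-learn's `KFold`. A short reference sketch is shown below; it assumes scikit-learn is installed and is not used by the training loop itself.
```python
from sklearn.model_selection import KFold

# Sketch only: same 4-fold partition as the manual slicing below
for fold, (tr_idx, val_idx) in enumerate(KFold(n_splits=4).split(train_data)):
    print("Fold", fold, "-> train:", len(tr_idx), "val:", len(val_idx))
```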
###Code
k = 4
num_val_samples = len(train_data) // k  # divide by k so the split follows the number of folds
num_epoch = 80
all_history = []
0*num_val_samples
(0+1) * num_val_samples
for i in range(k):
print("Fold " , i)
val_data = train_data[i*num_val_samples: (i+1) * num_val_samples]
val_targets = train_targets[i*num_val_samples: (i+1) * num_val_samples]
partial_train_data = np.concatenate(
[train_data[:i * num_val_samples],
train_data[(i+1) * num_val_samples:]],
axis= 0
)
partial_train_targets = np.concatenate(
[train_targets[:i * num_val_samples],
train_targets[(i+1) * num_val_samples:]],
axis= 0
)
model = build_model_regression(13)
history = model.fit(partial_train_data, partial_train_targets, epochs=num_epoch, batch_size =16,
validation_data = (val_data, val_targets),
verbose=0)
all_history.append(history.history['val_mae'])
###Output
Fold 0
Fold 1
Fold 2
Fold 3
###Markdown
Media de todos los MAE
###Code
len(all_history[0])
all_mae_avg = pd.DataFrame(all_history).mean(axis=0)
###Output
_____no_output_____
###Markdown
Visualizando resultados
###Code
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
plt.plot(range(1,len(all_mae_avg[15:])+1), all_mae_avg[15:])
plt.show()
###Output
_____no_output_____
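###Markdown
 A small follow-up sketch: once the averaged validation MAE curve is available, the epoch with the lowest value is a reasonable stopping point for retraining a final model. This reuses the `all_mae_avg` series computed above.
###Code
# Sketch only: pick the epoch with the lowest averaged validation MAE
best_epoch = int(all_mae_avg.idxmin()) + 1   # +1 because epochs are counted from 1
print("Lowest average validation MAE:", all_mae_avg.min(), "at epoch", best_epoch)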
###Markdown
Evaluando el modelo
###Code
model.evaluate(test_data, test_targets)
###Output
102/102 [==============================] - 0s 55us/step
VAE/.ipynb_checkpoints/Convolutional autoencoder-checkpoint.ipynb | ###Markdown
Force TensorFlow to run on the CPU only (hide the GPU)
###Code
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
# tensorflow2.0
import tensorflow as tf
import numpy as np                 # needed below for np.reshape / np.concatenate
import matplotlib.pyplot as plt    # needed below for plotting reconstructions
from tensorflow.keras import Input,Model
from tensorflow.keras import layers
from tensorflow.keras.layers import Flatten,Dense,Dropout,Conv2D,MaxPooling2D, UpSampling2D, Conv2DTranspose
from tensorflow.keras.datasets import mnist
from tensorflow.keras import regularizers
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_train = x_train[:10000]
x_test = x_test[:1000]
print( x_train.shape)
print( x_test.shape)
import cv2
X_train, X_test = x_train[0][...,-1], x_test[0][...,-1]
print(X_test.shape)
X_train, X_test = cv2.resize(X_train, (128,128)), cv2.resize(X_test, (128,128))
X_train, X_test = X_train.reshape(1,128,128,1), X_test.reshape(1,128,128,1)
for i, j in zip(x_train, x_test):
x_i = cv2.resize(i[...,-1], (128, 128))
x_j = cv2.resize(j[...,-1], (128, 128))
X_train = np.concatenate((X_train, x_i.reshape(1,128,128,1)), axis=0)
X_test = np.concatenate((X_test, x_j.reshape(1,128,128,1)), axis=0)
print(X_test.shape)
print(X_train.shape)
import glob
data_path = glob.glob(r'D:\BaiduNetdiskDownload\CelebA\Img\img_align_celeba\img_align_celeba\*')
import cv2
a = cv2.imread(data_path[0], 0)
w, h = 256,256
DATA = np.random.rand(1,w,h,1)
for i in data_path[:100]:
read_i = cv2.imread(i, 0)
read_i=cv2.resize(read_i, (128,128))
DATA = np.concatenate([DATA, read_i.reshape(1,w,h,1)], axis=0)
x_train = DATA[:150]
x_test = DATA[-20:]
w, h = 128,128
###Output
_____no_output_____
###Markdown
encoder maxpooling decoder upSampling
###Code
input_img = Input(shape=(w,h, 1)) # adapt this if using `channels_first` image data format
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
# autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy') # keras use this
autoencoder.compile(optimizer='adam', loss='binary_crossentropy') # tf.keras use this
autoencoder.summary()
###Output
Model: "model_28"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_29 (InputLayer) [(None, 2, 2, 1)] 0
_________________________________________________________________
conv2d_159 (Conv2D) (None, 2, 2, 16) 160
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 1, 1, 16) 0
_________________________________________________________________
conv2d_160 (Conv2D) (None, 1, 1, 8) 1160
_________________________________________________________________
max_pooling2d_16 (MaxPooling (None, 1, 1, 8) 0
_________________________________________________________________
conv2d_161 (Conv2D) (None, 1, 1, 8) 584
_________________________________________________________________
max_pooling2d_17 (MaxPooling (None, 1, 1, 8) 0
_________________________________________________________________
conv2d_162 (Conv2D) (None, 1, 1, 8) 584
_________________________________________________________________
up_sampling2d_27 (UpSampling (None, 2, 2, 8) 0
_________________________________________________________________
conv2d_163 (Conv2D) (None, 2, 2, 8) 584
_________________________________________________________________
up_sampling2d_28 (UpSampling (None, 4, 4, 8) 0
_________________________________________________________________
conv2d_164 (Conv2D) (None, 4, 4, 16) 1168
_________________________________________________________________
up_sampling2d_29 (UpSampling (None, 8, 8, 16) 0
_________________________________________________________________
conv2d_165 (Conv2D) (None, 8, 8, 1) 145
=================================================================
Total params: 4,385
Trainable params: 4,385
Non-trainable params: 0
_________________________________________________________________
###Markdown
Fully convolutional version: transposed convolutions can easily produce checkerboard artifacts
###Code
input_img = Input(shape=(w,h, 1)) # adapt this if using `channels_first` image data format
x = Conv2D(16, (3, 3),strides=(2,2), activation='relu', padding='same')(input_img)
x = Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same')(x)
encoded = Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional
x = Conv2DTranspose(8, (3, 3), strides=(2,2), activation='relu', padding='same')(encoded)
x = Conv2DTranspose(8, (3, 3), strides=(2,2), activation='relu', padding='same')(x)
x = Conv2DTranspose(16, (3, 3), strides=(2,2), activation='relu', padding='same')(x)
x = Conv2DTranspose(16, (3, 3), strides=(1,1), activation='relu', padding='same')(x) # add a stride-1 transposed-convolution layer; it only helps marginally
decoded = Conv2D(1, (3,3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
# autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy') # keras use this
autoencoder.compile(optimizer='adam', loss='binary_crossentropy') # tf.keras use this
autoencoder.summary()
###Output
Model: "model_23"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_24 (InputLayer) [(None, 128, 128, 1)] 0
_________________________________________________________________
conv2d_127 (Conv2D) (None, 64, 64, 16) 160
_________________________________________________________________
conv2d_128 (Conv2D) (None, 32, 32, 8) 1160
_________________________________________________________________
conv2d_129 (Conv2D) (None, 16, 16, 8) 584
_________________________________________________________________
conv2d_transpose_42 (Conv2DT (None, 32, 32, 8) 584
_________________________________________________________________
conv2d_transpose_43 (Conv2DT (None, 64, 64, 8) 584
_________________________________________________________________
conv2d_transpose_44 (Conv2DT (None, 128, 128, 16) 1168
_________________________________________________________________
conv2d_transpose_45 (Conv2DT (None, 128, 128, 16) 2320
_________________________________________________________________
conv2d_130 (Conv2D) (None, 128, 128, 1) 145
=================================================================
Total params: 6,705
Trainable params: 6,705
Non-trainable params: 0
_________________________________________________________________
###Markdown
Fully convolutional version: improvements. 1. The checkerboard effect is caused by a mismatch between the transposed-convolution kernel size and its stride. 2. Adding noise may be necessary. 3. Adding constraints (regularization). Improvement 1 average time: 47.5 s
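One common mitigation that keeps `Conv2DTranspose` is to make the kernel size divisible by the stride (for example a 4x4 kernel with stride 2), so every output pixel receives the same kernel coverage. A minimal sketch of such a block is shown below; it is illustrative only and is not trained in this notebook.
```python
# Sketch only: kernel size (4, 4) is divisible by stride (2, 2), reducing checkerboarding
x_in = Input(shape=(w, h, 1))
y = Conv2D(16, (4, 4), strides=(2, 2), activation='relu', padding='same')(x_in)
y = Conv2DTranspose(16, (4, 4), strides=(2, 2), activation='relu', padding='same')(y)
y_out = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(y)
alt_model = Model(x_in, y_out)
```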
###Code
input_img = Input(shape=(w,h, 1)) # adapt this if using `channels_first` image data format
x = Conv2D(16, (3, 3),strides=(2,2), activation='relu', padding='same')(input_img)
x = Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same')(x)
encoded = Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
t_b, t_h, t_w, t_c = x.shape
x = tf.image.resize(x, (t_h*2, t_w*2))   # tf.image.resize expects (new_height, new_width)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
t_b, t_h, t_w, t_c = x.shape
x = tf.image.resize(x, (t_h*2, t_w*2))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
t_b, t_h, t_w, t_c = x.shape
x = tf.image.resize(x, (t_h*2, t_w*2))
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy') # tf.keras use this
autoencoder.summary()
###Output
Model: "model_43"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_46 (InputLayer) [(None, 128, 128, 1)] 0
_________________________________________________________________
conv2d_264 (Conv2D) (None, 64, 64, 16) 160
_________________________________________________________________
conv2d_265 (Conv2D) (None, 32, 32, 8) 1160
_________________________________________________________________
conv2d_266 (Conv2D) (None, 16, 16, 8) 584
_________________________________________________________________
conv2d_267 (Conv2D) (None, 16, 16, 8) 584
_________________________________________________________________
tf_op_layer_resize_18/Resize [(None, 32, 32, 8)] 0
_________________________________________________________________
conv2d_268 (Conv2D) (None, 32, 32, 8) 584
_________________________________________________________________
tf_op_layer_resize_19/Resize [(None, 64, 64, 8)] 0
_________________________________________________________________
conv2d_269 (Conv2D) (None, 64, 64, 16) 1168
_________________________________________________________________
tf_op_layer_resize_20/Resize [(None, 128, 128, 16)] 0
_________________________________________________________________
conv2d_270 (Conv2D) (None, 128, 128, 1) 145
=================================================================
Total params: 4,385
Trainable params: 4,385
Non-trainable params: 0
_________________________________________________________________
###Markdown
Improvement 2: average time about 46 s, but the result is not as good as improvement 1!
###Code
input_img = Input(shape=(w,h, 1)) # adapt this if using `channels_first` image data format
x = Conv2D(16, (3, 3),strides=(2,2), activation='relu', padding='same')(input_img)
x = Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same')(x)
encoded = Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2),interpolation='bilinear')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2),interpolation='bilinear')(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2),interpolation='bilinear')(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
# autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy') # keras use this
autoencoder.compile(optimizer='adam', loss='binary_crossentropy') # tf.keras use this
autoencoder.summary()
%load_ext tensorboard
import os
import datetime
from tensorflow.keras.callbacks import TensorBoard
log_dir=os.path.join('logs','fit',datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))#
tensorboard_callback = TensorBoard(log_dir=log_dir, histogram_freq=1)
%tensorboard --logdir logs/fit
###Output
_____no_output_____
###Markdown
Fit
###Code
# from tensorflow.keras.callbacks import TensorBoard
x_train = X_train
x_test = X_test
autoencoder.fit(x_train, x_train,
epochs=5,
batch_size=128,
shuffle=True,
validation_data=(x_test, x_test),
callbacks=[tensorboard_callback]
)
decoded_imgs = autoencoder.predict(x_test[:20])
n = 5
plt.figure(figsize=(10, 4))
for i in range(1,n+1):
# display original
ax = plt.subplot(2, n, i)
plt.imshow(x_test[i].reshape(w,h))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n,i+n)
plt.imshow(decoded_imgs[i].reshape(w,h))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
ts = decoded_imgs[2]
ts.shape
plt.imshow(ts.reshape(128,128),'gray')
ts0 = decoded_imgs[0]        # keep the original reconstruction for the comparison plot below
ts = ts0[None, ...]
ts = Conv2DTranspose(1, (3, 3), strides=(2,2), activation='relu', padding='same')(ts)
ts = Conv2DTranspose(1, (3, 3), strides=(2,2), activation='relu', padding='same')(ts)
ts = Conv2DTranspose(1, (3, 3), strides=(2,2), activation='relu', padding='same')(ts)
ts = Conv2DTranspose(1, (3, 3), strides=(2,2), activation='sigmoid', padding='same')(ts)
ts = ts[0,:,:,0]
ts.shape
plt.figure(figsize=(10,10))
plt.subplot(1,2,1)
plt.imshow(ts0[...,-1])
plt.subplot(1,2,2)
plt.imshow(ts,'gray')
ts0 = decoded_imgs[0]
ts=ts0[None,...]
ts = UpSampling2D((5,5))(ts)
ts = Conv2D(1, (5,5), strides=(2,2), padding='same')(ts)
ts = UpSampling2D((5,5))(ts)
ts = Conv2D(1, (5,5), strides=(2,2), padding='same')(ts)
# ts = UpSampling2D((10,10))(ts)
ts = Conv2D(1,(3,3), activation='sigmoid')(ts)
ts = ts[0,:,:,0]
ts.shape
plt.figure(figsize=(10,10))
plt.subplot(1,2,1)
plt.imshow(ts0[...,-1])
plt.subplot(1,2,2)
plt.imshow(ts,'gray')
import guang
from guang.DL.Model.cae import CAE
cae = CAE(128,128, 1)
cae.compile()
cae.fit(X_train,X_test)
###Output
Train on 1001 samples, validate on 1001 samples
Epoch 1/5
512/1001 [==============>...............] - ETA: 19s - loss: 0.6910
experiments/notebook/autoencoder-triangles.ipynb | ###Markdown
Dataset
###Code
DATASET_PATH = '/Users/mchrusci/uj/shaper_data/autoencoder/triangles.npz'
with np.load(DATASET_PATH) as data:
train = data['train']
test = data['test']
samples_train = np.random.choice(range(train.shape[0]), 3)
samples_test = np.random.choice(range(test.shape[0]), 3)
for i in samples_train:
plt.imshow(train[i] / 255)
plt.show()
for i in samples_test:
plt.imshow(test[i] / 255)
plt.show()
###Output
_____no_output_____
Fase 1 - Fundamentos de programacion/Tema 01 - Introduccion informal/Ejercicios/Soluciones.ipynb | ###Markdown
Tema 01: Introducción informal (Soluciones)*Nota: Estos ejercicios son optativos para hacer al final de la unidad y están pensados para apoyar tu aprendizaje*. **1) Identifica el tipo de dato (int, float, string o list) de los siguientes valores literales**```python"Hola Mundo" string[1, 10, 100] list-25 int1.167 float["Hola", "Mundo"] list' ' string``` **2) Determina mentalmente (sin programar) el resultado que aparecerá por pantalla a partir de las siguientes variables:**```pythona = 10b = -5c = "Hola "d = [1, 2, 3]``````pythonprint(a * 5) 50print(a - b) 15print(c + "Mundo") Hola Mundoprint(c * 2) Hola Hola print(d[-1]) 3print(d[1:]) [2, 3]print(d + d) [1, 2, 3, 1, 2, 3]``` **3) El siguiente código pretende realizar una media entre 3 números, pero no funciona correctamente. ¿Eres capaz de identificar el problema y solucionarlo?**
###Code
numero_1 = 9
numero_2 = 3
numero_3 = 6
media = (numero_1 + numero_2 + numero_3) / 3 # Hay que realizar primero la suma de los 3 números antes de dividir
print("La nota media es", media)
###Output
La nota media es 6.0
###Markdown
**4) A partir del ejercicio anterior, vamos a suponer que cada número es una nota, y lo que queremos es obtener la nota final. El problema es que cada nota tiene un valor porcentual: *** La primera nota vale un 15% del total* La segunda nota vale un 35% del total* La tercera nota vale un 50% del total**Desarrolla un programa para calcular perfectamente la nota final.**
###Code
nota_1 = 10
nota_2 = 7
nota_3 = 4
# Complete the exercise here
nota_final = nota_1 * 0.15 + nota_2 * 0.35 + nota_3 * 0.50 # We can multiply each grade by its weight (as a fraction of 1) and add them up
print("La nota final es", nota_final)
###Output
La nota final es 5.949999999999999
###Markdown
**5) The following matrix (a list with nested lists) must satisfy one condition: in each row, the fourth element must always be the result of adding the first three. Can you fix the incorrect sums using the slicing technique?**

*Hint: the function sum(lista) returns the sum of all the elements of the list. Try it!*
###Code
matriz = [
[1, 1, 1, 3],
[2, 2, 2, 7],
[3, 3, 3, 9],
[4, 4, 4, 13]
]
# Complete the exercise here
matriz[1][-1] = sum(matriz[1][:-1])
matriz[3][-1] = sum(matriz[3][:-1])
print(matriz)
###Output
[[1, 1, 1, 3], [2, 2, 2, 6], [3, 3, 3, 9], [4, 4, 4, 12]]
###Markdown
**6) When querying a record we obtained a corrupted text string that is reversed. It apparently contains the name of a student and the grade of an exam. How could we format the string to obtain a structure like the following?**

* ***Nombre*** ***Apellido*** ha sacado un ***Nota*** de nota.

*Hint: to reverse a string quickly using slicing we can use a third index of -1: **cadena[::-1]***
###Code
cadena = "zeréP nauJ,01"
# Complete the exercise here
cadena_volteada = cadena[::-1]
print(cadena_volteada[3:], "ha sacado un", cadena_volteada[:2], "de nota.")
###Output
Juan Pérez ha sacado un 10 de nota.
###Markdown
Topic 01: Informal introduction (Solutions)

*Note: These exercises are optional, to be done at the end of the unit, and are intended to support your learning*.

**1) Identify the data type (int, float, string or list) of each of the following literal values**

```python
"Hola Mundo"        # string
[1, 10, 100]        # list
-25                 # int
1.167               # float
["Hola", "Mundo"]   # list
' '                 # string
```

**2) Work out mentally (without programming) the result that will appear on screen, given the following variables:**

```python
a = 10
b = -5
c = "Hola "
d = [1, 2, 3]
```

```python
print(a * 5)        # 50
print(a - b)        # 15
print(c + "Mundo")  # Hola Mundo
print(c * 2)        # Hola Hola
print(d[-1])        # 3
print(d[1:])        # [2, 3]
print(d + d)        # [1, 2, 3, 1, 2, 3]
```

**3) The following code is meant to compute the average of 3 numbers, but it does not work correctly. Can you identify the problem and fix it?**
###Code
numero_1 = 9
numero_2 = 3
numero_3 = 6
media = (numero_1 + numero_2 + numero_3) / 3 # The three numbers must be added first, before dividing
print("La nota media es", media)
###Output
La nota media es 6.0
###Markdown
**4) Building on the previous exercise, let's assume each number is a grade, and what we want is to obtain the final grade. The problem is that each grade has a percentage weight:**

* The first grade is worth 15% of the total
* The second grade is worth 35% of the total
* The third grade is worth 50% of the total

**Write a program to compute the final grade correctly.**
###Code
nota_1 = 10
nota_2 = 7
nota_3 = 4
# Complete the exercise here
media = nota_1 * 0.15 + nota_2 * 0.35 + nota_3 * 0.50 # We can multiply each grade by its weight (as a fraction of 1) and add them up
print("La nota media es", media)
###Output
La nota media es 5.949999999999999
###Markdown
**5) The following matrix (a list with nested lists) must satisfy one condition: in each row, the fourth element must always be the result of adding the first three. Can you fix the incorrect sums using the slicing technique?**

*Hint: the function sum(lista) returns the sum of all the elements of the list. Try it!*
###Code
matriz = [
[1, 1, 1, 3],
[2, 2, 2, 7],
[3, 3, 3, 9],
[4, 4, 4, 13]
]
# Complete the exercise here
matriz[1][-1] = sum(matriz[1][:-1])
matriz[3][-1] = sum(matriz[3][:-1])
print(matriz)
###Output
[[1, 1, 1, 3], [2, 2, 2, 6], [3, 3, 3, 9], [4, 4, 4, 12]]
###Markdown
**6) When querying a record we obtained a corrupted text string that is reversed. It apparently contains the name of a student and the grade of an exam. How could we format the string to obtain a structure like the following?**

* ***Nombre*** ***Apellido*** ha sacado un ***Nota*** de nota.

*Hint: to reverse a string quickly using slicing we can use a third index of -1: **cadena[::-1]***
###Code
cadena = "zeréP nauJ,01"
# Complete the exercise here
cadena_volteada = cadena[::-1]
print(cadena_volteada[3:], "ha sacado un", cadena_volteada[:2], "de nota.")
###Output
Juan Pérez ha sacado un 10 de nota.
|
Python-API/pal/notebooks/DiabetesUnifiedClassificationRandomForest.ipynb | ###Markdown
Unified Classification Example with Random Forest and Model Report

An example of Unified Classification with Random Forest using the Diabetes Dataset.

Pima Indians Diabetes Dataset

Original data comes from the National Institute of Diabetes and Digestive and Kidney Diseases. The collected dataset aims at diagnostically predicting, based on certain diagnostic measurements, whether or not a patient has diabetes. In particular, patients contained in the dataset are females of Pima Indian heritage, all above the age of 20. The dataset is from Kaggle, for tutorial use only.

The dataset contains the following diagnostic attributes:

$\rhd$ "PREGNANCIES" - Number of times pregnant,
$\rhd$ "GLUCOSE" - Plasma glucose concentration at 2 hours in an oral glucose tolerance test,
$\rhd$ "BLOODPRESSURE" - Diastolic blood pressure (mm Hg),
$\rhd$ "SKINTHICKNESS" - Triceps skin fold thickness (mm),
$\rhd$ "INSULIN" - 2-Hour serum insulin (mu U/ml),
$\rhd$ "BMI" - Body mass index $(\text{weight in kg})/(\text{height in m})^2$,
$\rhd$ "PEDIGREE" - Diabetes pedigree function,
$\rhd$ "AGE" - Age (years),
$\rhd$ "CLASS" - Class variable (0 or 1): 268 of the 768 records are 1 (diabetes), the others are 0 (non-diabetes).

Import the related functions:
###Code
import hana_ml
from hana_ml import dataframe
from hana_ml.algorithms.pal import metrics
from hana_ml.algorithms.pal.unified_classification import UnifiedClassification, json2tab_for_reason_code
import pandas as pd
###Output
_____no_output_____
###Markdown
Load Data

The data is loaded into 3 tables - full set, training-validation set, and test set as follows:

    PIMA_INDIANS_DIABETES_TBL
    PIMA_INDIANS_DIABETES_TRAIN_VALID_TBL
    PIMA_INDIANS_DIABETES_TEST_TBL

To do that, a connection is created and passed to the loader. There is a config file, config/e2edata.ini, that controls the connection parameters and whether or not to reload the data from scratch. In case the data is already loaded, there would be no need to load the data. A sample section is below. If the config parameter reload_data is true then the tables for test, training and validation are (re-)created and data inserted into them.

    [hana]
    url=host.sjc.sap.corp
    user=username
    passwd=userpassword
    port=3xx15
###Code
from data_load_utils import DataSets, Settings
import plotting_utils
url, port, user, pwd = Settings.load_config("../../config/e2edata.ini")
connection_context = dataframe.ConnectionContext(url, port, user, pwd)
full_tbl, train_tbl, test_tbl, _ = DataSets.load_diabetes_data(connection_context)
###Output
Table PIMA_INDIANS_DIABETES_TBL exists and data exists
###Markdown
Define Dataframe - training set and testing setData frames are used keep references to data so computation on large data sets in HANA can happen in HANA. Trying to bring the entire data set into the client will likely result in out of memory exceptions.
###Code
diabetes_train = connection_context.table(train_tbl)
#diabetes_train = diabetes_train.cast('CLASS', 'VARCHAR(10)')
diabetes_test = connection_context.table(test_tbl)
#diabetes_test = diabetes_test.cast('CLASS', 'VARCHAR(10)')
###Output
_____no_output_____
###Markdown
Simple ExplorationLet us look at the number of rows in each dataset:
###Code
print('Number of rows in training set: {}'.format(diabetes_train.count()))
print('Number of rows in testing set: {}'.format(diabetes_test.count()))
###Output
Number of rows in training set: 614
Number of rows in testing set: 76
###Markdown
Let us look at columns of the dataset:
###Code
print(diabetes_train.columns)
###Output
['ID', 'PREGNANCIES', 'GLUCOSE', 'BLOODPRESSURE', 'SKINTHICKNESS', 'INSULIN', 'BMI', 'PEDIGREE', 'AGE', 'CLASS']
###Markdown
Let us also look some (in this example, the top 6) rows of the dataset:
###Code
diabetes_train.head(3).collect()
###Output
_____no_output_____
###Markdown
Check the data type of all columns:
###Code
diabetes_train.dtypes()
###Output
_____no_output_____
###Markdown
We have a 'CLASS' column in the dataset, let's check how many classes are contained in this dataset:
###Code
diabetes_train.distinct('CLASS').collect()
###Output
_____no_output_____
###Markdown
Two classes are available, confirming that this is a binary classification problem. Model Training: invoke the unified classification to train the model using random forest:
###Code
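# Random forest hyperparameters; passed to UnifiedClassification below via **rdt_params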
rdt_params = dict(random_state=2,
split_threshold=1e-7,
min_samples_leaf=1,
n_estimators=10,
max_depth=55)
uc_rdt = UnifiedClassification(func = 'RandomForest', **rdt_params)
uc_rdt.fit(data=diabetes_train,
key= 'ID',
label='CLASS',
partition_method='stratified',
stratified_column='CLASS',
partition_random_state=2,
training_percent=0.7, ntiles=2)
###Output
_____no_output_____
###Markdown
Visualize the model. In the UnifiedClassification function, we provide a function generate_notebook_iframe_report() to visualize the results.
###Code
uc_rdt.generate_notebook_iframe_report()
###Output
_____no_output_____
###Markdown
OutputWe could also see the result one by one: Output 1: variable importanceIndicates the importance of variables:
###Code
uc_rdt.importance_.collect().set_index('VARIABLE_NAME').sort_values(by=['IMPORTANCE'],ascending=False)
###Output
_____no_output_____
###Markdown
Output 2: confusion matrix
###Code
uc_rdt.confusion_matrix_.collect()
###Output
_____no_output_____
###Markdown
Output 3: statistics
###Code
uc_rdt.statistics_.collect()
###Output
_____no_output_____
###Markdown
Obtain the auc value for drawing the ROC curve in the next step:
###Code
dtr_auc=uc_rdt.statistics_.filter("STAT_NAME='AUC'").cast('STAT_VALUE','DOUBLE').collect().at[0, 'STAT_VALUE']
dtr_auc
###Output
_____no_output_____
###Markdown
Output 4: metrics and draw ROC curve
###Code
uc_rdt.metrics_.collect()
###Output
_____no_output_____
###Markdown
Draw the ROC curve based on the metrics_:
###Code
import matplotlib.pyplot as plt
tpr=uc_rdt.metrics_.filter("NAME='ROC_TPR'").select('Y').collect()
fpr=uc_rdt.metrics_.filter("NAME='ROC_FPR'").select('Y').collect()
plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=1, label='ROC curve (area = %0.2f)' % dtr_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=1, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Prediction Obtain the features in the prediction:
###Code
features = diabetes_train.columns
features.remove('CLASS')
features.remove('ID')
print(features)
###Output
['PREGNANCIES', 'GLUCOSE', 'BLOODPRESSURE', 'SKINTHICKNESS', 'INSULIN', 'BMI', 'PEDIGREE', 'AGE']
###Markdown
Invoke the prediction with diabetes_test:
###Code
pred_res = uc_rdt.predict(diabetes_test, key='ID', features=features)
pred_res.head(10).collect()
###Output
_____no_output_____
###Markdown
Global Interpretation using Shapley valuesNow that we can calculate Shap values for each feature of every observation, we can get a global interpretation using Shapley values by looking at it in a combined form. Let’s see how we can do that:
###Code
from hana_ml.visualizers.model_debriefing import TreeModelDebriefing
shapley_explainer = TreeModelDebriefing.shapley_explainer(pred_res, diabetes_test, key='ID', label='CLASS')
shapley_explainer.summary_plot()
###Output
1.Using Shapley values to show the distribution of the impacts each feature has on the model output.
2.The color represents the feature value (red high, blue low).
3.The plot below shows the relationship between feature value and Shapley value.
-- If the dots in the left area are blue and the dots in the right area are red, then it means that the feature value and the Shapley value are typically positive correlation.
-- If the dots in the left area are red and the dots in the right area are blue, then it means that the feature value and the Shapley value are typically negative correlation.
-- If all the dots are concentrated near 0, it means that the Shapley value has nothing to do with this feature.
###Markdown
Expand the REASON_CODE to see the detail of each item:
###Code
json2tab_for_reason_code(pred_res).collect()
###Output
_____no_output_____
###Markdown
confusion_matrix:
###Code
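# Join the predictions with the true labels (matching ID against the renamed TID) so that
# metrics.confusion_matrix can compare the predicted SCORE column with the true CLASS column.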
ts = diabetes_test.rename_columns({'ID': 'TID'}) .cast('CLASS', 'NVARCHAR(256)')
jsql = '{}."{}"={}."{}"'.format(pred_res.quoted_name, 'ID', ts.quoted_name, 'TID')
results_df = pred_res.join(ts, jsql, how='inner')
cm_df, classification_report_df = metrics.confusion_matrix(results_df, key='ID', label_true='CLASS', label_pred='SCORE')
import matplotlib.pyplot as plt
from hana_ml.visualizers.metrics import MetricsVisualizer
f, ax1 = plt.subplots(1,1)
mv1 = MetricsVisualizer(ax1)
ax1 = mv1.plot_confusion_matrix(cm_df, normalize=False)
print("Recall, Precision and F_measures.")
classification_report_df.collect()
###Output
Recall, Precision and F_measures.
###Markdown
Score
###Code
_,_,_,metrics_res = uc_rdt.score(data=diabetes_test, key='ID', label='CLASS')
metrics_res.collect()
metrics_res.distinct('NAME').collect()
###Output
_____no_output_____
###Markdown
Draw the cumulative lift curve:
###Code
import matplotlib.pyplot as plt
cumlift_x=metrics_res.filter("NAME='CUMLIFT'").select('X').collect()
cumlift_y=metrics_res.filter("NAME='CUMLIFT'").select('Y').collect()
plt.figure()
plt.plot(cumlift_x, cumlift_y, color='darkorange', lw=1)
plt.xlim([0.0, 1.0])
plt.ylim([0.8, 2.05])
plt.xlabel('Percentage')
plt.ylabel('Cumulative lift')
plt.title('model: Random forest')
plt.show()
###Output
_____no_output_____
###Markdown
Draw the cumulative gains curve:
###Code
import matplotlib.pyplot as plt
cumgains_x=metrics_res.filter("NAME='CUMGAINS'").select('X').collect()
cumgains_y=metrics_res.filter("NAME='CUMGAINS'").select('Y').collect()
plt.figure()
plt.plot(cumgains_x, cumgains_y, color='darkorange', lw=1)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Percentage')
plt.ylabel('Cumulative gains')
plt.title('model: Random forest')
plt.show()
###Output
_____no_output_____ |
Notebooks/connor_notebooks/road_hazard_scrape.ipynb | ###Markdown
Notebook Objectives Our objective is to scrape bridge height and coordinate data via the surface tracks API.
###Code
import pandas as pd
import numpy as np
import os
import glob
import json
import re
import time
from selenium import webdriver
###Output
_____no_output_____
###Markdown
Selenium View documentation [here](https://selenium-python.readthedocs.io/) to see how to web scrape using the Selenium library.
###Code
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument('--dns-prefetch-disable')
# json file contains username and password
with open('st_credentials.json') as creds:
credentials = json.load(creds)
# Instantiate webdriver object for chrome browser
driver = webdriver.Chrome(chrome_options=chrome_options)
# Login page
driver.get("https://www.surfacetracks.com/amember/login")
# Locate login elements
user_element = driver.find_element_by_name("amember_login")
pass_element = driver.find_element_by_name("amember_pass")
# Input user credentials
user_element.send_keys(credentials['amember_login'])
pass_element.send_keys(credentials['amember_pass'])
# Click the login button
driver.find_element_by_xpath("//input[@value='Login']").click()
###Output
/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:2: DeprecationWarning: use options instead of chrome_options
###Markdown
Now that we have our login credentials instantiated into the Selenium driver object, we can access the Surface Tracks API.
###Code
# Sample API url
driver.get('https://www.surfacetracks.com/plus/get-feature.php?id=100660')
# Locate element containing json data
api_get = driver.find_element_by_xpath("//pre[@style='word-wrap: break-word; white-space: pre-wrap;']")
api_get.text
###Output
_____no_output_____
###Markdown
Auto Scrape (w/ Selenium) Our next goal is to automate the scraping process by iterating the id number in each API url. We want every combination of ID to be scraped so that we ensure all data is recorded.
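(As a side note, not in the original notebook: since the counters below simply enumerate the five trailing digits appended to `id=1`, the same list of URLs — IDs 100000 through 110999 — could also be built directly, e.g.)

```python
base_url = 'https://www.surfacetracks.com/plus/get-feature.php?id='
# one URL per ID, zero-padding handled implicitly because the IDs are already six digits
url_list = [f'{base_url}{i}' for i in range(100000, 111000)]
```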
###Code
# We will initially test with the last two digits of the ID
base_url = 'https://www.surfacetracks.com/plus/get-feature.php?id=1'
num_dict = {
0:'0',
1:'1',
2:'2',
3:'3',
4:'4',
5:'5',
6:'6',
7:'7',
8:'8',
9:'9'
}
url_list = []
# Counters will increase using 'abacus' function below
ones_counter = 0
twos_counter = 0
threes_counter = 0
fours_counter = 0
fives_counter = 0
for i in range(1,11001):
if ones_counter == 10:
ones_counter = ones_counter - 10
twos_counter += 1
if twos_counter == 10:
twos_counter = twos_counter - 10
threes_counter += 1
if threes_counter == 10:
threes_counter = threes_counter - 10
fours_counter += 1
if fours_counter == 10:
fours_counter = fours_counter - 10
fives_counter += 1
# url will change based upon the position of the counters
url = base_url + num_dict[fives_counter] + num_dict[fours_counter] + num_dict[threes_counter] + num_dict[twos_counter] + num_dict[ones_counter]
url_list.append(url)
ones_counter += 1
# Check list function is working properly
url_list
# Check last iteration
url_list[-1]
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import TimeoutException
json_list = []
# Scrape
for url in url_list:
try:
driver.get(url)
api_scrape = driver.find_element_by_xpath("//pre[@style='word-wrap: break-word; white-space: pre-wrap;']").text
if api_scrape == "null":
pass
else:
json_list.append(api_scrape)
# Time function so that we do not overload the server
time.sleep(.5)
except NoSuchElementException:
pass
except TimeoutException:
pass
###Output
_____no_output_____
###Markdown
DataFrame Creation
###Code
df = pd.DataFrame(data=eval(json_list[0]), index=[0])
for i in json_list[1:]:
index =1
df_test = pd.DataFrame(data = eval(i), index=[index])
df = pd.concat([df, df_test])
index += 1
df.info()
df.to_csv('bridge_data.csv')
###Output
_____no_output_____ |
.ipynb_checkpoints/05 - Softmax Regression with Keras-checkpoint.ipynb | ###Markdown
Softmax Regression with Keras- Install Keras from PyPI (recommended):```pip install keras```
###Code
%matplotlib inline
import keras
print('Keras version : %s' % keras.__version__)
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import model_from_json
from matplotlib import pyplot as plt
from IPython.display import clear_output
batch_size = 10
classes = 10
epoch = 20
# Load MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
Y_Train = np_utils.to_categorical(y_train, classes)
Y_Test = np_utils.to_categorical(y_test, classes)
# updatable plot
# a minimal example (sort of)
class PlotLosses(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.i = 0
self.x = []
self.losses = []
self.val_losses = []
self.fig = plt.figure()
self.logs = []
def on_epoch_end(self, epoch, logs={}):
self.logs.append(logs)
self.x.append(self.i)
self.losses.append(logs.get('loss'))
self.val_losses.append(logs.get('val_loss'))
self.i += 1
clear_output(wait=True)
plt.plot(self.x, self.losses, label="loss")
plt.plot(self.x, self.val_losses, label="val_loss")
plt.legend()
plt.show();
plot_losses = PlotLosses()
# Logistic regression model
model = Sequential()
model.add(Dense(10, input_shape=(784,), kernel_initializer ='normal', activation='softmax'))
model.compile(optimizer=SGD(lr=0.05), loss='categorical_crossentropy', metrics=['accuracy'])
#model.add(Dense(12, input_dim=8, kernel_initializer ='uniform', activation='relu'))
#model.add(Dense(8, kernel_initializer ='uniform', activation='relu'))
#model.add(Dense(1, kernel_initializer ='uniform', activation='sigmoid'))
print(model.summary())
#print(model.get_config())
#print(model.to_json())
# Train
# > val_loss is the value of cost function for your cross validation data
# and loss is the value of cost function for your training data
history = model.fit(X_train, Y_Train,
nb_epoch=epoch, validation_data=(X_test, Y_Test),
batch_size=batch_size, verbose=1,
callbacks=[plot_losses])
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Evaluate
evaluation = model.evaluate(X_test, Y_Test, verbose=1)
print('Summary: Loss over the test dataset: %.2f, Accuracy: %.2f' % (evaluation[0], evaluation[1]))
# serialize model to JSON
model_json = model.to_json()
with open("./data/03/03_logistic_regression.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("./data/03/03_logistic_regression.h5")
#print (model.get_weights())
print("Saved model to disk")
# load json and create model
json_file = open('./data/03/03_logistic_regression.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("./data/03/03_logistic_regression.h5")
#print (model.get_weights())
print("Loaded model from disk")
# evaluate loaded model on test data
loaded_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
score = loaded_model.evaluate(X_test, Y_Test, verbose=1)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img_load = mpimg.imread('./media/MNIST Test/test3.png')
imgplot = plt.imshow(img_load)
from scipy.misc import imread
import numpy as np
x = imread('./media/MNIST Test/test3.png',mode='L')
#compute a bit-wise inversion so black becomes white and vice versa
x = np.invert(x)
x = x.reshape(1,784).astype('float32') / 255
#perform the prediction
#model = load_model('02_logistic_regression.h5')
out = model.predict(x)
print(np.argmax(out))
###Output
_____no_output_____ |
Script_final.ipynb | ###Markdown
Image classification with Neural Networks

Use `tensorflow` to train neural networks for the classification of fruit/vegetable types based on images from this dataset. Images must be transformed from JPG to RGB pixel values and scaled down (e.g., 32x32). Use fruit/vegetable types (as opposed to variety) as labels to predict and consider only the 10 most frequent types (apple, banana, plum, pepper, cherry, grape, tomato, potato, pear, peach). Experiment with different network architectures and training parameters, documenting their influence on the final predictive performance. While the training loss can be chosen freely, the reported test errors must be measured according to the zero-one loss for multiclass classification.

Introduction

In this essay we are going to analyse the dataset available on the Kaggle website [1] under the license CC BY-SA 4.0, using a Deep Learning approach. The dataset contains 90380 images of 131 fruits and vegetables divided into folders for the training and test sets respectively. We are going to select just a subsample of the available fruits, creating 10 macrocategories with the most frequent types. Different Neural Network architectures will be compared, starting from different settings of Feedforward Neural Networks and concluding with two Convolutional Neural Network models.

Setting up the environment
###Code
from google.colab import drive
drive.mount('/content/drive')
from google.colab import files
files.upload() #import the kaggle.json file
#install kaggle and download the data set in the desired path
!pip install -q kaggle
!mkdir ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle datasets download -d moltean/fruits
!mkdir ML_assignment
!unzip fruits.zip -d ML_assignment
#import all the libraries and functions
import os
import random
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, KFold
import tensorflow as tf
from tensorflow import keras
from keras.preprocessing.image import array_to_img, img_to_array, load_img
from keras.utils import np_utils
from keras.models import Sequential, Model
from keras.layers import Dense, Flatten, Dropout, Conv2D, MaxPool2D, Activation, MaxPooling2D, Input, AveragePooling2D, GlobalAveragePooling2D
from keras.losses import categorical_crossentropy
from keras.optimizers import Adam, SGD, RMSprop, Adamax
from keras import regularizers
from keras.callbacks import LearningRateScheduler, History, EarlyStopping
from keras.wrappers.scikit_learn import KerasClassifier
print('The Tensorflow version used is: ' + tf.__version__)
print('The Keras version used is: ' + keras.__version__)
#set the seed
seed = 33
random.seed(seed)
tf.random.set_seed(seed)
###Output
_____no_output_____
###Markdown
Dataset preprocessing
###Code
# import of the dataset divided in the 10 categories requested with the target size of 32x32
types = ["Apple", "Banana", "Plum", "Pepper", "Cherry", "Grape", "Tomato", "Potato", "Pear", "Peach"]
fruits = {}
def load_dataset(dire):
fruits = {}
images_as_array = []
labels = []
for category in os.listdir(dire):
for typ in types:
if(category.split()[0] == typ):
fruits[category]= typ
path = os.path.join(dire,category)
class_num =types.index(fruits[category])
class_name = fruits[category]
for img in os.listdir(path):
file = os.path.join(path,img)
images_as_array.append(img_to_array(load_img(file,target_size=(32, 32))))
labels.append(class_num)
images_as_array = np.array(images_as_array)
labels = np.array(labels)
return images_as_array, labels
train_path= '/content/ML_assignment/fruits-360/Training'
test_path= '/content/ML_assignment/fruits-360/Test'
train = load_dataset(train_path)
test = load_dataset(test_path)
X_train, y_train = train
X_test, y_test = test
X_train, y_train = shuffle(X_train, y_train)
X_test, y_test = shuffle(X_test, y_test)
print(X_train.shape)
print(X_test.shape)
n_classes = len(np.unique(y_train))
print(n_classes)
# look at the distribution of the classes in the sets to see if they are balanced
unique_train, counts_train = np.unique(y_train, return_counts=True)
plt.bar(unique_train, counts_train)
unique_test, counts_test = np.unique(y_test, return_counts=True)
plt.bar(unique_test, counts_test)
plt.xticks(rotation=45)
plt.gca().legend(('y_train','y_test'))
plt.title('Class Frequency')
plt.xlabel('Class')
plt.ylabel('Frequency')
plt.show()
# create the validation set as 20% of the training data
X_val, X_train, y_val, y_train = train_test_split(X_train, y_train, train_size = 0.20)
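# note: train_test_split returns the train-size split (here 20%) first, so X_val/y_val receive
# 20% of the original training data and X_train/y_train keep the remaining 80%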
# normalization of the sets
X_train = X_train.astype('float32')/255
X_test = X_test.astype('float32')/255
X_val = X_val.astype('float32')/255
print('Training X:\n',X_train.shape)
print('\nVaildation X:\n',X_val.shape)
print('\nTest X:\n',X_test.shape)
# image example of the data
n_rows = 3
n_cols = 6
plt.figure(figsize=(n_cols * 1.5, n_rows * 1.5))
for row in range(n_rows):
for col in range(n_cols):
index = n_cols * row + col
plt.subplot(n_rows, n_cols, index + 1)
plt.imshow(X_train[index], cmap="binary", interpolation="nearest")
plt.axis('off')
plt.title(types[y_train[index]], fontsize=12)
plt.subplots_adjust(wspace=0.2, hspace=0.5)
plt.show()
# convert labels to categorical
y_train = np_utils.to_categorical(y_train, n_classes)
y_val = np_utils.to_categorical(y_val, n_classes)
y_test = np_utils.to_categorical(y_test, n_classes)
# definition of the zero-one loss function used for the calculation of the test error
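# zero-one loss = (1/N) * sum_i 1{argmax(pred_i) != argmax(test_i)}, i.e. the fraction of misclassified samples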
def zo_loss(test, pred):
y_hat = []
y_t = []
for i in range(len(pred)):
y_hat.append(np.argmax(pred[i]))
y_t.append(np.argmax(test[i]))
loss = []
for i in range(len(pred)):
if(y_hat[i] == y_t[i]):
loss.append(0)
else:
loss.append(1)
return np.mean(loss)
###Output
_____no_output_____
###Markdown
Feedforward Deep Neural Networks First basic model
###Code
model1 = keras.Sequential()
model1.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model1.add(keras.layers.Dense(1000, activation="relu"))
model1.add(keras.layers.Dense(400, activation="relu"))
model1.add(keras.layers.Dense(10, activation="softmax"))
model1.compile(loss = keras.losses.categorical_crossentropy,
optimizer = "sgd",
metrics = ["accuracy"])
model1.summary()
%%time
history1 = model1.fit(X_train, y_train, epochs=30,
validation_data=(X_val, y_val),
verbose = 1,
callbacks = [EarlyStopping(monitor='val_accuracy', patience=5, restore_best_weights=True)]
)
model1.evaluate(X_train, y_train)
model1.evaluate(X_test, y_test)
pd.DataFrame(history1.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
y_pred1 = model1.predict(X_test)
zo_loss(y_test, y_pred1)
###Output
_____no_output_____
###Markdown
Nesterov and exponential decay
###Code
# define the learning rate change
def exp_decay(epoch):
lrate = learning_rate * np.exp(-decay_rate*epoch)
return lrate
early_stop = EarlyStopping(monitor='val_accuracy', patience=5, restore_best_weights=True)
epochs = 30
decay_rate = 1e-6
momentum = 0.9
learning_rate = 0.01
sgd = SGD(lr=learning_rate, momentum=momentum, decay=decay_rate, nesterov=True)
loss_history = History()
lr_rate = LearningRateScheduler(exp_decay)
callbacks_list = [loss_history, lr_rate, early_stop]
model2 = keras.Sequential()
model2.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model2.add(keras.layers.Dense(1000, activation="relu"))
model2.add(keras.layers.Dense(400, activation="relu"))
model2.add(keras.layers.Dense(10, activation="softmax"))
model2.compile(loss = keras.losses.categorical_crossentropy,
optimizer = sgd,
metrics = ["accuracy"])
model2.summary()
%%time
model2_history = model2.fit(X_train, y_train, epochs=epochs,
verbose=1, callbacks=callbacks_list,
validation_data=(X_val, y_val))
pd.DataFrame(model2_history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
model2.evaluate(X_train, y_train)
model2.evaluate(X_test, y_test)
y_pred2 = model2.predict(X_test)
zo_loss(y_test, y_pred2)
###Output
_____no_output_____
###Markdown
Dropout
###Code
model3 = keras.Sequential()
model3.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model3.add(keras.layers.Dense(1000, activation="relu"))
model3.add(keras.layers.Dropout(0.1))
model3.add(keras.layers.Dense(400, activation="relu"))
model3.add(keras.layers.Dropout(0.2))
model3.add(keras.layers.Dense(10, activation="softmax"))
model3.compile(loss = keras.losses.categorical_crossentropy,
optimizer = sgd,
metrics = ["accuracy"])
model3.summary()
%%time
history3 = model3.fit(X_train, y_train, epochs=epochs,
verbose=1, callbacks=callbacks_list,
validation_data=(X_val, y_val))
print(model3.evaluate(X_train, y_train))
print(model3.evaluate(X_test, y_test))
pd.DataFrame(history3.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
y_pred3 = model3.predict(X_test)
zo_loss(y_test, y_pred3)
###Output
_____no_output_____
###Markdown
L1 and L2 regularizers
###Code
model4 = keras.Sequential()
model4.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model4.add(keras.layers.Dense(1000, activation="relu", kernel_regularizer=regularizers.l1_l2()))
model4.add(keras.layers.Dense(400, activation="relu", kernel_regularizer=regularizers.l1_l2()))
model4.add(keras.layers.Dense(10, activation="softmax"))
model4.compile(loss = keras.losses.categorical_crossentropy,
optimizer = sgd,
metrics = ["accuracy"])
model4.summary()
%%time
history4 = model4.fit(X_train, y_train, epochs=epochs,
verbose=1, callbacks=callbacks_list,
validation_data=(X_val, y_val))
pd.DataFrame(history4.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
print(model4.evaluate(X_train, y_train))
print(model4.evaluate(X_test, y_test))
y_pred4 = model4.predict(X_test)
print(zo_loss(y_test, y_pred4))
###Output
_____no_output_____
###Markdown
Hyperparameters Tuning
###Code
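# model-builder wrapped by KerasClassifier below so that the optimizer can be grid-searched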
def create_model(optimizer = 'adam'):
model = Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(Dense(1000, activation=tf.nn.relu))
model.add(Dense(400, activation=tf.nn.relu))
model.add(Dense(10, activation=tf.nn.softmax))
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
return model
epochs = 30
model_CV = KerasClassifier(build_fn=create_model, epochs=epochs, verbose=1)
# define the grid search parameters
optimizer = ['adam', 'rmsprop', 'adamax', 'nadam']
param_grid = dict(optimizer=optimizer)
grid = GridSearchCV(estimator=model_CV, param_grid=param_grid, cv=5)
grid_result = grid.fit(X_train, y_train, callbacks=callbacks_list,
validation_data=(X_val, y_val))
# print results
print(f'Best Accuracy for {grid_result.best_score_} using {grid_result.best_params_}')
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print(f' mean={mean:.5}, std={stdev:.5} using {param}')
def create_model_SGD(nl1=1, nl2=1, nl3=1,
nn1=200, nn2=100, nn3 = 50, l1=0.01, l2=0.01,
dropout=0, output_shape=10, opt = sgd, act = 'relu'):
reg = keras.regularizers.l1_l2(l1=l1, l2=l2)
model = Sequential()
model.add(Flatten(input_shape=[32, 32, 3]))
first=True
for i in range(nl1):
if first:
model.add(Dense(nn1, activation=act, kernel_regularizer=reg))
first=False
else:
model.add(Dense(nn1, activation=act, kernel_regularizer=reg))
if dropout!=0:
model.add(Dropout(dropout))
for i in range(nl2):
if first:
model.add(Dense(nn2, activation=act, kernel_regularizer=reg))
first=False
else:
model.add(Dense(nn2, activation=act, kernel_regularizer=reg))
if dropout!=0:
model.add(Dropout(dropout))
for i in range(nl3):
if first:
model.add(Dense(nn3, activation=act, kernel_regularizer=reg))
first=False
else:
model.add(Dense(nn3, activation=act, kernel_regularizer=reg))
if dropout!=0:
model.add(Dropout(dropout))
model.add(Dense(output_shape, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer= opt, metrics=['accuracy'],)
return model
model_SGD = KerasClassifier(build_fn=create_model_SGD, epochs=30, verbose=1)
# numbers of layers
nl1 = [0,1,2,3]
nl2 = [0,1,2,3]
nl3 = [0,1,2,3]
# neurons in each layer
nn1=[1000, 1500, 2000,]
nn2=[500,1000,1500]
nn3=[250,500,1000]
# dropout and regularisation
dropout = [0, 0.1, 0.2, 0.3]
l1 = [0, 0.01, 0.003, 0.001,0.0001]
l2 = [0, 0.01, 0.003, 0.001,0.0001]
# dictionary summary
param_grid = dict(nl1=nl1, nl2=nl2, nl3=nl3, nn1=nn1, nn2=nn2, nn3=nn3,
l1=l1, l2=l2, dropout=dropout)
grid1 = RandomizedSearchCV(estimator=model_SGD, cv=KFold(5), param_distributions=param_grid,
verbose=20, n_iter=10)
grid_result_SGD = grid1.fit(X_train, y_train,
verbose=1, callbacks=callbacks_list,
validation_data=(X_val, y_val))
grid_result_SGD.best_params_
best_SGD = grid_result_SGD.best_estimator_
best_SGD.model.save("/content/drive/MyDrive/ML_NN/sgd")
tunedSGD = keras.models.load_model("/content/drive/MyDrive/ML_NN/sgd")
tunedSGD.summary()
historySGD = tunedSGD.fit(X_train, y_train,
verbose=1, callbacks=callbacks_list,
validation_data=(X_val, y_val),
epochs = 30)
tunedSGD.evaluate(X_train, y_train)
tunedSGD.evaluate(X_test, y_test)
pd.DataFrame(historySGD.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
y_pred = tunedSGD.predict(X_test)
print(zo_loss(y_test, y_pred))
def create_model_AM(nl1=1, nl2=1, nl3=1,
nn1=200, nn2=100, nn3 = 50, l1=0.01, l2=0.01,
dropout=0, output_shape=10, opt = keras.optimizers.Adamax(), act = 'relu'):
reg = keras.regularizers.l1_l2(l1=l1, l2=l2)
model = Sequential()
model.add(Flatten(input_shape=[32, 32, 3]))
first=True
for i in range(nl1):
if first:
model.add(Dense(nn1, activation=act, kernel_regularizer=reg)) #, kernel_initializer= init
first=False
else:
model.add(Dense(nn1, activation=act, kernel_regularizer=reg)) #, kernel_initializer= init
if dropout!=0:
model.add(Dropout(dropout))
for i in range(nl2):
if first:
model.add(Dense(nn2, activation=act, kernel_regularizer=reg)) #, kernel_initializer= init
first=False
else:
model.add(Dense(nn2, activation=act, kernel_regularizer=reg)) #, kernel_initializer= init
if dropout!=0:
model.add(Dropout(dropout))
for i in range(nl3):
if first:
model.add(Dense(nn3, activation=act, kernel_regularizer=reg)) #, kernel_initializer= init
first=False
else:
model.add(Dense(nn3, activation=act, kernel_regularizer=reg)) #, kernel_initializer= init
if dropout!=0:
model.add(Dropout(dropout))
model.add(Dense(output_shape, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer= opt, metrics=['accuracy'],)
return model
model_Adamax = KerasClassifier(build_fn=create_model_AM, epochs=30, verbose=1)
grid2 = RandomizedSearchCV(estimator= model_Adamax, cv=KFold(5), param_distributions=param_grid, verbose=20, n_iter=10)
grid_result_AM = grid2.fit(X_train, y_train, callbacks=callbacks_list,
validation_data=(X_val, y_val))
grid_result_AM.best_params_
best_AM = grid_result_AM.best_estimator_
best_AM.model.save('/content/drive/MyDrive/ML_NN/adamax')
tunedAdamax = keras.models.load_model("/content/drive/MyDrive/ML_NN/adamax")
tunedAdamax.summary()
historyAdamax = tunedAdamax.fit(X_train, y_train,
verbose=1, callbacks=callbacks_list,
validation_data=(X_val, y_val),
epochs = 30)
pd.DataFrame(historyAdamax.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
tunedAdamax.evaluate(X_train, y_train)
tunedAdamax.evaluate(X_test, y_test)
y_pred = tunedAdamax.predict(X_test)
print(zo_loss(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Convolutional Neural Network VGG16 Convolutional Network
###Code
model = Sequential()
model.add(Conv2D(input_shape=[32,32,3],filters=64,kernel_size=(3,3),padding="same", activation="relu"))
model.add(Conv2D(filters=64,kernel_size=(3,3),padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model.add(Flatten())
model.add(Dense(units=4096,activation="relu"))
model.add(Dense(units=4096,activation="relu"))
model.add(Dense(units=10, activation="softmax"))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='adamax',
metrics=['accuracy'])
history = model.fit(X_train,y_train,
epochs=20,
validation_data=(X_val, y_val),
verbose=1, shuffle=True)
model.evaluate(X_train, y_train)
model.evaluate(X_test, y_test)
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
y_pred = model.predict(X_test)
print(zo_loss(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Res-Net 34
###Code
class ResidualUnit(keras.layers.Layer):
def __init__(self, filters, strides=1, activation="relu", **kwargs):
super().__init__(**kwargs)
self.activation = keras.activations.get(activation)
self.main_layers = [
keras.layers.Conv2D(filters, 3, strides=strides, padding="same", use_bias=False),
keras.layers.BatchNormalization(),
self.activation, keras.layers.Conv2D(filters, 3, strides=1, padding="same", use_bias=False), keras.layers.BatchNormalization()]
self.skip_layers = []
if strides > 1:
self.skip_layers = [keras.layers.Conv2D(filters, 1, strides=strides, padding="same", use_bias=False), keras.layers.BatchNormalization()]
def call(self, inputs):
Z = inputs
for layer in self.main_layers:
Z = layer(Z)
skip_Z = inputs
for layer in self.skip_layers:
skip_Z = layer(skip_Z)
return self.activation(Z + skip_Z)
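# Each ResidualUnit adds a skip connection (identity, or a strided 1x1 convolution when the
# spatial size / filter count changes) to the main convolutional path before the final activation.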
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(64, 7, strides=2, input_shape=[32, 32, 3],
padding="same", use_bias=False))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("relu"))
model.add(keras.layers.MaxPool2D(pool_size=3, strides=2, padding="same"))
prev_filters = 64
for filters in [64] * 3 + [128] * 4 + [256] * 6 + [512] * 3:
strides = 1 if filters == prev_filters else 2
model.add(ResidualUnit(filters, strides=strides))
prev_filters = filters
model.add(keras.layers.GlobalAvgPool2D())
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(10, activation="softmax"))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='adamax',
metrics=['accuracy'])
history = model.fit(X_train,y_train,
epochs=20,
validation_data=(X_val, y_val),
verbose=1, shuffle=True)
model.evaluate(X_train, y_train)
model.evaluate(X_test, y_test)
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
y_pred = model.predict(X_test)
print(zo_loss(y_test, y_pred))
###Output
_____no_output_____ |
FEEG6016 Simulation and Modelling/SDE-Lecture-1.ipynb | ###Markdown
Stochastic differential equations

Suggested references:

* C Gardiner, *Stochastic Methods: A Handbook for the Natural and Social Sciences*, Springer.
* D Higham, *An algorithmic introduction to Stochastic Differential Equations*, SIAM Review.

Kostas Zygalakis

A differential equation (homogeneous) can be written in *differential form* as \begin{equation} \frac{dX}{dt} = f(X)\end{equation} or in *integral form* as \begin{equation} dX = f(X) \, dt.\end{equation} This has formal solution \begin{equation} X(t) = X_0 + \int_0^t f(X(s)) \, ds.\end{equation}

A *stochastic* differential equation can be written in *differential form* as \begin{equation} \frac{dX}{dt} = f(X) + g(X) \frac{dW}{dt}\end{equation} or in *integral form* as \begin{equation} dX = f(X) \, dt + g(X) dW.\end{equation} This has formal solution \begin{equation} X(t) = X_0 + \int_0^t f(X(s)) \, ds + \int_0^t g(X(s)) \, dW_s.\end{equation} It is this final term that we need to know how to solve in order to compute the solution of an SDE.

Toy problem

Remember that, for $a$ constant, the ODE \begin{equation} \frac{dX}{dt} = aX\end{equation} has the solution $X(t) = X(0) e^{at}$ which, as $t \to \infty$, tends either to $0$ (for $a < 0$) or to infinity (for $a > 0$).

Exercise

In order to think about existence and uniqueness of solutions, consider $f(X) = \sqrt{X}$ with $X(0) = 0$.

Integrals

Two equivalent forms we can use: \begin{align} X(t) &= X(0) + \int_0^t f(X(s)) \, ds, \\ X(t+h) &= X(t) + \int_t^{t+h} f(X(s)) \, ds.\end{align} Now, graphically we can think about approximating the integral on $[t, t+h]$ by the minimum *or* maximum values of $f$ over the interval, multiplied by the width of the interval. Equivalently we could use the values at the start of the interval ($t$) or the end ($t+h$). This leads to the two approximate solutions \begin{align} X(t+h) &\approx X(t) + h f(X(t)), \\ X(t+h) &\approx X(t) + h f(X(t+h)).\end{align} In both cases the error is ${\cal O}(h^2)$.

The first method is the *explicit Euler* method; the second is the *implicit Euler* method. Written out in standard notation where $X_n \approx X(n h)$ we have \begin{align} X_{n+1} &= X_n + h f(X_n), \\ X_{n+1} &= X_n + h f(X_{n+1}),\end{align} for explicit and implicit Euler respectively. Both Euler methods are first order in the sense that \begin{equation} | X(T) - X^h(T) | \le C(T) h\end{equation} where $X^h$ is the numerical approximation from the method and $C(T)$ is a constant depending only on time.

Which method is better?

Implicit Euler allows you to take larger timesteps, particularly for *stiff* problems where $f$ varies rapidly. This is particularly important for SDEs where the Brownian motion does vary extremely rapidly.

Limiting behaviour and discretization

The problem \begin{equation} \frac{dX}{dt} = -b X, \quad b > 0\end{equation} has limiting value $\lim_{t \to +\infty} X(t) = 0$. Applying Euler's *explicit* method to this problem gives \begin{align} && X_{n+1} &= X_n - b h X_n \\ \implies && X_{n+1} &= (1 - bh) X_n\end{align} In order to get the correct limiting behaviour we will need $|1 - bh| < 1$. If $b \gg 1$ then this implies that $h$ will have to be very small.

If instead we use the *implicit* Euler method we get \begin{align} && X_{n+1} &= X_n - b h X_{n+1} \\ \implies && X_{n+1} &= \frac{1}{1 + bh} X_n\end{align} which results in the timestep requirement $|1 + bh| > 1$, which is always true for $h>0$ and $b>0$.
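As a small illustration (my own sketch, not part of the original lecture notes), the two Euler updates for the stiff test problem above can be compared directly in numpy; for implicit Euler the update can be solved in closed form as $X_{n+1} = X_n / (1 + bh)$:

```python
import numpy

def euler_explicit(b, X0, h, n_steps):
    X = numpy.empty(n_steps + 1)
    X[0] = X0
    for n in range(n_steps):
        X[n + 1] = (1.0 - b * h) * X[n]   # X_{n+1} = X_n + h f(X_n), with f(X) = -b X
    return X

def euler_implicit(b, X0, h, n_steps):
    X = numpy.empty(n_steps + 1)
    X[0] = X0
    for n in range(n_steps):
        X[n + 1] = X[n] / (1.0 + b * h)   # solve X_{n+1} = X_n - b h X_{n+1} for X_{n+1}
    return X

# With b = 50 and h = 0.1, |1 - bh| = 4 > 1, so explicit Euler blows up,
# while implicit Euler still decays monotonically towards zero.
print(euler_explicit(50.0, 1.0, 0.1, 5))
print(euler_implicit(50.0, 1.0, 0.1, 5))
```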
Stochastic Differential Equations Idea is that the process we're trying to model goes as\begin{equation} \frac{dX}{ds} = ( a + \text{"error"} ) X(s), \quad X(0) = X_0.\end{equation}We want to include the error in our measurements, or uncertainty, using a stochastic term. Our first assumption is that, over time, the error should average to zero. We will write this problem as\begin{equation} X(t, w) = X_0 + \int_0^t f(X(s, w))\, ds + \sigma W(t, w) = X_0 + \int_0^t f(X(s, w))\, ds + \sigma \int_0^t dW(s, w) \end{equation}Every time we solve this problem the function $W$ is going to be a new function, so we will get a different answer.As the final term formally integrates to\begin{equation} \sigma \int_0^t dW_s = \sigma ( W(t) - W(0) ),\end{equation}we need that $W(0) = 0$ to match our requirements. Assumptions1. $W(0, w) = 0$ ("no error accumulated at time 0").2. $\mathbb{E} (W(t+h, w) - W(t, w)) = 0$ ("error averages to zero").3. $W(t_2, w) - W(t_1, w)$ is independent of $W(s_2, w) - W(s_1, w)$ for all times $s_1 < s_2 < t_1 < t_2$, where independent $X, Y$ means that $\mathbb{E}(XY) = \mathbb{E}(X) \mathbb{E}(Y)$.Quick reminder: if $X$ is a random variable that can take values in $(-\infty, +\infty)$, then it has probability distribution function $f(x)$ if the probability of it taking value less than $\gamma$ is\begin{equation} \mathbb{P}(X < \gamma) = \int_{-\infty}^{\gamma} f(x) \, dx.\end{equation}Then the expectation value of $X$ is\begin{equation} \mathbb{E}(X) = \int_{-\infty}^{\infty} x f(x) \, dx.\end{equation}The point then is that the various assumptions imply that $W(t+h) - W(t)$ depends on $h$, but **only** on $h$. Brownian motionDefinition follows the assumptions above:1. $W(0) = 0$.2. $W(t) - W(s) \sim N(0, |t - s|)$.3. $\text{Cov}(W(t), W(s)) = \mathbb{E} \left[ ( W(t) - \mathbb{E}(W(t)) ) ( W(s) - \mathbb{E}(W(s)) ) \right] = \min\{s, t\}$. Now let's generate this explicitly.
###Code
import numpy
T = 1.0
h = 0.01
N = int(T/h)
t = numpy.linspace(0.0, T, N)
dW = numpy.sqrt(h)*numpy.random.randn(N)
W = numpy.cumsum(dW)
W = W - W[0]
%matplotlib notebook
from matplotlib import pyplot
pyplot.plot(t, W, 'r')
pyplot.xlabel(r'$t$')
pyplot.ylabel(r'$W$');
###Output
_____no_output_____ |
Machine_Learning_In_Python_Essential_Techniques_For_Predictive_Analysis_CHP2_Glass.ipynb | ###Markdown
Way1 - Use pandas.read_csv to get data
###Code
from urllib.request import urlopen
import pandas as pd
import sys
target_url = ("http://archive.ics.uci.edu/ml/machine-learning-databases/glass/glass.data")
glass_df_org = pd.read_csv(target_url, header=None, sep=",", prefix = "V", encoding='ascii')
glass_df_org
import re
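# Coerce a raw text token to float (decimal pattern) or int; otherwise return it unchanged as a string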
def ConvertNum(input_obj):
try:
if(re.match(r'^[-+]?[0-9]+\.[0-9]+$', str(input_obj)) is not None):
return float(input_obj)
else:
return int(input_obj)
except ValueError:
try:
return int(input_obj)
except ValueError:
return str(input_obj)
import numpy as np
def TruncateCategory(data_input):
all_columns = data_input.columns
row_num = len(data_input.values)
ret_tru_list = []
for i in data_input.columns:
for j in range(row_num):
if(isinstance(data_input[i][j], (int, np.int64))):
next
elif(isinstance(data_input[i][j], (float, np.float64))):
next
else:
ret_tru_list.append(i)
print(f'type of {data_input[i][j]} = {type(data_input[i][j])}')
break
return ret_tru_list
converted_x_list = []
for x in glass_df_org.values:
try:
converted_x_list.append([pd.to_numeric(ConvertNum(j)) for j in x])
except ValueError:
next
glass_df = pd.DataFrame(converted_x_list, columns=glass_df_org.columns)
glass_df.columns = [ 'Id', 'Ri', 'Na', 'Mg', 'Al', 'Si', 'K', 'Ca', 'Ba', 'Fe', 'Type']
glass_df
###Output
_____no_output_____
###Markdown
Way2 - Use BeautifulSoup to get data
###Code
import requests
from bs4 import BeautifulSoup
#Disable Warning
requests.packages.urllib3.disable_warnings()
#Intrepretining the response
response = requests.get(target_url, cookies = {'over18':"1"}, verify = False)
soup = BeautifulSoup(response.text, 'lxml')
print(soup.prettify())
import re
X = []
Y = []
data_core = soup.find('p').get_text()
data_core_list = data_core.split('\n')
for data_line in data_core_list:
if(re.match(r'\S+', data_line)):
#print(f'row0 = {data_line}')
row1 = data_line.strip()
#print(f'row1 = {row1}')
row = data_line.strip().split(',')
row = [ConvertNum(x) for x in row]
X.append(row)
header_col = ['V'+str(x) for x in range(0, len(X[0]))]
glass_df_org = pd.DataFrame(X, columns=header_col)
glass_df_org
#print('Number of Rows of Data = {x}'.format(x = len(X)))
#print('Number of Columns of Data = {y}'.format(y = len(X[0])))
converted_x_list = []
for x in glass_df_org.values:
try:
converted_x_list.append([pd.to_numeric(ConvertNum(j)) for j in x])
except ValueError:
next
glass_df = pd.DataFrame(converted_x_list, columns=glass_df_org.columns)
glass_df.columns = [ 'Id', 'Ri', 'Na', 'Mg', 'Al', 'Si', 'K', 'Ca', 'Ba', 'Fe', 'Type']
glass_df
###Output
_____no_output_____
###Markdown
Get the statistics of the data
###Code
glass_df.describe()
ncol = len(glass_df.columns)
nrow = len(glass_df[glass_df.columns[0]])
count_col = 0
#print out the output statistics
print("Output:")
print('{sp:>3} {x:>5} {y:>5} {z:>5} {h:>5}'.format(sp="Col#", x="Int", y="Float", z="String", h='Others'))
for col in range(ncol):
type_list = [0]*4
for row in glass_df.values:
val = row[col]
if(isinstance(val, int)):
type_list[0] += 1
elif(isinstance(val, float)):
type_list[1] += 1
elif(isinstance(val, str)):
type_list[2] += 1
else:
type_list[3] += 1
print('{sp:03} {x:>5} {y:>5} {z:>5} {h:>5}'.format(sp=count_col, x=type_list[0], y=type_list[1], z=type_list[2], h=type_list[3]))
count_col += 1
###Output
Output:
Col# Int Float String Others
000 0 214 0 0
001 0 214 0 0
002 0 214 0 0
003 0 214 0 0
004 0 214 0 0
005 0 214 0 0
006 0 214 0 0
007 0 214 0 0
008 0 214 0 0
009 0 214 0 0
010 0 214 0 0
###Markdown
Calculate max/min/mean/std/percentiles 4 quantiles
###Code
import numpy as np
print('{sp:>3} {x:>9} {y:>9} {h:>11}\
{two_five:>5} {five_zero:>30} {seven_five:>9} {z:>10}'.format(sp="Col#",
x="Mean",
y="Std",
h='Min',
two_five='25%',
five_zero='50%',
seven_five='75%',
z="Max"))
count_col = 0
for col in glass_df.columns:
#print(f'col = {col}')
data_col = np.array(glass_df[col])
max_data_col = np.max(data_col)
min_data_col = np.min(data_col)
mean_data_col = np.mean(data_col)
std_data_col = np.std(data_col, ddof=1)
two_five_percentile = np.percentile(data_col, 25)
five_zero_percentile = np.percentile(data_col, 50)
seven_five_percentile = np.percentile(data_col, 75)
#hundred_percentile = np.percentile(data_col, 100)
#zero_percentile = np.percentile(data_col, 0)
#print(f'hundred_percentile = {hundred_percentile}')
#print(f'zero_percentile = {zero_percentile}')
print('{sp:>03} {x:>13.5f} {y:>10.5f} {h:>11.5f} {two_five:>11.5f} {five_zero:>30.5f} {seven_five:9.5f} {z:>10.5f}'.format(sp=count_col,
x=mean_data_col,
y=std_data_col,
h=min_data_col,
two_five=two_five_percentile,
five_zero=five_zero_percentile,
seven_five=seven_five_percentile,
z=max_data_col,))
count_col += 1
###Output
Col# Mean Std Min 25% 50% 75% Max
000 107.50000 61.92065 1.00000 54.25000 107.50000 160.75000 214.00000
001 1.51837 0.00304 1.51115 1.51652 1.51768 1.51916 1.53393
002 13.40785 0.81660 10.73000 12.90750 13.30000 13.82500 17.38000
003 2.68453 1.44241 0.00000 2.11500 3.48000 3.60000 4.49000
004 1.44491 0.49927 0.29000 1.19000 1.36000 1.63000 3.50000
005 72.65093 0.77455 69.81000 72.28000 72.79000 73.08750 75.41000
006 0.49706 0.65219 0.00000 0.12250 0.55500 0.61000 6.21000
007 8.95696 1.42315 5.43000 8.24000 8.60000 9.17250 16.19000
008 0.17505 0.49722 0.00000 0.00000 0.00000 0.00000 3.15000
009 0.05701 0.09744 0.00000 0.00000 0.00000 0.10000 0.51000
010 2.78037 2.10374 1.00000 1.00000 2.00000 3.00000 7.00000
###Markdown
10 quantiles
###Code
import numpy as np
ten_percentiles = []
count_col = 0
ntiles = 10
for col in glass_df.columns:
ten_percentiles = []
data_col = np.array(glass_df[col])
max_data_col = np.max(data_col)
min_data_col = np.min(data_col)
ten_percentiles = [np.percentile(data_col, (i/ntiles)*100) for i in range(ntiles+1)]
print(f'col = {col}, {ten_percentiles}')
count_col+=1
###Output
col = Id, [1.0, 22.3, 43.6, 64.9, 86.2, 107.5, 128.8, 150.1, 171.4, 192.70000000000002, 214.0]
col = Ri, [1.51115, 1.5159060000000002, 1.516302, 1.516697, 1.5173519999999998, 1.51768, 1.51811, 1.518693, 1.520292, 1.522107, 1.53393]
col = Na, [10.73, 12.683, 12.85, 13.0, 13.158, 13.3, 13.44, 13.700999999999999, 14.018, 14.397, 17.38]
col = Mg, [0.0, 0.0, 0.6000000000000006, 2.8049999999999997, 3.39, 3.48, 3.5380000000000003, 3.58, 3.6340000000000003, 3.757, 4.49]
col = Al, [0.29, 0.8700000000000001, 1.146, 1.23, 1.29, 1.36, 1.488, 1.56, 1.7480000000000002, 2.0740000000000003, 3.5]
col = Si, [69.81, 71.773, 72.132, 72.389, 72.662, 72.79, 72.944, 73.021, 73.144, 73.297, 75.41]
col = K, [0.0, 0.0, 0.08, 0.19, 0.49200000000000005, 0.555, 0.57, 0.6, 0.62, 0.68, 6.21]
col = Ca, [5.43, 7.970000000000001, 8.12, 8.339, 8.482, 8.6, 8.777999999999999, 9.020999999999999, 9.57, 10.443000000000007, 16.19]
col = Ba, [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.64, 3.15]
col = Fe, [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.07, 0.1280000000000001, 0.21999999999999997, 0.51]
col = Type, [1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 5.0, 7.0, 7.0]
###Markdown
Use Quantile-Quantile Plot to visualize the outliers
###Code
import scipy.stats as stats
%matplotlib inline
import pylab
col = 9
cat_col = glass_df.columns[col]
print(f'col = {cat_col}')
data_col = np.array(glass_df[cat_col])
stats.probplot(data_col, dist='norm', plot=pylab)
pylab.show()
sorted(data_col)
###Output
_____no_output_____
###Markdown
Scatter Plot among attributes
###Code
import matplotlib.pyplot as plot
%matplotlib inline
row_num = len(glass_df.values)
col_num = len(glass_df.columns)
data_row2 = glass_df.iloc[0:row_num, 1]
data_row3 = glass_df.iloc[0:row_num, 2]
plot.xlabel('2nd Attribute')
plot.ylabel('3rd Attribute')
plot.scatter(data_row2, data_row3)
data_row2 = glass_df.iloc[0:row_num, 1]
data_row8 = glass_df.iloc[0:row_num, 7]
plot.xlabel('2nd Attribute')
plot.ylabel('8th Attribute')
plot.scatter(data_row2, data_row8)
###Output
_____no_output_____
###Markdown
BOX Plot
###Code
row_num = len(glass_df.values)
col_num = len(glass_df.columns)
print(f'row_num = {row_num}')
print(f'col_num = {col_num}')
import matplotlib.pyplot as plot
%matplotlib inline
glass_df_arr = glass_df.iloc[:, 0:col_num].values
plot.boxplot(glass_df_arr)
plot.xlabel('Attribute Index')
plot.ylabel('Quartile Ranges')
plot.show()
###Output
_____no_output_____
###Markdown
Use normalization instead of dropping V8 to see the details of other attributes
###Code
glass_df.copy().describe()
glass_df
glass_df_normalized = glass_df.copy()
summary_org = glass_df_normalized.describe()
for col in glass_df_normalized.columns:
glass_df_normalized[col][:] = (glass_df_normalized[col][:]-summary_org[col][1])/summary_org[col][2]
glass_df_normalized
glass_df_normalized.describe()
# column 0 is the ID number, not data.
glass_df_arr = glass_df_normalized.iloc[:, 1:col_num].values
plot.boxplot(glass_df_arr)
plot.xlabel('Attribute Index')
plot.ylabel('Quartile Ranges')
plot.show()
###Output
_____no_output_____
###Markdown
Parallel Coordinate Plot of the Glass Dataset
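Newer pandas versions also ship a ready-made helper for this kind of plot; a minimal sketch (assuming the `Id` and `Type` column names shown in the summaries above) that can serve as a cross-check for the manual row-by-row plotting in the following cells:

```python
from pandas.plotting import parallel_coordinates
import matplotlib.pyplot as plot

# colour each line by its glass Type; drop the Id column since it is not an attribute
parallel_coordinates(glass_df.drop(columns='Id'), 'Type', colormap=plot.cm.RdYlBu, alpha=0.4)
plot.show()
```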
###Code
glass_df
summary = glass_df.describe()
summary
###Output
_____no_output_____
###Markdown
Use zero-mean normalization of the Type (target) column to set the line colours in the plot
###Code
import math
row_num = len(glass_df.values)
col_num = len(glass_df.columns)
mean_quality = summary.iloc[1, col_num-1]
std_quality = summary.iloc[2, col_num-1]
for i in range(row_num):
data_row = glass_df.iloc[i, 0:col_num-1]
label_color = (glass_df.iloc[i, col_num-1]-mean_quality)/std_quality
label_color = 1/(1+math.exp(-label_color))
data_row.plot(color=plot.cm.RdYlBu(label_color), alpha=0.5)
plot.xlabel("Attribute Index")
plot.ylabel("Attribute Values")
plot.show()
###Output
_____no_output_____
###Markdown
Use Zero-Mean Normalization of all columns to plot
###Code
glass_df_normalized = glass_df.iloc[:, 0:col_num-1].copy()
summary_org = glass_df_normalized.describe()
for col in glass_df_normalized.columns:
glass_df_normalized[col][:] = (glass_df_normalized[col][:]-summary_org[col][1])/summary_org[col][2]
glass_df_normalized
import math
row_num = len(glass_df_normalized.values)
col_num = len(glass_df_normalized.columns)
summary = glass_df_normalized.describe()
mean_quality = summary.iloc[1, col_num-1]
std_quality = summary.iloc[2, col_num-1]
for i in range(row_num):
data_row = glass_df_normalized.iloc[i, 1:col_num]
label_color = (glass_df.iloc[i, col_num])/7.0
data_row.plot(color=plot.cm.RdYlBu(label_color), alpha=0.5)
plot.xlabel("Attribute Index")
plot.ylabel("Attribute Values")
plot.show()
###Output
_____no_output_____
###Markdown
Heat Map
###Code
cor_mat = pd.DataFrame(glass_df.iloc[:, 1:col_num].corr())
cor_mat
plot.pcolor(cor_mat)
plot.show()
###Output
_____no_output_____ |
notebooks/sentiment_scoring_400K_tweets.ipynb | ###Markdown
Classifying the Complete Dataset* The tuned logistic regression baseline classifier will be used in this notebook to score tweet sentiment.* VADER compound scores will also be generated and compared to the baseline model.* By comparing sentiment scores of the covid and non-covid DataFrames, we will begin to assess the impact covid has on tweet sentiment.
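As a sketch of that comparison (it assumes the VADER `compound` column that is added later in this notebook), the covid and non-covid groups can be contrasted directly on the labelled DataFrame:

```python
# summary of VADER compound scores by covid_mention (0 = no mention, 1 = mention)
df_full.groupby('covid_mention')['compound'].describe()
```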
###Code
import sys
sys.path.insert(0, '/Users/lclark/data_bootcamp/data-science-final-project/scripts/')
# Import custom functions
from functions import *
pd.set_option('display.max_colwidth', None)
###Output
_____no_output_____
###Markdown
Importing Filtered Tweets
###Code
# Loading filtered tweets from pickle file
df_full = pd.read_pickle('~/data_bootcamp/data-science-final-project/data/df_filtered_tweets_master.pkl')
# All the files below are a subset of df_filtered_tweets_master
#df_no_retweets = pd.read_pickle('~/data_bootcamp/data-science-final-project/data/df_original_tweets.pkl')
#df_no_rt_covid = pd.read_pickle('~/data_bootcamp/data-science-final-project/data/df_original_tweets_covid_mention.pkl')
#df_no_rt_no_covid = pd.read_pickle('~/data_bootcamp/data-science-final-project/data/df_original_tweets_no_covid.pkl')
###Output
_____no_output_____
###Markdown
Load Model
###Code
lr_model = pickle.load(open('/Users/lclark/data_bootcamp/data-science-final-project/models/LogReg_GridCV_3C_87p_40kfeats.sav', 'rb'))
lr_model.best_params_
###Output
_____no_output_____
###Markdown
Classifying Tweets Logistic Regression Classification* Given that the full dataset is roughly 25% original tweets versus retweets, analyzing the full dataset may provide us with an indication of whether people tend to retweet positive or negative tweets more frequently
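One quick way to probe that question, once the predicted labels below have been added, is to split on the existing `is_retweet` flag; a sketch (not run here):

```python
# share of each predicted sentiment label among retweets vs. original tweets
df_full.groupby('is_retweet')['lr_labels'].value_counts(normalize=True)
```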
###Code
df_full['full_text_clean'] = df_full['full_clean'].apply(joiner)
vectorizer = TfidfVectorizer(use_idf=True, lowercase=True, ngram_range=(1,2), max_features=40000)
X = vectorizer.fit_transform(df_full.full_text_clean)
df_full['lr_labels'] = lr_model.predict(X)
df_full.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 420634 entries, 1294232573636304896 to 1333143090723319808
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 created_at 420634 non-null datetime64[ns, UTC]
1 full_text 420634 non-null object
2 vader_text 420634 non-null object
3 no_hashtags 420634 non-null object
4 full_clean 420634 non-null object
5 covid_mention 420634 non-null int64
6 retweet_count 420634 non-null int64
7 user_name 420634 non-null object
8 is_retweet 420634 non-null int64
9 full_text_clean 420634 non-null object
10 lr_labels 420634 non-null int64
dtypes: datetime64[ns, UTC](1), int64(4), object(6)
memory usage: 38.5+ MB
###Markdown
VADER
###Code
%%time
# Analyze tweets, extract scores from dictionary result, drop dictionary result, categorize
df_full['vader_text'] = df_full['full_text'].apply(vader_preprocess)
df_full = vader_score_to_series(df_full)
# Testing wider thresholds than default +-0.05 of 0
#
df_full['vader_label_wider_neu'] = df_full['compound'].apply(lambda x: categorize(x, upper = 0.1,lower = -0.1))
df_full['vader_label_wider_neu'].value_counts().sort_index()
df_full.compound.describe()
df_full[(df_full['vader_label'] == 4)][['created_at','vader_text','lr_labels','compound','vader_label']].sample(n=10)
###Output
_____no_output_____
###Markdown
Comparing Logistic Regression Classification with VADER
###Code
# Logistic Regression Value Counts
df_full['lr_labels'].value_counts().sort_index()
# VADER Value Counts with extracted full_text from retweet_status
df_full.vader_label.value_counts().sort_index()
###Output
_____no_output_____
###Markdown
VADER Value Counts before extracting the full_text from the retweet_status. If a tweet is a retweet, it will be truncated in the full_text column; you need to extract the full_text from the dictionary in retweet_status. Note: this comparison had a different number of tweets (more tweets in more recent tests), though the positive tweet count is less. This gives us some indication that negative sentiment is more strongly dictated by the end of a tweet than the beginning. Counts: 0 - 106859, 2 - 104546, 4 - 175328
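For reference, a minimal sketch of that extraction step as it would look upstream of this notebook, assuming raw Tweepy status objects fetched with `tweet_mode='extended'` (the helper name is hypothetical and not part of this project):

```python
def get_full_text(status):
    # retweets truncate full_text; the complete text lives on the retweeted status
    if hasattr(status, 'retweeted_status'):
        return status.retweeted_status.full_text
    return status.full_text
```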
###Code
# Create distributable labelled bcpoli dataset.
#df_full_distribute = df_full[['covid_mention','neg','neu','pos','compound','vader_label']].reset_index()
#df_full_distribute.to_pickle('/Users/lclark/data_bootcamp/data-science-final-project/data/bcpoli_vader_labelled_tweets.sav')
# Export labelled df_full
#df_full.to_pickle('/Users/lclark/data_bootcamp/data-science-final-project/data/bcpoli_labelled_tweets.pkl')
###Output
_____no_output_____ |
exoplanet_exploration/model_1.ipynb | ###Markdown
Read the CSV and Perform Basic Data Cleaning
###Code
import pandas as pd
df = pd.read_csv("exoplanet_data.csv")
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head()
###Output
_____no_output_____
###Markdown
Select your features (columns)
###Code
# Set features. This will also be used as your x values.
y = df["koi_disposition"]
X = df.drop(columns="koi_disposition")
###Output
_____no_output_____
###Markdown
Create a Train Test Split. Use `koi_disposition` for the y values
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1,stratify=y)
X_train.head()
###Output
_____no_output_____
###Markdown
Pre-processing. Scale the data using the MinMaxScaler and perform some feature selection
###Code
# Scale your data
from sklearn.preprocessing import MinMaxScaler
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Train the Model
###Code
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
###Output
Training Data Score: 0.8512302117108526
Testing Data Score: 0.847254004576659
###Markdown
Hyperparameter Tuning. Use `GridSearchCV` to tune the model's parameters
###Code
# Create the GridSearchCV model
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [1, 5, 10, 50],
              'penalty': ["l1","l2"]}
model = LogisticRegression(solver="liblinear")
grid = GridSearchCV(model, param_grid, verbose=3)
# Train the model with GridSearch
grid.fit(X_train_scaled, y_train)
print(grid.best_params_)
print(grid.best_score_)
###Output
{'C': 50, 'penalty': 'l1'}
0.8842287092760099
###Markdown
Save the Model
###Code
# save your model by updating "your_name" with your name
# and "your_model" with your model variable
# be sure to turn this in to BCS
# if joblib fails to import, try running the command to install in terminal/git-bash
import joblib
filename = 'initial_exploration.sav'
joblib.dump(grid, filename)
###Output
_____no_output_____ |
tests/python-scientific/scikit-learn-validation.ipynb | ###Markdown
Validation and Model Selection. Credits: Forked from [PyCon 2015 Scikit-learn Tutorial](https://github.com/jakevdp/sklearn_pycon2015) by Jake VanderPlas. In this section, we'll look at *model evaluation* and the tuning of *hyperparameters*, which are parameters that define the model.
###Code
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Use seaborn for plotting defaults
import seaborn as sns; sns.set()
###Output
_____no_output_____
###Markdown
Validating ModelsOne of the most important pieces of machine learning is **model validation**: that is, checking how well your model fits a given dataset. But there are some pitfalls you need to watch out for.Consider the digits example we've been looking at previously. How might we check how well our model fits the data?
###Code
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
###Output
_____no_output_____
###Markdown
Let's fit a K-neighbors classifier
###Code
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
###Output
_____no_output_____
###Markdown
Now we'll use this classifier to *predict* labels for the data
###Code
y_pred = knn.predict(X)
###Output
_____no_output_____
###Markdown
Finally, we can check how well our prediction did:
###Code
print("{0} / {1} correct".format(np.sum(y == y_pred), len(y)))
###Output
1797 / 1797 correct
###Markdown
It seems we have a perfect classifier!**Question: what's wrong with this?** Validation SetsAbove we made the mistake of testing our data on the same set of data that was used for training. **This is not generally a good idea**. If we optimize our estimator this way, we will tend to **over-fit** the data: that is, we learn the noise.A better way to test a model is to use a hold-out set which doesn't enter the training. We've seen this before using scikit-learn's train/test split utility:
###Code
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape
###Output
/srv/venv/lib/python3.6/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
###Markdown
Now we train on the training data, and validate on the test data:
###Code
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print("{0} / {1} correct".format(np.sum(y_test == y_pred), len(y_test)))
###Output
437 / 450 correct
###Markdown
This gives us a more reliable estimate of how our model is doing.The metric we're using here, comparing the number of matches to the total number of samples, is known as the **accuracy score**, and can be computed using the following routine:
###Code
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
This can also be computed directly from the ``model.score`` method:
###Code
knn.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Using this, we can ask how this changes as we change the model parameters, in this case the number of neighbors:
###Code
for n_neighbors in [1, 5, 10, 20, 30]:
knn = KNeighborsClassifier(n_neighbors)
knn.fit(X_train, y_train)
print(n_neighbors, knn.score(X_test, y_test))
###Output
1 0.971111111111
5 0.982222222222
10 0.975555555556
20 0.964444444444
30 0.964444444444
###Markdown
We see that in this case, a small number of neighbors seems to be the best option. Cross-ValidationOne problem with validation sets is that you "lose" some of the data. Above, we've only used 3/4 of the data for the training, and used 1/4 for the validation. Another option is to use **2-fold cross-validation**, where we split the sample in half and perform the validation twice:
###Code
X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)
X1.shape, X2.shape
print(KNeighborsClassifier(1).fit(X2, y2).score(X1, y1))
print(KNeighborsClassifier(1).fit(X1, y1).score(X2, y2))
###Output
0.983296213808
0.982202447164
###Markdown
Thus a two-fold cross-validation gives us two estimates of the score for that parameter.Because this is a bit of a pain to do by hand, scikit-learn has a utility routine to help:
###Code
from sklearn.cross_validation import cross_val_score
cv = cross_val_score(KNeighborsClassifier(1), X, y, cv=10)
cv.mean()
###Output
_____no_output_____
###Markdown
K-fold Cross-ValidationHere we've used 2-fold cross-validation. This is just one specialization of $K$-fold cross-validation, where we split the data into $K$ chunks and perform $K$ fits, where each chunk gets a turn as the validation set.We can do this by changing the ``cv`` parameter above. Let's do 10-fold cross-validation:
###Code
cross_val_score(KNeighborsClassifier(1), X, y, cv=10)
###Output
_____no_output_____
###Markdown
This gives us an even better idea of how well our model is doing. Overfitting, Underfitting and Model Selection Now that we've gone over the basics of validation, and cross-validation, it's time to go into even more depth regarding model selection.The issues associated with validation and cross-validation are some of the most importantaspects of the practice of machine learning. Selecting the optimal modelfor your data is vital, and is a piece of the problem that is not oftenappreciated by machine learning practitioners.Of core importance is the following question:**If our estimator is underperforming, how should we move forward?**- Use simpler or more complicated model?- Add more features to each observed data point?- Add more training samples?The answer is often counter-intuitive. In particular, **Sometimes using amore complicated model will give _worse_ results.** Also, **Sometimes addingtraining data will not improve your results.** The ability to determinewhat steps will improve your model is what separates the successful machinelearning practitioners from the unsuccessful. Illustration of the Bias-Variance TradeoffFor this section, we'll work with a simple 1D regression problem. This will help us toeasily visualize the data and the model, and the results generalize easily to higher-dimensionaldatasets. We'll explore a simple **linear regression** problem.This can be accomplished within scikit-learn with the `sklearn.linear_model` module.We'll create a simple nonlinear function that we'd like to fit
###Code
def test_func(x, err=0.5):
y = 10 - 1. / (x + 0.1)
if err > 0:
y = np.random.normal(y, err)
return y
###Output
_____no_output_____
###Markdown
Now let's create a realization of this dataset:
###Code
def make_data(N=40, error=1.0, random_seed=1):
# randomly sample the data
np.random.seed(1)
X = np.random.random(N)[:, np.newaxis]
y = test_func(X.ravel(), error)
return X, y
X, y = make_data(40, error=1)
plt.scatter(X.ravel(), y);
###Output
_____no_output_____
###Markdown
Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit:
###Code
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
model = LinearRegression()
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
###Output
_____no_output_____
###Markdown
We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is **biased**, or that it **under-fits** the data.Let's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the ``PolynomialFeatures`` preprocessor, which can be pipelined with a linear regression.Let's make a convenience routine to do this:
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
###Output
_____no_output_____
###Markdown
Now we'll use this to fit a quadratic curve to the data.
###Code
model = PolynomialRegression(2)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
###Output
_____no_output_____
###Markdown
This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
###Code
model = PolynomialRegression(30)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)))
plt.ylim(-4, 14);
###Output
_____no_output_____
###Markdown
When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a **high-variance model**, and we say that it **over-fits** the data. Just for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively:
###Code
from IPython.html.widgets import interact
def plot_fit(degree=1, Npts=50):
X, y = make_data(Npts, error=1)
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
model = PolynomialRegression(degree=degree)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.ylim(-4, 14)
plt.title("mean squared error: {0:.2f}".format(mean_squared_error(model.predict(X), y)))
interact(plot_fit, degree=[1, 30], Npts=[2, 100]);
###Output
/srv/venv/lib/python3.6/site-packages/IPython/html.py:14: ShimWarning: The `IPython.html` package has been deprecated since IPython 4.0. You should import from `notebook` instead. `IPython.html.widgets` has moved to `ipywidgets`.
"`IPython.html.widgets` has moved to `ipywidgets`.", ShimWarning)
###Markdown
Detecting Over-fitting with Validation CurvesClearly, computing the error on the training data is not enough (we saw this previously). As above, we can use **cross-validation** to get a better handle on how the model fit is working.Let's do this here, again using the ``validation_curve`` utility. To make things more clear, we'll use a slightly larger dataset:
###Code
X, y = make_data(120, error=1.0)
plt.scatter(X, y);
from sklearn.learning_curve import validation_curve
def rms_error(model, X, y):
y_pred = model.predict(X)
return np.sqrt(np.mean((y - y_pred) ** 2))
degree = np.arange(0, 18)
val_train, val_test = validation_curve(PolynomialRegression(), X, y,
'polynomialfeatures__degree', degree, cv=7,
scoring=rms_error)
###Output
/srv/venv/lib/python3.6/site-packages/sklearn/learning_curve.py:22: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the functions are moved. This module will be removed in 0.20
DeprecationWarning)
###Markdown
Now let's plot the validation curves:
###Code
def plot_with_err(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
plot_with_err(degree, val_train, label='training scores')
plot_with_err(degree, val_test, label='validation scores')
plt.xlabel('degree'); plt.ylabel('rms error')
plt.legend();
###Output
_____no_output_____
###Markdown
Notice the trend here, which is common for this type of plot.1. For a small model complexity, the training error and validation error are very similar. This indicates that the model is **under-fitting** the data: it doesn't have enough complexity to represent the data. Another way of putting it is that this is a **high-bias** model.2. As the model complexity grows, the training and validation scores diverge. This indicates that the model is **over-fitting** the data: it has so much flexibility, that it fits the noise rather than the underlying trend. Another way of putting it is that this is a **high-variance** model.3. Note that the training score (nearly) always improves with model complexity. This is because a more complicated model can fit the noise better, so the model improves. The validation data generally has a sweet spot, which here is around 5 terms.Here's our best-fit model according to the cross-validation:
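(Before fitting it, note that the sweet spot can also be read off programmatically from the arrays computed above; a minimal sketch using the `degree` and `val_test` variables from the earlier validation-curve cell:)

```python
best_degree = degree[val_test.mean(axis=1).argmin()]  # rms error: lower is better
best_degree
```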
###Code
model = PolynomialRegression(4).fit(X, y)
plt.scatter(X, y)
plt.plot(X_test, model.predict(X_test));
###Output
_____no_output_____
###Markdown
Detecting Data Sufficiency with Learning CurvesAs you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of *learning curves*, which display this property.The idea is to plot the mean-squared-error for the training and test set as a function of *Number of Training Points*
###Code
from sklearn.learning_curve import learning_curve
def plot_learning_curve(degree=3):
train_sizes = np.linspace(0.05, 1, 20)
N_train, val_train, val_test = learning_curve(PolynomialRegression(degree),
X, y, train_sizes, cv=5,
scoring=rms_error)
plot_with_err(N_train, val_train, label='training scores')
plot_with_err(N_train, val_test, label='validation scores')
plt.xlabel('Training Set Size'); plt.ylabel('rms error')
plt.ylim(0, 3)
plt.xlim(5, 80)
plt.legend()
###Output
_____no_output_____
###Markdown
Let's see what the learning curves look like for a linear model:
###Code
plot_learning_curve(1)
###Output
_____no_output_____
###Markdown
This shows a typical learning curve: for very few training points, there is a large separation between the training and test error, which indicates **over-fitting**. Given the same model, for a large number of training points, the training and testing errors converge, which indicates potential **under-fitting**.As you add more data points, the training error will never increase, and the testing error will never decrease (why do you think this is?)It is easy to see that, in this plot, if you'd like to reduce the MSE down to the nominal value of 1.0 (which is the magnitude of the scatter we put in when constructing the data), then adding more samples will *never* get you there. For $d=1$, the two curves have converged and cannot move lower. What about for a larger value of $d$?
###Code
plot_learning_curve(3)
###Output
_____no_output_____
###Markdown
Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!What if we get even more complex?
###Code
plot_learning_curve(10)
###Output
_____no_output_____
###Markdown
For an even more complex model, we still converge, but the convergence only happens for *large* amounts of training data.So we see the following:- you can **cause the lines to converge** by adding more points or by simplifying the model.- you can **bring the convergence error down** only by increasing the complexity of the model.Thus these curves can give you hints about how you might improve a sub-optimal model. If the curves are already close together, you need more model complexity. If the curves are far apart, you might also improve the model by adding more data.To make this more concrete, imagine some telescope data in which the results are not robust enough. You must think about whether to spend your valuable telescope time observing *more objects* to get a larger training set, or *more attributes of each object* in order to improve the model. The answer to this question has real consequences, and can be addressed using these metrics. SummaryWe've gone over several useful tools for model validation- The **Training Score** shows how well a model fits the data it was trained on. This is not a good indication of model effectiveness- The **Validation Score** shows how well a model fits hold-out data. The most effective method is some form of cross-validation, where multiple hold-out sets are used.- **Validation Curves** are a plot of validation score and training score as a function of **model complexity**: + when the two curves are close, it indicates *underfitting* + when the two curves are separated, it indicates *overfitting* + the "sweet spot" is in the middle- **Learning Curves** are a plot of the validation score and training score as a function of **Number of training samples** + when the curves are close, it indicates *underfitting*, and adding more data will not generally improve the estimator. + when the curves are far apart, it indicates *overfitting*, and adding more data may increase the effectiveness of the model. These tools are powerful means of evaluating your model on your data.
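As a closing illustration (a sketch under the same setup, not part of the original tutorial), the degree search above can also be wrapped in scikit-learn's `GridSearchCV`, which cross-validates each candidate model and keeps the best one:

```python
from sklearn.model_selection import GridSearchCV

grid = GridSearchCV(PolynomialRegression(),
                    param_grid={'polynomialfeatures__degree': range(0, 18)},
                    cv=7, scoring='neg_mean_squared_error')
grid.fit(X, y)
grid.best_params_
```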
###Code
# test complete; Gopal
###Output
_____no_output_____ |
Deep-Learning-for-Satellite-Imagery-master/LULC_Final.ipynb | ###Markdown
Land Use/Land Cover (LULC) classification with Deep Learning. This is a mini-project to classify 9 land use classes using transfer learning with Convolutional Neural Networks (CNN). The dataset used in this project is published with the original paper titled: __EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification__.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.plots import *
import rasterio
from rasterio import plot
import matplotlib.pyplot as plt
PATH = Path('/home/shakur/GeoProjects/EuroSat/Bands/')
train_path = PATH/'train'
classes = [str(f).split('/')[-1] for f in list(train_path.iterdir())]
###Output
_____no_output_____
###Markdown
Visualization Classes and Size
###Code
files = []
for i in classes:
paths =train_path/i
files.append(list(paths.iterdir())[0])
classes_num = {}
for i in classes:
folders = train_path/i
classes_num[i] = len(list(folders.iterdir()))
#print(f'{i} class has {len(list(folders.iterdir()))}')
plt.figure(figsize=(15,6))
plt.bar(classes_num.keys(), classes_num.values(), color='green')
plt.title('Land Use Classes & Size', fontsize=16)
plt.xlabel('Classes', fontsize=14)
plt.ylabel('Size', fontsize=14)
plt.tight_layout()
plt.savefig('classes.jpg')
###Output
_____no_output_____
###Markdown
Images
###Code
fig = plt.figure(figsize=(12,10))
ax1 = plt.subplot(331);plt.axis('off');plot.show((rasterio.open(files[0])), ax=ax1, title=classes[0])
ax2 = plt.subplot(332);plt.axis('off');plot.show((rasterio.open(files[1])), ax=ax2, title=classes[1])
ax3 = plt.subplot(333);plt.axis('off');plot.show((rasterio.open(files[2])), ax=ax3, title=classes[2])
ax1 = plt.subplot(334);plt.axis('off');plot.show((rasterio.open(files[3])), ax=ax1, title=classes[3])
ax2 = plt.subplot(335);plt.axis('off');plot.show((rasterio.open(files[4])), ax=ax2, title=classes[4])
ax3 = plt.subplot(336);plt.axis('off');plot.show((rasterio.open(files[5])), ax=ax3, title=classes[5])
ax1 = plt.subplot(337);plt.axis('off');plot.show((rasterio.open(files[6])), ax=ax1, title=classes[6])
ax2 = plt.subplot(338);plt.axis('off');plot.show((rasterio.open(files[7])), ax=ax2, title=classes[7])
ax3 = plt.subplot(339);plt.axis('off');plot.show((rasterio.open(files[8])), ax=ax3, title=classes[8])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Model
###Code
sz = 224
arch=resnet50
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.001, 2)
lrf=learn.lr_find(start_lr=1e-5, end_lr=1e-1)
learn.sched.plot_lr()
learn.sched.plot()
learn.fit(1e-5, 3, cycle_len=1)
learn.fit(1e-5, 3, cycle_len=1, cycle_mult=2)
learn.precompute = False
learn.fit(1e-5, 3, cycle_len=1, cycle_mult=2)
lr = 1e-4
lrs = np.array([lr/12,lr/6,lr])
learn.fit(lrs, 2, cycle_len=1, cycle_mult=2)
learn.unfreeze()
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.fit(lrs, 2, cycle_len=1, cycle_mult=2)
###Output
_____no_output_____
###Markdown
Analyzing results & Visualization
###Code
log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds),0)
log_preds = learn.predict()
preds = np.argmax(log_preds, axis=1)
preds
data.val_ds.fnames[0]
###Output
_____no_output_____
###Markdown
Individual Predictions
###Code
classes_dict = dict(enumerate(data.classes))
classes_dict
fn = data.val_ds.fnames[0]
pic1 = rasterio.open(str(PATH/fn))
plt.axis('off')
plot.show(pic1)
trn_tfms, val_tfms = tfms_from_model(arch, sz)
ds = FilesIndexArrayDataset([fn], np.array([0]), val_tfms, PATH)
dl = DataLoader(ds)
preds = learn.predict_dl(dl)
print(classes_dict[np.argmax(preds)] == 'AnnualCrop')
np.argmax(preds), classes_dict[np.argmax(preds)]
data.val_ds.fnames[2900]
fn = data.val_ds.fnames[2900]
pic2 = rasterio.open(str(PATH/fn))
plt.axis('off')
plot.show(pic2)
trn_tfms, val_tfms = tfms_from_model(arch, sz)
ds = FilesIndexArrayDataset([fn], np.array([2900]), val_tfms, PATH)
dl = DataLoader(ds)
preds = learn.predict_dl(dl)
print(classes_dict[np.argmax(preds)] == 'Pasture')
np.argmax(preds), classes_dict[np.argmax(preds)]
###Output
True
###Markdown
Confusion Matrix
###Code
multi_preds = learn.predict()
preds = np.argmax(multi_preds, axis=1)
preds
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y, preds)
plot_confusion_matrix(cm, data.classes, cmap='Reds',figsize=(12,6))
plt.tight_layout()
###Output
[[596 0 12 6 0 11 0 0 0]
[ 0 548 0 0 0 0 0 0 50]
[ 4 0 558 21 0 28 0 0 0]
[ 8 0 9 484 0 9 0 0 2]
[ 0 0 0 0 497 0 0 0 0]
[ 12 0 44 4 0 337 0 0 0]
[ 0 0 0 0 1 0 586 0 0]
[ 1 0 0 1 0 0 0 445 1]
[ 1 41 0 3 0 2 0 3 507]]
|
_10 optimization/adagrad.ipynb | ###Markdown
Adagrad:label:`chapter_adagrad`In the optimization algorithms we introduced previously, each element of the objective function's independent variables uses the same learning rate at the same time step for self-iteration. For example, if we assume that the objective function is $f$ and the independent variable is a two-dimensional vector $[x_1, x_2]^\top$, each element in the vector uses the same learning rate when iterating. For example, in gradient descent with the learning rate $\eta$, element $x_1$ and $x_2$ both use the same learning rate $\eta$ for iteration:$$x_1 \leftarrow x_1 - \eta \frac{\partial{f}}{\partial{x_1}}, \quadx_2 \leftarrow x_2 - \eta \frac{\partial{f}}{\partial{x_2}}.$$In :numref:`chapter_momentum`, we can see that, when there is a big differencebetween the gradient values $x_1$ and $x_2$, a sufficiently small learning rateneeds to be selected so that the independent variable will not diverge in thedimension of larger gradient values. However, this will cause the independentvariables to iterate too slowly in the dimension with smaller gradientvalues. The momentum method relies on the exponentially weighted moving average(EWMA) to make the direction of the independent variable more consistent, thusreducing the possibility of divergence. In this section, we are going tointroduce Adagrad :cite:`Duchi.Hazan.Singer.2011`, an algorithm that adjusts the learning rate according to thegradient value of the independent variable in each dimension to eliminateproblems caused when a unified learning rate has to adapt to all dimensions. The AlgorithmThe Adagrad algorithm uses the cumulative variable $\boldsymbol{s}_t$ obtained from a square by element operation on the mini-batch stochastic gradient $\boldsymbol{g}_t$. At time step 0, Adagrad initializes each element in $\boldsymbol{s}_0$ to 0. At time step $t$, we first sum the results of the square by element operation for the mini-batch gradient $\boldsymbol{g}_t$ to get the variable $\boldsymbol{s}_t$:$$\boldsymbol{s}_t \leftarrow \boldsymbol{s}_{t-1} + \boldsymbol{g}_t \odot \boldsymbol{g}_t,$$Here, $\odot$ is the symbol for multiplication by element. Next, we re-adjust the learning rate of each element in the independent variable of the objective function using element operations:$$\boldsymbol{x}_t \leftarrow \boldsymbol{x}_{t-1} - \frac{\eta}{\sqrt{\boldsymbol{s}_t + \epsilon}} \odot \boldsymbol{g}_t,$$Here, $\eta$ is the learning rate while $\epsilon$ is a constant added to maintain numerical stability, such as $10^{-6}$. Here, the square root, division, and multiplication operations are all element operations. Each element in the independent variable of the objective function will have its own learning rate after the operations by elements. FeaturesWe should emphasize that the cumulative variable $\boldsymbol{s}_t$ produced by a square by element operation on the mini-batch stochastic gradient is part of the learning rate denominator. Therefore, if an element in the independent variable of the objective function has a constant and large partial derivative, the learning rate of this element will drop faster. On the contrary, if the partial derivative of such an element remains small, then its learning rate will decline more slowly. However, since $\boldsymbol{s}_t$ accumulates the square by element gradient, the learning rate of each element in the independent variable declines (or remains unchanged) during iteration. 
Therefore, when the learning rate declines very fast during early iteration, yet the current solution is still not desirable, Adagrad might have difficulty finding a useful solution because the learning rate will be too small at later stages of iteration.Below we will continue to use the objective function $f(\boldsymbol{x})=0.1x_1^2+2x_2^2$ as an example to observe the iterative trajectory of the independent variable in Adagrad. We are going to implement Adagrad using the same learning rate as the experiment in last section, 0.4. As we can see, the iterative trajectory of the independent variable is smoother. However, due to the cumulative effect of $\boldsymbol{s}_t$, the learning rate continuously decays, so the independent variable does not move as much during later stages of iteration.
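A tiny plain-NumPy illustration of that monotone decay (a sketch of the update rule stated above, kept separate from the MXNet implementation below): the effective step size $\eta/\sqrt{s_t+\epsilon}$ can only shrink, and it shrinks fastest for the coordinate with the larger gradients.

```python
import numpy as np

eta, eps = 0.4, 1e-6
x, s = np.array([-5.0, -2.0]), np.zeros(2)
for t in range(3):
    g = np.array([0.2 * x[0], 4 * x[1]])   # gradient of f(x) = 0.1*x1^2 + 2*x2^2
    s += g * g                             # accumulate squared gradients
    x -= eta / np.sqrt(s + eps) * g        # per-coordinate scaled step
    print(t, eta / np.sqrt(s + eps))       # effective learning rates keep shrinking
```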
###Code
%matplotlib inline
import d2l
import math
from mxnet import np, npx
npx.set_np()
def adagrad_2d(x1, x2, s1, s2):
# The first two terms are the independent variable gradients
g1, g2, eps = 0.2 * x1, 4 * x2, 1e-6
s1 += g1 ** 2
s2 += g2 ** 2
x1 -= eta / math.sqrt(s1 + eps) * g1
x2 -= eta / math.sqrt(s2 + eps) * g2
return x1, x2, s1, s2
def f_2d(x1, x2):
return 0.1 * x1 ** 2 + 2 * x2 ** 2
eta = 0.4
d2l.show_trace_2d(f_2d, d2l.train_2d(adagrad_2d))
###Output
epoch 20, x1 -2.382563, x2 -0.158591
###Markdown
Now, we are going to increase the learning rate to $2$. As we can see, the independent variable approaches the optimal solution more quickly.
###Code
eta = 2
d2l.show_trace_2d(f_2d, d2l.train_2d(adagrad_2d))
###Output
epoch 20, x1 -0.002295, x2 -0.000000
###Markdown
Implementation from ScratchLike the momentum method, Adagrad needs to maintain a state variable of the same shape for each independent variable. We use the formula from the algorithm to implement Adagrad.
###Code
def init_adagrad_states(feature_dim):
s_w = np.zeros((feature_dim, 1))
s_b = np.zeros(1)
return (s_w, s_b)
def adagrad(params, states, hyperparams):
eps = 1e-6
for p, s in zip(params, states):
s[:] += np.square(p.grad)
p[:] -= hyperparams['lr'] * p.grad / np.sqrt(s + eps)
###Output
_____no_output_____
###Markdown
Compared with the experiment in :numref:`chapter_minibatch_sgd`, here we use a larger learning rate to train the model.
###Code
data_iter, feature_dim = d2l.get_data_ch10(batch_size=10)
d2l.train_ch10(adagrad, init_adagrad_states(feature_dim),
{'lr': 0.1}, data_iter, feature_dim);
###Output
loss: 0.243, 0.051 sec/epoch
###Markdown
Concise ImplementationUsing the `Trainer` instance of the algorithm named “adagrad”, we can implement the Adagrad algorithm with Gluon to train models.
###Code
d2l.train_gluon_ch10('adagrad', {'learning_rate': 0.1}, data_iter)
###Output
loss: 0.242, 0.069 sec/epoch
|
Hierarchical+Clustering (1).ipynb | ###Markdown
Proteins example recreated in python from https://rstudio-pubs-static.s3.amazonaws.com/33876_1d7794d9a86647ca90c4f182df93f0e8.html and http://nbviewer.jupyter.org/github/OxanaSachenkova/hclust-python/blob/master/hclust.ipynb
###Code
import numpy as np
from numpy import genfromtxt
data = genfromtxt('http://www.biz.uiowa.edu/faculty/jledolter/DataMining/protein.csv',delimiter=',',names=True,dtype=float)
###Output
_____no_output_____
###Markdown
note numpy also has recfromcsv() and pandas can read_csv, with pandas DF.values giving a numpy array
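For completeness, a sketch of the pandas route mentioned above (keeping only the numeric columns so the resulting array matches the float matrix used below):

```python
import pandas as pd

protein_df = pd.read_csv('http://www.biz.uiowa.edu/faculty/jledolter/DataMining/protein.csv')
protein_array = protein_df.select_dtypes(include='number').values  # DataFrame -> numpy array
```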
###Code
len(data)
len(data.dtype.names)
data.dtype.names
type(data)
data
data_array = data.view((np.float, len(data.dtype.names)))
data_array
data_array = data_array.transpose()
print(data_array)
data_array[1:10]
###Output
_____no_output_____
###Markdown
Samples clustering using scipy. First, we'll implement the clustering using scipy modules.
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram
data_dist = pdist(data_array[1:10]) # computing the distance
data_link = linkage(data_dist) # computing the linkage
dendrogram(data_link,labels=data.dtype.names)
plt.xlabel('Samples')
plt.ylabel('Distance')
plt.suptitle('Samples clustering', fontweight='bold', fontsize=14);
plt.show()
# Compute and plot first dendrogram.
fig = plt.figure(figsize=(8,8))
# x ywidth height
ax1 = fig.add_axes([0.05,0.1,0.2,0.6])
Y = linkage(data_dist, method='single')
Z1 = dendrogram(Y, orientation='right',labels=data.dtype.names) # adding/removing the axes
ax1.set_xticks([])
# Compute and plot second dendrogram.
ax2 = fig.add_axes([0.3,0.71,0.6,0.2])
Z2 = dendrogram(Y)
ax2.set_xticks([])
ax2.set_yticks([])
#Compute and plot the heatmap
axmatrix = fig.add_axes([0.3,0.1,0.6,0.6])
idx1 = Z1['leaves']
idx2 = Z2['leaves']
D = squareform(data_dist)
D = D[idx1,:]
D = D[:,idx2]
im = axmatrix.matshow(D, aspect='auto', origin='lower', cmap=plt.cm.YlGnBu)
axmatrix.set_xticks([])
axmatrix.set_yticks([])
# Plot colorbar.
axcolor = fig.add_axes([0.91,0.1,0.02,0.6])
plt.colorbar(im, cax=axcolor)
plt.show()
###Output
_____no_output_____
###Markdown
the fastcluster module http://math.stanford.edu/~muellner/fastcluster.html?section=0
###Code
! pip install fastcluster
from fastcluster import *
%timeit data_link = linkage(data_array[1:10], method='single', metric='euclidean', preserve_input=True)
dendrogram(data_link,labels=data.dtype.names)
plt.xlabel('Samples')
plt.ylabel('Distance')
plt.suptitle('Samples clustering', fontweight='bold', fontsize=14);
plt.show()
###Output
The slowest run took 5.77 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 45.1 µs per loop
|
tomolab/Examples/doc_Synthetic_Phantoms.ipynb | ###Markdown
Example of Digital Phantoms that can be programmatically generated
###Code
import tomolab as tl
from tomolab.Visualization.Visualization import TriplanarView
from tomolab.Visualization.Visualization import VolumeRenderer
import numpy as np
import matplotlib.pyplot as plt
res = (500/128,500/128,500/47)
###Output
_____no_output_____
###Markdown
CYLINDER PHANTOM
###Code
activity = tl.DataSources.Synthetic.Shapes.uniform_cylinder(
shape=(128, 128, 47),
size=(500, 500, 500),
center=(250, 250, 250),
radius=150,
length=400,
inner_value=1.0,
outer_value=0.0,
axis=2,
)
activity.display(res = res)
VolumeRenderer(activity,res=res).display()
###Output
/opt/conda/lib/python3.7/site-packages/ipyvolume/serialize.py:81: RuntimeWarning: invalid value encountered in true_divide
gradient = gradient / np.sqrt(gradient[0]**2 + gradient[1]**2 + gradient[2]**2)
###Markdown
SPHERE PHANTOM
###Code
activity = tl.DataSources.Synthetic.Shapes.uniform_sphere(
shape=(128, 128, 47),
size=(500, 500, 500),
center=(250, 250, 250),
radius=150,
inner_value=1.0,
outer_value=0.0,
)
activity.display(res = res)
VolumeRenderer(activity,res=res).display()
###Output
_____no_output_____
###Markdown
SPHERES RING PHANTOM
###Code
activity = tl.DataSources.Synthetic.Shapes.uniform_spheres_ring(
shape=(128, 128, 47),
size=(500, 500, 500),
center=(250, 250, 250),
ring_radius=170,
min_radius=20,
max_radius=60,
N_elems=8,
inner_value=1.0,
outer_value=0.0,
axis=2,
)
activity.display(res = res)
VolumeRenderer(activity,res=res).display()
###Output
_____no_output_____
###Markdown
CYLINDERS RING PHANTOM
###Code
activity = tl.DataSources.Synthetic.Shapes.uniform_cylinders_ring(
shape=(128, 128, 47),
size=(500, 500, 500),
center=(250, 250, 250),
ring_radius=170,
length=400,
min_radius=20,
max_radius=60,
N_elems=8,
inner_value=1.0,
outer_value=0.0,
axis=2,
)
activity.display(res = res)
VolumeRenderer(activity,res=res).display()
###Output
_____no_output_____
###Markdown
COMPLEX PHANTOM
###Code
activity = tl.DataSources.Synthetic.Shapes.complex_phantom(
shape=(128, 128, 47),
size=(500, 500, 500),
center=(250, 250, 250),
radius=180,
insert_radius=120,
hole_radius=50,
length=450,
insert_length=225,
insert_min_radius=10,
insert_max_radius=40,
insert_N_elems=8,
inner_value=1.0,
insert_value=1.0,
outer_value=0.0,
axis=2,
)
activity.display(res = res)
VolumeRenderer(activity,res=res).display()
###Output
_____no_output_____ |
model_notebooks/genre_embedding_exploration.ipynb | ###Markdown
Unpickling DF
###Code
import pickle
df = pickle.load(open("./data/song_list_v5_hashed.pkl","rb"))
df = df[df.columns[0:18]]
df.head()
###Output
_____no_output_____
###Markdown
Create Genre List
###Code
#clean up the genre list column for string matching formatting consistency
new_genres_list = []
for index,genre_list in enumerate(df["genres"]):
genre_list = genre_list.split(",")
new_genre_list = []
for genre in genre_list:
genre = genre.strip("]")
genre = genre.strip("[")
genre = genre.strip(" ")
genre = genre.strip("'")
new_genre_list.append(genre)
new_genres_list.append(new_genre_list)
df["genres"] = new_genres_list
all_genres = []
for genres in df["genres"]:
for genre in genres:
all_genres.append(genre)
len(all_genres)
###Output
_____no_output_____
###Markdown
Limited Embedding. Create a bitwise embedding for fast vector addition of arbitrary elements in a set > Mimic inverted indices in a DB
###Code
## Get a set of items
items = sorted(list(set(all_genres)))
items[0:10]
## Create a lookup table for these items
import numpy as np
def create_lookup(item_set):
lookup = {} # initialize empty lookup
max_len = len(item_set) # get size of empty array
base_array = np.zeros(max_len) # initialize an empty array to copy for each embedding/vector
# Iterate through each item in set and create unique embedding, storing embeddings in dictionary
for index, item in enumerate(item_set):
temp_array = base_array.copy()
temp_array[index] = 1
lookup[item] = temp_array
return lookup
lookup_dict = create_lookup(items)
for item in list(lookup_dict.items())[0:10]:
print(item[1])
def compress_list(items, lookup_dict=lookup_dict):
vec_list = np.array([lookup_dict[item] for item in items])
return np.sum(vec_list, axis=0)
## Example usage with a toy item set ('a'..'d' are not real genres, so build a separate lookup)
toy_lookup = create_lookup(['a', 'b', 'c', 'd'])
# Given four rows of data:
row_a = ['a', 'c']
row_b = ['a', 'b']
row_c = ['a', 'c', 'd']
row_d = ['b', 'c', 'd']
# Calculate their respective vectors
vec_a = compress_list(row_a, toy_lookup)
vec_b = compress_list(row_b, toy_lookup)
vec_c = compress_list(row_c, toy_lookup)
vec_d = compress_list(row_d, toy_lookup)
df['genre_embed'] = df.genres.apply(compress_list)
df['genre_embed'][0]
for index,val in enumerate(df['genre_embed'][0]):
if val == 1:
print(index,val)
for index,val in enumerate(df['genre_embed'][0]):
if val == 1:
print(index,val)
df["genre_embed"][0][1226]
df["genres"].loc[1226]
df["genres"].loc[3065]
df["genres"][0]
items[3065]
###Output
_____no_output_____
###Markdown
Pickling to Try out on KNN model in another notebook w/ Genre Embeds
###Code
# pickle.dump(df, open( "./data/song_list_v6_genre_embeds.pkl", "wb" ) )
###Output
_____no_output_____
###Markdown
Pickling without Genre Embeds
###Code
pickle.dump(df[df.columns[0:18]],open( "./data/song_list_v6", "wb" ))
###Output
_____no_output_____
###Markdown
Using with Pandas: compress_list can be applied to your genre series via `df['genre_embed'] = df.genres.apply(compress_list)`. Compare vectors: use cosine distance to compare vectors (this will be more meaningful if the set is pre-ordered by similarity prior to making lookup_dict).
###Code
from scipy.spatial.distance import cosine
cosine(vec_a, vec_a), cosine(vec_a, vec_b), cosine(vec_a, vec_c), cosine(vec_b, vec_c), cosine(vec_a, vec_d), cosine(vec_c, vec_d)
# counter = -1
for index,genre_list in enumerate(df["genres"][0:1000]):
# counter += 1
genre_list = genre_list.split(",")
for genre in genre_list:
genre = genre.strip("]")
genre = genre.strip("[")
genre = genre.strip(" ")
print(genre)
# # genre = '"' + genre + '"'
# for column in df.columns[18:]:
# if column == genre:
# print(column,genre)
# # df[column].loc[counter] = 1
# else:
# continue
###Output
_____no_output_____ |
.ipynb_checkpoints/AGII_chap06_modelling_monopole-checkpoint.ipynb | ###Markdown
AG Dynamics of the Earth Jupyter notebooks Georg Kaufmann Angewandte Geophysik II: Kap 6: Magnetik Magnetfeldmodellierung----*Georg Kaufmann, Geophysics Section, Institute of Geological Sciences, Freie Universität Berlin, Germany*
###Code
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
# define profile
xmin = -500.
xmax = +500.
xstep = 101
x = np.linspace(xmin,xmax,xstep)
###Output
_____no_output_____
###Markdown
For the **magnetic induction** $\vec{B}$ [T], we define$$\vec{B} = \mu_0 \vec{H}$$with $\mu_0=4 \pi \times 10^{-7}$ Vs/A/m the **permeability of vacuum**, and $\vec{H}$ [A/m] the **magnetic field strength**.For the **magnetisation** $\vec{M}$ [A/m] we define$$\vec{M} = \chi \vec{H}$$with $\chi$ [-] the **susceptibility**. Monopole$$\begin{array}{rcl} B_z & = & \frac{\mu_0}{4\pi} M \pi R^2 \frac{z}{r^3} \\ B_x & = & \frac{\mu_0}{4\pi} M \pi R^2 \frac{x}{r^3}\end{array}$$
###Code
def B_monopole(x,D=100.,R=40.,M=0.04):
mu0 = 4.e-7*np.pi
r = np.sqrt(x**2 + D**2)
# magnetic induction of monopole
Bx = mu0 / 4. / np.pi * M * np.pi * R**2 * x / r**3
Bz = mu0 / 4. / np.pi * M * np.pi * R**2 * D / r**3
return Bx,Bz
def plot_monopole(f1=False,f2=False,f3=False,f4=False,f5=False):
D = [100,100,100,100,100]
R = [40,40,40,30,50]
M = [0.04,0.02,0.01,0.04,0.04]
fig,axs = plt.subplots(2,1,figsize=(12,8))
axs[0].set_xlim([-500,500])
axs[0].set_xticks([x for x in np.linspace(-400,400,9)])
axs[0].set_xlabel('Profile [m]')
axs[0].set_ylim([-1.5,2.5])
axs[0].set_yticks([y for y in np.linspace(-1.0,2.0,5)])
axs[0].set_ylabel('Bx,Bz [nT]')
axs[0].plot(x,1.e9*B_monopole(x)[0],linewidth=1.0,linestyle='-',color='black',label='B$_x$ - monopole')
axs[0].plot(x,1.e9*B_monopole(x)[1],linewidth=1.0,linestyle=':',color='black',label='B$_z$ - monopole')
if (f1):
axs[0].plot(x,1.e9*B_monopole(x,D=D[0],R=R[0],M=M[0])[0],linewidth=2.0,linestyle='-',color='black',
label='D='+str(D[0])+',R='+str(R[0])+',M='+str(M[0]))
axs[0].plot(x,1.e9*B_monopole(x,D=D[0],R=R[0],M=M[0])[1],linewidth=2.0,linestyle=':',color='black')
if (f2):
axs[0].plot(x,1.e9*B_monopole(x,D=D[1],R=R[1],M=M[1])[0],linewidth=2.0,linestyle='-',color='red',
label='D='+str(D[1])+',R='+str(R[1])+',M='+str(M[1]))
axs[0].plot(x,1.e9*B_monopole(x,D=D[1],R=R[1],M=M[1])[1],linewidth=2.0,linestyle=':',color='red')
if (f3):
axs[0].plot(x,1.e9*B_monopole(x,D=D[2],R=R[2],M=M[2])[0],linewidth=2.0,linestyle='-',color='orange',
label='D='+str(D[2])+',R='+str(R[2])+',M='+str(M[2]))
axs[0].plot(x,1.e9*B_monopole(x,D=D[2],R=R[2],M=M[2])[1],linewidth=2.0,linestyle=':',color='orange')
if (f4):
axs[0].plot(x,1.e9*B_monopole(x,D=D[3],R=R[3],M=M[3])[0],linewidth=2.0,linestyle='-',color='green',
label='D='+str(D[3])+',R='+str(R[3])+',M='+str(M[3]))
axs[0].plot(x,1.e9*B_monopole(x,D=D[3],R=R[3],M=M[3])[1],linewidth=2.0,linestyle=':',color='green')
if (f5):
axs[0].plot(x,1.e9*B_monopole(x,D=D[4],R=R[4],M=M[4])[0],linewidth=2.0,linestyle='-',color='blue',
label='D='+str(D[4])+',R='+str(R[4])+',M='+str(M[4]))
axs[0].plot(x,1.e9*B_monopole(x,D=D[4],R=R[4],M=M[4])[1],linewidth=2.0,linestyle=':',color='blue')
axs[0].legend()
axs[1].set_xlim([-500,500])
axs[1].set_xticks([x for x in np.linspace(-400,400,9)])
#axs[1].set_xlabel('Profile [m]')
axs[1].set_ylim([250,0])
axs[1].set_yticks([y for y in np.linspace(0.,200.,5)])
axs[1].set_ylabel('Depth [m]')
angle = [theta for theta in np.linspace(0,2*np.pi,41)]
if (f1):
axs[1].plot(R[0]*np.cos(angle),D[0]+R[0]*np.sin(angle),linewidth=2.0,linestyle='-',color='black')
if (f2):
axs[1].plot(R[1]*np.cos(angle),D[1]+R[1]*np.sin(angle),linewidth=2.0,linestyle='-',color='red')
if (f3):
axs[1].plot(R[2]*np.cos(angle),D[2]+R[2]*np.sin(angle),linewidth=2.0,linestyle='-',color='orange')
if (f4):
axs[1].plot(R[3]*np.cos(angle),D[3]+R[3]*np.sin(angle),linewidth=2.0,linestyle='-',color='green')
if (f5):
axs[1].plot(R[4]*np.cos(angle),D[4]+R[4]*np.sin(angle),linewidth=2.0,linestyle='-',color='blue')
plot_monopole(f1=True)
# call interactive module
w = dict(
    f1=widgets.Checkbox(value=True,description='one',continuous_update=False,disabled=False),
    #a1=widgets.FloatSlider(min=0.,max=2.,step=0.1,value=1.0),
    f2=widgets.Checkbox(value=False,description='two',continuous_update=False,disabled=False),
    f3=widgets.Checkbox(value=False,description='three',continuous_update=False,disabled=False),
    f4=widgets.Checkbox(value=False,description='four',continuous_update=False,disabled=False),
    f5=widgets.Checkbox(value=False,description='five',continuous_update=False,disabled=False))
output = widgets.interactive_output(plot_monopole, w)
box = widgets.HBox([widgets.VBox([*w.values()]), output])
display(box)
###Output
_____no_output_____ |
.ipynb_checkpoints/using_selenium-checkpoint.ipynb | ###Markdown
Downloading Meme Templates using SeleniumLet's say you have to build an AI meme generator. First, you need to download a LOT of memes. Here are a few meme ids that are used by the ImgFlip API: [Imgflip](https://api.imgflip.com/popular_meme_ids). Imgflip provides us with the following API. URL: https://api.imgflip.com/get_memes Method: GET. Calling a website's API is the fastest way to retrieve web data, since the developers of the website have already taken care of everything for us. But, of course, there is a catch: these APIs are usually paid, and there is a limit on the number of queries you can make to the endpoints based on your payment plan. Imgflip has a free tier: https://api.imgflip.com/
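Before wiring up any scraping, it can help to confirm what this endpoint actually returns. Below is a minimal, self-contained sketch; the response layout used here (`data` → `memes`, each entry with `id`, `name` and `url` fields) follows Imgflip's documented `get_memes` response and should be treated as an assumption to verify against the live API.
```python
import requests

# Fetch the list of popular meme templates from the public endpoint.
resp = requests.get('https://api.imgflip.com/get_memes')
resp.raise_for_status()

# Assumed layout: {"success": true, "data": {"memes": [{"id": ..., "name": ..., "url": ...}, ...]}}
memes = resp.json().get('data', {}).get('memes', [])
for meme in memes[:5]:
    print(meme.get('id'), meme.get('name'), meme.get('url'))
```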
###Code
import requests
response = requests.get('https://api.imgflip.com/get_memes')
data = response.json()
data
MEME_IDS = [ '114585149',
'438680',
'100777631',
'181913649',
'161865971',
'217743513',
'131087935',
'119139145',
'93895088',
'112126428',
'97984',
'1035805',
'155067746',
'4087833',
'91538330',
'124822590',
'178591752',
'124055727',
'87743020',
'222403160',
'102156234',
'188390779',
'89370399',
'129242436']
###Output
_____no_output_____
###Markdown
Using Selenium for Web AutomationWe will use Selenium and ChromeDriver for downloading the memes. Selenium just mimics human behavior as if you are browsing the web. Apart from Web Automation, Selenium is also used in automated testing and robotic process automation (RPA).[More here](https://realpython.com/modern-web-automation-with-python-and-selenium/)
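The cells below lean on fixed `time.sleep` pauses. A more robust Selenium pattern is an explicit wait, which polls until an element is actually present before continuing. Here is a small illustrative sketch; the `(By.NAME, 'q')` locator for Google's search box is an assumption about the current page markup and may need adjusting.
```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_search_box(driver, timeout=10):
    # Wait up to `timeout` seconds for the search box instead of sleeping blindly.
    return WebDriverWait(driver, timeout).until(
        EC.presence_of_element_located((By.NAME, 'q'))
    )
```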
###Code
import os
import time
from selenium.webdriver.chrome.webdriver import WebDriver
from selenium.webdriver.chrome.options import Options
print(os.getcwd())
try:
os.mkdir('Downloads')
except FileExistsError:
print('Directory already exists...')
download_directory = os.path.join(os.getcwd(),'Downloads')
download_directory
options = Options()
options.add_argument('--start-maximized')
try:
# No need to specify executable_path if you are using windows
# and chromedriver.exe is in the current working directory.
driver = WebDriver(executable_path='./chromedriver',options=options)
driver.get('https://www.google.co.in')
time.sleep(10)
except Exception as err:
print(err)
finally:
driver.quit()
meme = MEME_IDS[1]
# No need to specify executable_path if you are using windows
# and chromedriver.exe is in the current working directory.
driver = WebDriver(executable_path='./chromedriver',options=options)
driver.get('https://www.google.co.in')
# Google's search box is the input named 'q'; the selectors used below are
# assumptions about the current page markup and may need adjusting.
search_bar = driver.find_element_by_name('q')
search_bar.send_keys(f'imgflip meme {meme}')
# If the button is not clickable (e.g. hidden behind the suggestions dropdown),
# search_bar.submit() is a simple fallback.
search_button = driver.find_element_by_name('btnK')
search_button.click()
time.sleep(2)
# Collect the result links and keep the first one that points at an imgflip meme page.
search_results = driver.find_elements_by_css_selector('div#search a')
url = None
for i in search_results:
    href = i.get_attribute('href')
    if href and 'imgflip.com/meme' in href:
        url = href
        break
if url is None:
    raise RuntimeError('No imgflip meme page found in the search results')
driver.get(url)
# Grab the template image hosted on i.imgflip.com (selector is an assumption).
img = driver.find_element_by_css_selector('img[src*="i.imgflip.com"]')
response = requests.get(img.get_attribute('src'))
with open(os.path.join(download_directory,f'{meme}.jpg'), 'wb') as file:
    file.write(response.content)
driver.quit()
# Path of the file we just saved, relative to the download directory.
path_of_file_downloaded = os.sep + f'{meme}.jpg'
import cv2
from matplotlib import pyplot as plt
%matplotlib inline
img = cv2.imread(download_directory + path_of_file_downloaded)
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(12,9))
plt.imshow(rgb)
plt.tick_params(
axis='both',
which='both',
bottom=False,
left=False,
top=False,
labelleft=False,
labelbottom=False)
plt.show()
###Output
_____no_output_____ |
notebooks/OrionExplorer - Usage Example.ipynb | ###Markdown
OrionExplorer - Usage Example This is a demo notebook showing how to use the `OrionExplorer` on one of the signals from the Demo Dataset. In this case, we will be processing the `S-1` signal with the `lstm_dynamic_threshold.json` pipeline. Afterwards, we will explore the found events and add some comments on them. 1. Create an OrionExplorer Instance In this first step, we set up the environment, import the `OrionExplorer` and create an instance, passing the name of the database which we want to connect to.
###Code
import logging;
logging.basicConfig(level=logging.ERROR)
logging.getLogger().setLevel(level=logging.ERROR)
import warnings
warnings.simplefilter("ignore")
from orion.explorer import OrionExplorer
explorer = OrionExplorer(database='orion-usage-example')
###Output
_____no_output_____
###Markdown
In this case we will drop the database before starting, to make sure that we are working on a clean environment.**WARNING**: This will remove all the data that exists in this database!
###Code
explorer.drop_database()
###Output
_____no_output_____
###Markdown
2. Add the pipeline that we will be usingThe second step is to register the pipeline that we are going to use by calling the `add_pipeline` method of our `OrionExplorer` instance, passing:* the name that we want to give to this pipeline.* the path to the pipeline JSON.The `add_pipeline` call also returns the registered pipeline object, which can be captured if we want to use it later on.
###Code
explorer.add_pipeline(
'lstm_dynamic_threshold',
'../orion/pipelines/lstm_dynamic_threshold.json'
)
###Output
_____no_output_____
###Markdown
Afterwards, we can obtain the list of pipelines to see if it has been properly registered
###Code
explorer.get_pipelines()
###Output
_____no_output_____
###Markdown
3. Register a datasetNow we will register a new dataset for the signal that we want to process.In order to do this, we will call the `add_dataset` method from our `OrionExplorer` instance, passing:* the name that we are giving to the dataset* the signal that we want to use
###Code
explorer.add_dataset('S-1', 'S-1')
###Output
_____no_output_____
###Markdown
Afterwards we can check that the dataset was properly registered
###Code
explorer.get_datasets()
###Output
_____no_output_____
###Markdown
4. Run the pipeline on the datasetOnce the pipeline and the dataset are registered, we can start the analysis.
###Code
explorer.analyze('S-1', 'lstm_dynamic_threshold')
###Output
Using TensorFlow backend.
###Markdown
5. Analyze the resultsOnce the execution has finished, we can explore the Dataruns and the detected Events.
###Code
explorer.get_dataruns()
explorer.get_events()
###Output
_____no_output_____ |
Big-Data-Clusters/CU8/Public/content/repair/tsg024-name-node-is-in-safe-mode.ipynb | ###Markdown
TSG024 - Namenode is in safe mode=================================HDFS can get itself into Safe mode. For example, if too many Pods are re-cycled too quickly in the Storage Pool then Safe mode may be automatically enabled.When starting a Spark session, the user may see (for example, when trying to start a PySpark or PySpark3 session in a notebook from Azure Data Studio):> The code failed because of a fatal error: Error sending http request> and maximum retry encountered..>> Some things to try: a) Make sure Spark has enough available resources> for Jupyter to create a Spark context. b) Contact your Jupyter> administrator to make sure the Spark magics library is configured> correctly. c) Restart the kernel.Use this notebook to run a report to understand more about HDFS, and optionally move the cluster out of Safe mode if it is safe to do so.Steps----- Common functionsDefine helper functions used in this notebook.
###Code
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
first_run = True
rules = None
debug_logging = False
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
global first_run
global rules
if first_run:
first_run = False
rules = load_rules()
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary == None:
print(f"The path used to search for '{cmd_actual[0]}' was:")
print(sys.path)
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if rules is not None:
apply_expert_rules(line)
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# apply expert rules (to run follow-on notebooks), based on output
#
if rules is not None:
apply_expert_rules(line_decoded)
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
def load_json(filename):
"""Load a json file from disk and return the contents"""
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def load_rules():
"""Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable"""
# Load this notebook as json to get access to the expert rules in the notebook metadata.
#
try:
j = load_json("tsg024-name-node-is-in-safe-mode.ipynb")
except:
pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename?
else:
if "metadata" in j and \
"azdata" in j["metadata"] and \
"expert" in j["metadata"]["azdata"] and \
"expanded_rules" in j["metadata"]["azdata"]["expert"]:
rules = j["metadata"]["azdata"]["expert"]["expanded_rules"]
rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.
# print (f"EXPERT: There are {len(rules)} rules to evaluate.")
return rules
def apply_expert_rules(line):
"""Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so
inject a 'HINT' to the follow-on SOP/TSG to run"""
global rules
for rule in rules:
notebook = rule[1]
cell_type = rule[2]
output_type = rule[3] # i.e. stream or error
output_type_name = rule[4] # i.e. ename or name
output_type_value = rule[5] # i.e. SystemExit or stdout
details_name = rule[6] # i.e. evalue or text
expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it!
if debug_logging:
print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.")
if re.match(expression, line, re.DOTALL):
if debug_logging:
print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'".format(output_type_name, output_type_value, expression, notebook))
match_found = True
display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond']}
error_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']]}
install_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb']}
###Output
_____no_output_____
###Markdown
Instantiate Kubernetes client
###Code
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
from kubernetes import client, config
from kubernetes.stream import stream
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
except ImportError:
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
###Output
_____no_output_____
###Markdown
Get the namespace for the big data clusterGet the namespace of the Big Data Cluster from the Kubernetes API.**NOTE:**If there is more than one Big Data Cluster in the target Kubernetes cluster, then either:- set \[0\] to the correct value for the big data cluster.- set the environment variable AZDATA\_NAMESPACE, before starting Azure Data Studio.
###Code
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
###Output
_____no_output_____
###Markdown
Get the name of the namenode pod
###Code
namenode_pod = run(f'kubectl get pod --selector=role=namenode -n {namespace} -o jsonpath={{.items[0].metadata.name}}', return_output=True)
print ('Namenode pod name: ' + namenode_pod)
###Output
_____no_output_____
###Markdown
Get the `hdfs dfsadmin` report
###Code
name=namenode_pod
container='hadoop'
command='hdfs dfsadmin -report'
string=stream(api.connect_get_namespaced_pod_exec, name, namespace, command=['/bin/sh', '-c', command], container=container, stderr=True, stdout=True)
print(string)
###Output
_____no_output_____
###Markdown
Set the text that identifies this issue
###Code
precondition_text="Safe mode is ON"
###Output
_____no_output_____
###Markdown
PRECONDITION CHECK
###Code
if precondition_text not in string:
raise Exception("PRECONDITION NON-MATCH: 'tsg024-name-node-is-in-safe-mode' is not a match for an active problem")
print("PRECONDITION MATCH: 'tsg024-name-node-is-in-safe-mode' is a match for an active problem in this cluster")
###Output
_____no_output_____
###Markdown
Resolution----------NOTE: It is only safe to take the namenode out of safe mode once it has been determined that there are no missing, corrupt or under-replicated blocks that should not be ignored. Use `hdfs dfsadmin -report` and `hdfs fsck` to understand more about missing, corrupt or under-replicated blocks. Move the namenode out of safe mode
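Before forcing the namenode out of Safe mode, it can be worth re-checking the current state and block health directly on the namenode. The sketch below simply reuses the `stream` pattern (and the `name`, `namespace` and `container` variables) from the cells above to run `hdfs dfsadmin -safemode get` and `hdfs fsck /`; it is an optional, illustrative pre-check rather than part of the original troubleshooting steps.
```python
# Optional pre-check: confirm the safe mode state and inspect block health
# before leaving safe mode. Reuses name/namespace/container defined above.
for check in ['hdfs dfsadmin -safemode get', 'hdfs fsck /']:
    result = stream(api.connect_get_namespaced_pod_exec, name, namespace,
                    command=['/bin/sh', '-c', check],
                    container=container, stderr=True, stdout=True)
    print(f'=== {check} ===')
    print(result)
```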
###Code
command='hdfs dfsadmin -safemode leave'
string=stream(api.connect_get_namespaced_pod_exec, name, namespace, command=['/bin/sh', '-c', command], container=container, stderr=True, stdout=True)
print(string)
###Output
_____no_output_____
###Markdown
Validate - Verify the namenode is no longer in safe modeValidate that the text ‘Safe mode is ON’ is no longer in the `hdfs dfsadmin -report` output
###Code
command='hdfs dfsadmin -report'
string=stream(api.connect_get_namespaced_pod_exec, name, namespace, command=['/bin/sh', '-c', command], container=container, stderr=True, stdout=True)
if precondition_text in string:
raise SystemExit ('FAILED - hdfs dfsadmin -report output still contains: ' + precondition_text)
print ('SUCCESS - hdfs dfsadmin -report output no longer contains: ' + precondition_text)
print('Notebook execution complete.')
###Output
_____no_output_____ |
python/notebooks/vertex_pipeline_pyspark.ipynb | ###Markdown
Run Dataproc Templates from Vertex AI Pipelines OverviewThis notebook shows how to build a Vertex AI Pipeline to run a Dataproc Template using the DataprocPySparkBatchOp component. References- [DataprocPySparkBatchOp reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.0/google_cloud_pipeline_components.experimental.dataproc.html)- [Kubeflow SDK Overview](https://www.kubeflow.org/docs/components/pipelines/sdk/sdk-overview/)- [Dataproc Serverless in Vertex AI Pipelines tutorial](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage3/get_started_with_dataproc_serverless_pipeline_components.ipynb)- [Build a Vertex AI Pipeline](https://cloud.google.com/vertex-ai/docs/pipelines/build-pipeline)This notebook is built to run in a Vertex AI User-Managed Notebook using the default Compute Engine Service Account. Check the Dataproc Serverless in Vertex AI Pipelines tutorial linked above to learn how to set up a different Service Account. PermissionsMake sure that the service account used to run the notebook has the following roles:- roles/aiplatform.serviceAgent- roles/aiplatform.customCodeServiceAgent- roles/storage.objectCreator- roles/storage.objectViewer- roles/dataproc.editor- roles/dataproc.worker Install the required packages
###Code
import os
# Google Cloud notebooks requires dependencies to be installed with '--user'
! pip3 install --upgrade google-cloud-pipeline-components kfp --user -q
###Output
_____no_output_____
###Markdown
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Import dependencies
###Code
import google.cloud.aiplatform as aiplatform
from kfp import dsl
from kfp.v2 import compiler
from datetime import datetime
###Output
_____no_output_____
###Markdown
Change working directory to the Dataproc Templates python folder
###Code
WORKING_DIRECTORY = "/home/jupyter/dataproc-templates/python/"
%cd /home/jupyter/dataproc-templates/python
###Output
_____no_output_____
###Markdown
Set Google Cloud properties
###Code
get_project_id = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = get_project_id[0]
REGION = "<region>"
GCS_STAGING_LOCATION = "<gs://bucket>"
SUBNET = "projects/<project>/regions/<region>/subnetworks/<subnet>"
###Output
_____no_output_____
###Markdown
Build Dataproc Templates python package
###Code
PACKAGE_EGG_FILE = "dist/dataproc_templates_distribution.egg"
! python ./setup.py bdist_egg --output=$PACKAGE_EGG_FILE
###Output
_____no_output_____
###Markdown
Copy package to the GCS bucketFor this, make sure that the service account used to run the notebook has the following roles: - roles/storage.objectCreator - roles/storage.objectViewer
###Code
! gsutil cp main.py $GCS_STAGING_LOCATION/
! gsutil cp -r $PACKAGE_EGG_FILE $GCS_STAGING_LOCATION/dist/
###Output
_____no_output_____
###Markdown
Set Dataproc Templates properties
###Code
PIPELINE_ROOT = GCS_STAGING_LOCATION + "/pipeline_root/dataproc_pyspark"
MAIN_PYTHON_FILE = GCS_STAGING_LOCATION + "/main.py"
PYTHON_FILE_URIS = [GCS_STAGING_LOCATION + "/dist/dataproc_templates_distribution.egg"]
JARS = ["gs://spark-lib/bigquery/spark-bigquery-latest_2.12.jar"]
BATCH_ID = "dataproc-templates-" + datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Choose template and set template arguments GCSTOBIGQUERY is chosen in this notebook as an example. Check the arguments in the template's documentation.
###Code
TEMPLATE_SPARK_ARGS = [
"--template=GCSTOBIGQUERY",
"--gcs.bigquery.input.format=<format>",
"--gcs.bigquery.input.location=<gs://bucket/path>",
"--gcs.bigquery.output.dataset=<dataset>",
"--gcs.bigquery.output.table=<table>",
"--gcs.bigquery.temp.bucket.name=<bucket>"
]
###Output
_____no_output_____
###Markdown
Build pipeline and run Dataproc Template on Vertex AI PipelinesFor this, make sure that the service account used to run the notebook has the following roles: - roles/dataproc.editor - roles/dataproc.worker
###Code
aiplatform.init(project=PROJECT_ID, staging_bucket=GCS_STAGING_LOCATION)
@dsl.pipeline(
name="dataproc-templates-pyspark",
description="An example pipeline that uses DataprocPySparkBatchOp to run a PySpark Dataproc Template batch workload",
)
def pipeline(
batch_id: str = BATCH_ID,
project_id: str = PROJECT_ID,
location: str = REGION,
main_python_file_uri: str = MAIN_PYTHON_FILE,
python_file_uris: list = PYTHON_FILE_URIS,
jar_file_uris: list = JARS,
subnetwork_uri: str = SUBNET,
args: list = TEMPLATE_SPARK_ARGS,
):
from google_cloud_pipeline_components.experimental.dataproc import \
DataprocPySparkBatchOp
_ = DataprocPySparkBatchOp(
project=project_id,
location=location,
batch_id=batch_id,
main_python_file_uri=main_python_file_uri,
python_file_uris=python_file_uris,
jar_file_uris=jar_file_uris,
subnetwork_uri=subnetwork_uri,
args=args,
)
compiler.Compiler().compile(pipeline_func=pipeline, package_path="pipeline.json")
pipeline = aiplatform.PipelineJob(
display_name="pipeline",
template_path="pipeline.json",
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
pipeline.run()
###Output
_____no_output_____ |
.ipynb_checkpoints/project_notebook-checkpoint.ipynb | ###Markdown
Implementing a Route PlannerIn this project you will use A\* search to implement a "Google-maps" style route planning algorithm.
###Code
# Run this cell first!
from helpers import Map, load_map, show_map
from student_code import shortest_path
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Map Basics
###Code
map_10 = load_map('map-10.pickle')
show_map(map_10)
###Output
_____no_output_____
###Markdown
The map above (run the code cell if you don't see it) shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other but they are not connected to the rest of the road network. On the graph above, the edge between 2 nodes(intersections) represents a literal straight road not just an abstract connection of 2 cities.These `Map` objects have two properties you will want to use to implement A\* search: `intersections` and `roads`**Intersections**The `intersections` are represented as a dictionary. In this example, there are 10 intersections, each identified by an x,y coordinate. The coordinates are listed below. You can hover over each dot in the map above to see the intersection number.
###Code
map_10.intersections
import math
def euclidean_dist(a_coor_lst, b_coor_lst):
return math.sqrt(sum([(a - b)**2 for a, b in zip(a_coor_lst, b_coor_lst)]))
euclidean_dist(map_10.intersections[3], map_10.intersections[5])
###Output
_____no_output_____
###Markdown
**Roads**The `roads` property is a list where, if `i` is an intersection, `roads[i]` contains a list of the intersections that intersection `i` connects to.
###Code
# this shows that intersection 0 connects to intersections 7, 6, and 5
map_10.roads[0]
# This shows the full connectivity of the map
map_10.roads
def create_graph(m):
adj_lst = [list() for _ in range(len(m.roads))]
for source in range(len(m.roads)):
for dest in m.roads[source]:
cost = euclidean_dist(m.intersections[source], m.intersections[dest])
adj_lst[source].append((dest, cost))
return adj_lst
create_graph(map_10)
# map_40 is a bigger map than map_10
map_40 = load_map('map-40.pickle')
show_map(map_40)
###Output
_____no_output_____
###Markdown
Advanced VisualizationsThe map above shows a network of roads which spans 40 different intersections (labeled 0 through 39). The `show_map` function which generated this map also takes a few optional parameters which might be useful for visualizing the output of the search algorithm you will write.* `start` - The "start" node for the search algorithm.* `goal` - The "goal" node.* `path` - An array of integers which corresponds to a valid sequence of intersection visits on the map.
###Code
# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
###Output
_____no_output_____
###Markdown
Writing your algorithmYou should open the file `student_code.py` in another tab and work on your algorithm there. Do that by selecting `File > Open` and then selecting the appropriate file.The algorithm you write will be responsible for generating a `path` like the one passed into `show_map` above. In fact, when called with the same map, start and goal as above, your algorithm should produce the path `[5, 16, 37, 12, 34]````bash> shortest_path(map_40, 5, 34)[5, 16, 37, 12, 34]```
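For orientation, here is a minimal sketch of what an A\* based `shortest_path` could look like, using the `Map` API shown above (`M.intersections` for coordinates, `M.roads` for adjacency) and the straight-line distance as the heuristic. It is an illustrative sketch, not the project's reference solution.
```python
import heapq
import math

def heuristic(M, a, b):
    # Straight-line (Euclidean) distance between two intersections.
    (x1, y1), (x2, y2) = M.intersections[a], M.intersections[b]
    return math.hypot(x2 - x1, y2 - y1)

def shortest_path(M, start, goal):
    frontier = [(0.0, start)]          # priority queue ordered by f = g + h
    came_from = {start: None}
    g_score = {start: 0.0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            # Walk back from the goal to the start to rebuild the path.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for neighbor in M.roads[current]:
            tentative_g = g_score[current] + heuristic(M, current, neighbor)
            if tentative_g < g_score.get(neighbor, float('inf')):
                g_score[neighbor] = tentative_g
                came_from[neighbor] = current
                heapq.heappush(frontier, (tentative_g + heuristic(M, neighbor, goal), neighbor))
    return None
```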
###Code
shortest_path(map_40, 5, 34)
path = shortest_path(map_40, 5, 34)
if path == [5, 16, 37, 12, 34]:
print("great! Your code works for these inputs!")
else:
print("something is off, your code produced the following:")
print(path)
###Output
_____no_output_____
###Markdown
Testing your CodeIf the code below produces no errors, your algorithm is behaving correctly. You are almost ready to submit! Before you submit, go through the following submission checklist:**Submission Checklist**1. Does my code pass all tests?2. Does my code implement `A*` search and not some other search algorithm?3. Do I use an **admissible heuristic** to direct search efforts towards the goal?4. Do I use data structures which avoid unnecessarily slow lookups?When you can answer "yes" to all of these questions, submit by pressing the Submit button in the lower right!
###Code
from test import test
test(shortest_path)
###Output
_____no_output_____
###Markdown
Implementing a Route PlannerIn this project you will use A\* search to implement a "Google-maps" style route planning algorithm. The Map
###Code
# Run this cell first!
from helpers import Map, load_map_10, load_map_40, show_map
import math
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Map Basics
###Code
map_10 = load_map_10()
show_map(map_10)
###Output
_____no_output_____
###Markdown
The map above (run the code cell if you don't see it) shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other but they are not connected to the rest of the road network. This map is quite literal in its expression of distance and connectivity. On the graph above, the edge between 2 nodes(intersections) represents a literal straight road not just an abstract connection of 2 cities.These `Map` objects have two properties you will want to use to implement A\* search: `intersections` and `roads`**Intersections**The `intersections` are represented as a dictionary. In this example, there are 10 intersections, each identified by an x,y coordinate. The coordinates are listed below. You can hover over each dot in the map above to see the intersection number.
###Code
map_10.intersections
###Output
_____no_output_____
###Markdown
**Roads**The `roads` property is a list where `roads[i]` contains a list of the intersections that intersection `i` connects to.
###Code
# this shows that intersection 0 connects to intersections 7, 6, and 5
map_10.roads[0]
# This shows the full connectivity of the map
map_10.roads
# map_40 is a bigger map than map_10
map_40 = load_map_40()
show_map(map_40)
###Output
_____no_output_____
###Markdown
Advanced VisualizationsThe map above shows a network of roads which spans 40 different intersections (labeled 0 through 39). The `show_map` function which generated this map also takes a few optional parameters which might be useful for visualizing the output of the search algorithm you will write.* `start` - The "start" node for the search algorithm.* `goal` - The "goal" node.* `path` - An array of integers which corresponds to a valid sequence of intersection visits on the map.
###Code
# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
###Output
_____no_output_____
###Markdown
The Algorithm Writing your algorithmThe algorithm written will be responsible for generating a `path` like the one passed into `show_map` above. In fact, when called with the same map, start and goal as above, your algorithm should produce the path `[5, 16, 37, 12, 34]`. However you must complete several methods before it will work.```bash> PathPlanner(map_40, 5, 34).path[5, 16, 37, 12, 34]``` PathPlanner classThe below class is already partly implemented for you - you will implement additional functions that will also get included within this class further below.Let's very briefly walk through each part below.`__init__` - We initialize our path planner with a map, M, and typically a start and goal node. If either of these are `None`, the rest of the variables here are also set to none. If you don't have both a start and a goal, there's no path to plan! The rest of these variables come from functions you will soon implement. - `closedSet` includes any explored/visited nodes. - `openSet` are any nodes on our frontier for potential future exploration. - `cameFrom` will hold the previous node that best reaches a given node- `gScore` is the `g` in our `f = g + h` equation, or the actual cost to reach our current node- `fScore` is the combination of `g` and `h`, i.e. the `gScore` plus a heuristic; total cost to reach the goal- `path` comes from the `run_search` function, which is already built for you.`reconstruct_path` - This function just rebuilds the path after search is run, going from the goal node backwards using each node's `cameFrom` information.`_reset` - Resets *most* of our initialized variables for PathPlanner. This *does not* reset the map, start or goal variables, for reasons which you may notice later, depending on your implementation.`run_search` - This does a lot of the legwork to run search once you've implemented everything else below. First, it checks whether the map, goal and start have been added to the class. Then, it will also check if the other variables, other than `path`, are initialized (note that these only need to be re-run if the goal or start were not originally given when initializing the class, based on what we discussed above for `__init__`).From here, we use a function you will implement, `is_open_empty`, to check that there are still nodes to explore (you'll need to make sure to feed `openSet` the start node to make sure the algorithm doesn't immediately think there is nothing to open!). If we're at our goal, we reconstruct the path. If not, we move our current node from the frontier (`openSet`) and into explored (`closedSet`). Then, we check out the neighbors of the current node, check out their costs, and plan our next move.This is the main idea behind A*, but none of it is going to work until you implement all the relevant parts, which will be included below after the class code.
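As a quick illustration of the `f = g + h` bookkeeping this class maintains, the snippet below scores two hypothetical frontier nodes by hand. The coordinates and `g` values are made up purely for illustration and are not taken from `map_40`.
```python
import math

goal = (0.5, 0.5)                            # hypothetical goal coordinates
frontier = {
    'A': {'xy': (0.1, 0.2), 'g': 0.30},      # cost already paid to reach A
    'B': {'xy': (0.4, 0.6), 'g': 0.55},      # cost already paid to reach B
}

def h(xy):
    # Admissible heuristic: straight-line distance never overestimates road distance.
    return math.hypot(goal[0] - xy[0], goal[1] - xy[1])

for name, info in frontier.items():
    f = info['g'] + h(info['xy'])
    print(f"{name}: g={info['g']:.2f}  h={h(info['xy']):.2f}  f={f:.2f}")

# A* always expands the frontier node with the smallest f next.
```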
###Code
# Do not change this cell
# When you write your methods correctly this cell will execute
# without problems
class PathPlanner():
"""Construct a PathPlanner Object"""
def __init__(self, M, start=None, goal=None):
""" """
self.map = M
self.start= start
self.goal = goal
self.closedSet = self.create_closedSet() if goal != None and start != None else None
self.openSet = self.create_openSet() if goal != None and start != None else None
self.cameFrom = self.create_cameFrom() if goal != None and start != None else None
self.gScore = self.create_gScore() if goal != None and start != None else None
self.fScore = self.create_fScore() if goal != None and start != None else None
self.path = self.run_search() if self.map and self.start != None and self.goal != None else None
def reconstruct_path(self, current):
""" Reconstructs path after search """
total_path = [current]
while current in self.cameFrom.keys():
if current == self.start:
return total_path
current = self.cameFrom[current]
total_path.append(current)
def _reset(self):
"""Private method used to reset the closedSet, openSet, cameFrom, gScore, fScore, and path attributes"""
self.closedSet = None
self.openSet = None
self.cameFrom = None
self.gScore = None
self.fScore = None
self.path = self.run_search() if self.map and self.start and self.goal else None
def run_search(self):
""" """
if self.map == None:
raise(ValueError, "Must create map before running search. Try running PathPlanner.set_map(start_node)")
if self.goal == None:
raise(ValueError, "Must create goal node before running search. Try running PathPlanner.set_goal(start_node)")
if self.start == None:
raise(ValueError, "Must create start node before running search. Try running PathPlanner.set_start(start_node)")
self.closedSet = self.closedSet if self.closedSet != None else self.create_closedSet()
self.openSet = self.openSet if self.openSet != None else self.create_openSet()
self.cameFrom = self.cameFrom if self.cameFrom != None else self.create_cameFrom()
self.gScore = self.gScore if self.gScore != None else self.create_gScore()
self.fScore = self.fScore if self.fScore != None else self.create_fScore()
while not self.is_open_empty():
current = self.get_current_node()
if current == self.goal:
self.path = [x for x in reversed(self.reconstruct_path(current))]
return self.path
else:
self.openSet.remove(current)
self.closedSet.add(current)
for neighbor in self.get_neighbors(current):
if neighbor in self.closedSet:
continue # Ignore the neighbor which is already evaluated.
if not neighbor in self.openSet: # Discover a new node
self.openSet.add(neighbor)
# The distance from start to a neighbor
#the "dist_between" function may vary as per the solution requirements.
if self.get_tentative_gScore(current, neighbor) >= self.get_gScore(neighbor):
continue # This is not a better path.
# This path is the best until now. Record it!
self.record_best_path_to(current, neighbor)
print("No Path Found")
self.path = None
return False
###Output
_____no_output_____
###Markdown
Your TurnImplement the following functions to get your search algorithm running smoothly! Data StructuresThe next few functions require you to decide on data structures to use - lists, sets, dictionaries, etc. Make sure to think about what would work most efficiently for each of these. Some can be returned as just an empty data structure (see `create_closedSet()` for an example), while others should be initialized with one or more values within.
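One practical consideration: the closed set is checked with `in` on every neighbor expansion, so membership tests should be O(1). The snippet below is a small, self-contained illustration of why a `set` beats a `list` for that job (the sizes are arbitrary).
```python
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)

# Membership test: O(n) scan for a list vs. O(1) hash lookup for a set.
print("list:", timeit.timeit(lambda: (n - 1) in as_list, number=100))
print("set: ", timeit.timeit(lambda: (n - 1) in as_set, number=100))
```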
###Code
def create_closedSet(self):
""" Creates and returns a data structure suitable to hold the set of nodes already evaluated"""
# EXAMPLE: return a data structure suitable to hold the set of nodes already evaluated
return set()
def create_openSet(self):
""" Creates and returns a data structure suitable to hold the set of currently discovered nodes
that are not evaluated yet. Initially, only the start node is known."""
if self.start != None:
# TODO: return a data structure suitable to hold the set of currently discovered nodes
# that are not evaluated yet. Make sure to include the start node.
return set([self.start])
raise(ValueError, "Must create start node before creating an open set. Try running PathPlanner.set_start(start_node)")
def create_cameFrom(self):
    """Creates and returns a data structure that shows which node can most efficiently be reached from another,
    for each node."""
    # Use the planner's own map (rather than the map_40 global) so the planner
    # also works when constructed with a different map.
    cameFrom = {}
    for node in range(len(self.map.roads)):
        dist = []
        for neighbor in self.map.roads[node]:
            dist.append(self.distance(node, neighbor))
        minimum = min(dist)
        for index in range(len(dist)):
            if dist[index] == minimum:
                cameFrom[node] = self.map.roads[node][index]
    return cameFrom
def create_gScore(self):
"""Creates and returns a data structure that holds the cost of getting from the start node to that node,
for each node. The cost of going from start to start is zero."""
# TODO: return a data structure that holds the cost of getting from the start node to that node, for each node.
# for each node. The cost of going from start to start is zero. The rest of the node's values should
# be set to infinity.
cflist = {}
for node in self.map.intersections:
if node == self.start:
cflist[node] = 0
else:
cflist[node] = math.inf
return cflist
def create_fScore(self):
"""Creates and returns a data structure that holds the total cost of getting from the start node to the goal
by passing by that node, for each node. That value is partly known, partly heuristic.
For the first node, that value is completely heuristic."""
# TODO: return a data structure that holds the total cost of getting from the start node to the goal
# by passing by that node, for each node. That value is partly known, partly heuristic.
# For the first node, that value is completely heuristic. The rest of the node's value should be
# set to infinity.
flist = {}
for node in self.map.intersections:
        if node == self.start:  # the start node's f score is purely the heuristic estimate to the goal
flist[node] = self.heuristic_cost_estimate(node)
else:
flist[node] = math.inf
return flist
###Output
_____no_output_____
###Markdown
Set certain variablesThe below functions help set certain variables if they weren't a part of initializing our `PathPlanner` class, or if they need to be changed for another reason.
###Code
def set_map(self, M):
    """Method used to set map attribute """
    self._reset()
    self.start = None
    self.goal = None
    # Set map to new value.
    self.map = M
def set_start(self, start):
    """Method used to set start attribute """
    self._reset()
    # Set start value and clear the goal, closedSet, openSet, cameFrom, gScore, fScore,
    # and path attributes' values (the latter are handled by _reset).
    self.goal = None
    self.start = start
def set_goal(self, goal):
    """Method used to set goal attribute """
    self._reset()
    # Set goal value.
    self.goal = goal
###Output
_____no_output_____
###Markdown
Get node informationThe below functions concern grabbing certain node information. In `is_open_empty`, you are checking whether there are still nodes on the frontier to explore. In `get_current_node()`, you'll want to come up with a way to find the lowest `fScore` of the nodes on the frontier. In `get_neighbors`, you'll need to gather information from the map to find the neighbors of the current node.
###Code
def is_open_empty(self):
    """returns True if the open set is empty. False otherwise. """
    # Return True when the open set is missing or has no nodes left to explore.
    return not self.openSet
def get_current_node(self):
""" Returns the node in the open set with the lowest value of f(node)."""
# TODO: Return the node in the open set with the lowest value of f(node).
return min(self.openSet, key = lambda node: self.fScore[node])
def get_neighbors(self, node):
    """Returns the neighbors of a node"""
    # Look the node up in the planner's own map rather than the map_40 global.
    return self.map.roads[node]
###Output
_____no_output_____
###Markdown
Scores and CostsBelow, you'll get into the main part of the calculation for determining the best path - calculating the various parts of the `fScore`.
###Code
def get_gScore(self, node):
"""Returns the g Score of a node"""
# TODO: Return the g Score of a node
return self.gScore[node]
def distance(self, node_1, node_2):
    """ Computes the Euclidean L2 Distance"""
    # Straight-line distance between the two intersections on the planner's own map.
    x1, y1 = self.map.intersections[node_1]
    x2, y2 = self.map.intersections[node_2]
    return math.sqrt((x1 - x2)**2 + (y1 - y2)**2)
def get_tentative_gScore(self, current, neighbor):
"""Returns the tentative g Score of a node"""
# TODO: Return the g Score of the current node
# plus distance from the current node to it's neighbors
return self.get_gScore(current) + self.distance(current, neighbor)
def heuristic_cost_estimate(self, node):
""" Returns the heuristic cost estimate of a node """
# TODO: Return the heuristic cost estimate of a node
return self.distance(self.goal, node)
def calculate_fscore(self, node):
"""Calculate the f score of a node. """
# TODO: Calculate and returns the f score of a node.
# REMEMBER F = G + H
return self.get_gScore(node) + self.heuristic_cost_estimate(node)
###Output
_____no_output_____
###Markdown
Recording the best pathNow that you've implemented the various functions on scoring, you can record the best path to a given neighbor node from the current node!
###Code
def record_best_path_to(self, current, neighbor):
"""Record the best path to a node """
# TODO: Record the best path to a node, by updating cameFrom, gScore, and fScore
self.cameFrom[neighbor] = current
self.gScore[neighbor] = self.gScore[current] + self.distance(current, neighbor)
self.fScore[neighbor] = self.calculate_fscore(neighbor)
###Output
_____no_output_____
###Markdown
Associating your functions with the `PathPlanner` classTo check your implementations, we want to associate all of the above functions back to the `PathPlanner` class. Python makes this easy using the dot notation (i.e. `PathPlanner.myFunction`), and setting them equal to your function implementations. Run the below code cell for this to occur.*Note*: If you need to make further updates to your functions above, you'll need to re-run this code cell to associate the newly updated function back with the `PathPlanner` class again!
###Code
# Associates implemented functions with PathPlanner class
PathPlanner.create_closedSet = create_closedSet
PathPlanner.create_openSet = create_openSet
PathPlanner.create_cameFrom = create_cameFrom
PathPlanner.create_gScore = create_gScore
PathPlanner.create_fScore = create_fScore
PathPlanner.set_map = set_map
PathPlanner.set_start = set_start
PathPlanner.set_goal = set_goal
PathPlanner.is_open_empty = is_open_empty
PathPlanner.get_current_node = get_current_node
PathPlanner.get_neighbors = get_neighbors
PathPlanner.get_gScore = get_gScore
PathPlanner.distance = distance
PathPlanner.get_tentative_gScore = get_tentative_gScore
PathPlanner.heuristic_cost_estimate = heuristic_cost_estimate
PathPlanner.calculate_fscore = calculate_fscore
PathPlanner.record_best_path_to = record_best_path_to
###Output
_____no_output_____
###Markdown
Preliminary TestThe below is the first test case, just based off of one set of inputs. If some of the functions above aren't implemented yet, or are implemented incorrectly, you likely will get an error from running this cell. Try debugging the error to help you figure out what needs further revision!
###Code
planner = PathPlanner(map_40, 5, 34)
path = planner.path
if path == [5, 16, 37, 12, 34]:
print("great! Your code works for these inputs!")
else:
print("something is off, your code produced the following:")
print(path)
###Output
great! Your code works for these inputs!
###Markdown
VisualizeOnce the above code worked for you, let's visualize the results of your algorithm!
###Code
# Visualize the result of the above test! You can also change start and goal here to check other paths
start = 5
goal = 34
show_map(map_40, start=start, goal=goal, path=PathPlanner(map_40, start, goal).path)
###Output
_____no_output_____
###Markdown
Testing your CodeIf the code below produces no errors, your algorithm is behaving correctly. You are almost ready to submit! Before you submit, go through the following submission checklist:**Submission Checklist**1. Does my code pass all tests?2. Does my code implement `A*` search and not some other search algorithm?3. Do I use an **admissible heuristic** to direct search efforts towards the goal?4. Do I use data structures which avoid unnecessarily slow lookups?When you can answer "yes" to all of these questions, and also have answered the written questions below, submit by pressing the Submit button in the lower right!
###Code
from test import test
test(PathPlanner)
###Output
All tests pass! Congratulations!
###Markdown
Implementing a Route PlannerIn this project you will use A\* search to implement a "Google-maps" style route planning algorithm. The Map
###Code
# Run this cell first!
from helpers import Map, load_map_10, load_map_40, show_map
import math
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Map Basics
###Code
map_10 = load_map_10()
show_map(map_10)
###Output
_____no_output_____
###Markdown
The map above (run the code cell if you don't see it) shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other but they are not connected to the rest of the road network. This map is quite literal in its expression of distance and connectivity. On the graph above, the edge between 2 nodes(intersections) represents a literal straight road not just an abstract connection of 2 cities.These `Map` objects have two properties you will want to use to implement A\* search: `intersections` and `roads`**Intersections**The `intersections` are represented as a dictionary. In this example, there are 10 intersections, each identified by an x,y coordinate. The coordinates are listed below. You can hover over each dot in the map above to see the intersection number.
###Code
map_10.intersections
###Output
_____no_output_____
###Markdown
**Roads**The `roads` property is a list where `roads[i]` contains a list of the intersections that intersection `i` connects to.
###Code
# this shows that intersection 0 connects to intersections 7, 6, and 5
map_10.roads[0]
# This shows the full connectivity of the map
map_10.roads
# map_40 is a bigger map than map_10
map_40 = load_map_40()
show_map(map_40)
###Output
_____no_output_____
###Markdown
Advanced VisualizationsThe map above shows a network of roads which spans 40 different intersections (labeled 0 through 39). The `show_map` function which generated this map also takes a few optional parameters which might be useful for visualizing the output of the search algorithm you will write.* `start` - The "start" node for the search algorithm.* `goal` - The "goal" node.* `path` - An array of integers which corresponds to a valid sequence of intersection visits on the map.
###Code
# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
###Output
_____no_output_____
###Markdown
The Algorithm Writing your algorithmThe algorithm written will be responsible for generating a `path` like the one passed into `show_map` above. In fact, when called with the same map, start and goal as above, your algorithm should produce the path `[5, 16, 37, 12, 34]`. However you must complete several methods before it will work.```bash> PathPlanner(map_40, 5, 34).path[5, 16, 37, 12, 34]``` PathPlanner classThe below class is already partly implemented for you - you will implement additional functions that will also get included within this class further below.Let's very briefly walk through each part below.`__init__` - We initialize our path planner with a map, M, and typically a start and goal node. If either of these are `None`, the rest of the variables here are also set to none. If you don't have both a start and a goal, there's no path to plan! The rest of these variables come from functions you will soon implement. - `closedSet` includes any explored/visited nodes. - `openSet` are any nodes on our frontier for potential future exploration. - `cameFrom` will hold the previous node that best reaches a given node- `gScore` is the `g` in our `f = g + h` equation, or the actual cost to reach our current node- `fScore` is the combination of `g` and `h`, i.e. the `gScore` plus a heuristic; total cost to reach the goal- `path` comes from the `run_search` function, which is already built for you.`reconstruct_path` - This function just rebuilds the path after search is run, going from the goal node backwards using each node's `cameFrom` information.`_reset` - Resets *most* of our initialized variables for PathPlanner. This *does not* reset the map, start or goal variables, for reasons which you may notice later, depending on your implementation.`run_search` - This does a lot of the legwork to run search once you've implemented everything else below. First, it checks whether the map, goal and start have been added to the class. Then, it will also check if the other variables, other than `path`, are initialized (note that these only need to be re-run if the goal or start were not originally given when initializing the class, based on what we discussed above for `__init__`).From here, we use a function you will implement, `is_open_empty`, to check that there are still nodes to explore (you'll need to make sure to feed `openSet` the start node to make sure the algorithm doesn't immediately think there is nothing to open!). If we're at our goal, we reconstruct the path. If not, we move our current node from the frontier (`openSet`) and into explored (`closedSet`). Then, we check out the neighbors of the current node, check out their costs, and plan our next move.This is the main idea behind A*, but none of it is going to work until you implement all the relevant parts, which will be included below after the class code.
###Code
# Do not change this cell
# When you write your methods correctly this cell will execute
# without problems
class PathPlanner():
"""Construct a PathPlanner Object"""
def __init__(self, M, start=None, goal=None):
""" """
self.map = M
self.start= start
self.goal = goal
self.closedSet = self.create_closedSet() if goal != None and start != None else None
self.openSet = self.create_openSet() if goal != None and start != None else None
self.cameFrom = self.create_cameFrom() if goal != None and start != None else None
self.gScore = self.create_gScore() if goal != None and start != None else None
self.fScore = self.create_fScore() if goal != None and start != None else None
self.path = self.run_search() if self.map and self.start != None and self.goal != None else None
def reconstruct_path(self, current):
""" Reconstructs path after search """
total_path = [current]
while current in self.cameFrom.keys():
current = self.cameFrom[current]
total_path.append(current)
return total_path
def _reset(self):
"""Private method used to reset the closedSet, openSet, cameFrom, gScore, fScore, and path attributes"""
self.closedSet = None
self.openSet = None
self.cameFrom = None
self.gScore = None
self.fScore = None
self.path = self.run_search() if self.map and self.start and self.goal else None
def run_search(self):
""" """
if self.map == None:
raise(ValueError, "Must create map before running search. Try running PathPlanner.set_map(start_node)")
if self.goal == None:
raise(ValueError, "Must create goal node before running search. Try running PathPlanner.set_goal(start_node)")
if self.start == None:
raise(ValueError, "Must create start node before running search. Try running PathPlanner.set_start(start_node)")
self.closedSet = self.closedSet if self.closedSet != None else self.create_closedSet()
self.openSet = self.openSet if self.openSet != None else self.create_openSet()
self.cameFrom = self.cameFrom if self.cameFrom != None else self.create_cameFrom()
self.gScore = self.gScore if self.gScore != None else self.create_gScore()
self.fScore = self.fScore if self.fScore != None else self.create_fScore()
while not self.is_open_empty():
current = self.get_current_node()
if current == self.goal:
self.path = [x for x in reversed(self.reconstruct_path(current))]
return self.path
else:
self.openSet.remove(current)
self.closedSet.add(current)
for neighbor in self.get_neighbors(current):
if neighbor in self.closedSet:
continue # Ignore the neighbor which is already evaluated.
if not neighbor in self.openSet: # Discover a new node
self.openSet.add(neighbor)
# The distance from start to a neighbor
#the "dist_between" function may vary as per the solution requirements.
if self.get_tentative_gScore(current, neighbor) >= self.get_gScore(neighbor):
continue # This is not a better path.
# This path is the best until now. Record it!
self.record_best_path_to(current, neighbor)
print("No Path Found")
self.path = None
return False
###Output
_____no_output_____
###Markdown
Your TurnImplement the following functions to get your search algorithm running smoothly! Data StructuresThe next few functions require you to decide on data structures to use - lists, sets, dictionaries, etc. Make sure to think about what would work most efficiently for each of these. Some can be returned as just an empty data structure (see `create_closedSet()` for an example), while others should be initialized with one or more values within.
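As a quick illustration of why the container choice matters (a toy sketch, not part of the graded code): membership tests on a `set` are constant time on average, while the same test on a `list` scans every element.
```python
explored_list = [2, 5, 16]
explored_set = {2, 5, 16}
print(5 in explored_list)   # True, but an O(n) scan
print(5 in explored_set)    # True, an O(1) average hash lookup
```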
###Code
def create_closedSet(self):
""" Creates and returns a data structure suitable to hold the set of nodes already evaluated"""
# EXAMPLE: return a data structure suitable to hold the set of nodes already evaluated
return set()
def create_openSet(self):
""" Creates and returns a data structure suitable to hold the set of currently discovered nodes
that are not evaluated yet. Initially, only the start node is known."""
if self.start != None:
# TODO: return a data structure suitable to hold the set of currently discovered nodes
# that are not evaluated yet. Make sure to include the start node.
openSet = set()
openSet.add(self.start)
return openSet
raise(ValueError, "Must create start node before creating an open set. Try running PathPlanner.set_start(start_node)")
def create_cameFrom(self):
"""Creates and returns a data structure that shows which node can most efficiently be reached from another,
for each node."""
# TODO: return a data structure that shows which node can most efficiently be reached from another,
# for each node.
return dict()
def create_gScore(self):
"""Creates and returns a data structure that holds the cost of getting from the start node to that node,
for each node. The cost of going from start to start is zero."""
# TODO: return a data structure that holds the cost of getting from the start node to that node, for each node.
# for each node. The cost of going from start to start is zero. The rest of the node's values should
# be set to infinity.
gScore = dict()
for i in self.map.intersections.keys():
if i == self.start:
gScore[i] = 0
else:
gScore[i]= float('inf')
return gScore
def create_fScore(self):
"""Creates and returns a data structure that holds the total cost of getting from the start node to the goal
by passing by that node, for each node. That value is partly known, partly heuristic.
For the first node, that value is completely heuristic."""
# TODO: return a data structure that holds the total cost of getting from the start node to the goal
# by passing by that node, for each node. That value is partly known, partly heuristic.
# For the first node, that value is completely heuristic. The rest of the node's value should be
# set to infinity.
H = ((self.map.intersections[self.start][0]-self.map.intersections[self.goal][0])**2 \
+(self.map.intersections[self.start][1]-self.map.intersections[self.goal][1])**2)**(1/2)
fScore = dict()
for i in self.map.intersections.keys():
if i == self.start:
fScore[i] = H
else:
fScore[i]= float('inf')
return fScore
###Output
_____no_output_____
###Markdown
Set certain variablesThe below functions help set certain variables if they weren't a part of initializing our `PathPlanner` class, or if they need to be changed for another reason.
###Code
def set_map(self, M):
"""Method used to set map attribute """
    self._reset()
self.start = None
self.goal = None
# TODO: Set map to new value.
self.map = M
def set_start(self, start):
"""Method used to set start attribute """
    self._reset()
# TODO: Set start value. Remember to remove goal, closedSet, openSet, cameFrom, gScore, fScore,
# and path attributes' values.
self.start = start
self.goal = None
self.closedSet = None
self.openSet = None
self.cameFrom = None
self.gScore = None
self.fScore = None
self.path = None
def set_goal(self, goal):
"""Method used to set goal attribute """
    self._reset()
# TODO: Set goal value.
self.goal= goal
###Output
_____no_output_____
###Markdown
Get node informationThe below functions concern grabbing certain node information. In `is_open_empty`, you are checking whether there are still nodes on the frontier to explore. In `get_current_node()`, you'll want to come up with a way to find the lowest `fScore` of the nodes on the frontier. In `get_neighbors`, you'll need to gather information from the map to find the neighbors of the current node.
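For `get_current_node`, one common idiom is `min` with a `key` function over the frontier; a sketch with made-up scores:
```python
# Toy values for illustration only.
openSet = {5, 16, 37}
fScore = {5: 10.2, 16: 7.5, 37: 9.1}
print(min(openSet, key=fScore.get))   # 16, the lowest f-score on the frontier
```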
###Code
def is_open_empty(self):
"""returns True if the open set is empty. False otherwise. """
# TODO: Return True if the open set is empty. False otherwise.
return len(self.openSet) ==0
def get_current_node(self):
""" Returns the node in the open set with the lowest value of f(node)."""
# TODO: Return the node in the open set with the lowest value of f(node).
f_value = {}
for node in self.openSet:
f_value[node] = self.fScore[node]
current_node = [k for k, v in f_value.items() if v == min(f_value.values())]
return current_node[0]
def get_neighbors(self, node):
"""Returns the neighbors of a node"""
# TODO: Return the neighbors of a node
neighbors = self.map.roads[node]
return neighbors
###Output
_____no_output_____
###Markdown
Scores and CostsBelow, you'll get into the main part of the calculation for determining the best path - calculating the various parts of the `fScore`.
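Both the edge cost and the heuristic below reduce to the straight-line (L2) distance between two intersections; for example, with assumed coordinates (0, 0) and (3, 4):
```python
from math import hypot
# Toy coordinates, for illustration only.
x1, y1 = 0.0, 0.0
x2, y2 = 3.0, 4.0
print(hypot(x2 - x1, y2 - y1))   # 5.0
```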
###Code
def get_gScore(self, node):
"""Returns the g Score of a node"""
# TODO: Return the g Score of a node
return self.gScore[node]
def distance(self, node_1, node_2):
""" Computes the Euclidean L2 Distance"""
# TODO: Compute and return the Euclidean L2 Distance
L2 = ((self.map.intersections[node_1][0]-self.map.intersections[node_2][0])**2 \
+(self.map.intersections[node_1][1]-self.map.intersections[node_2][1])**2)**(1/2)
return L2
def get_tentative_gScore(self, current, neighbor):
"""Returns the tentative g Score of a node"""
# TODO: Return the g Score of the current node
# plus distance from the current node to it's neighbors
    G_t = self.get_gScore(current) + self.distance(current, neighbor)
return G_t
def heuristic_cost_estimate(self, node):
""" Returns the heuristic cost estimate of a node """
# TODO: Return the heuristic cost estimate of a node
H = ((self.map.intersections[node][0]-self.map.intersections[self.goal][0])**2 \
+(self.map.intersections[node][1]-self.map.intersections[self.goal][1])**2)**(1/2)
return H
def calculate_fscore(self, node):
"""Calculate the f score of a node. """
# TODO: Calculate and returns the f score of a node.
# REMEMBER F = G + H
    F = self.get_gScore(node) + self.heuristic_cost_estimate(node)
return F
###Output
_____no_output_____
###Markdown
Recording the best pathNow that you've implemented the various functions on scoring, you can record the best path to a given neighbor node from the current node!
###Code
def record_best_path_to(self, current, neighbor):
"""Record the best path to a node """
# TODO: Record the best path to a node, by updating cameFrom, gScore, and fScore
self.cameFrom[neighbor] = current
    self.gScore[neighbor] = self.get_tentative_gScore(current, neighbor)
    self.fScore[neighbor] = self.calculate_fscore(neighbor)
###Output
_____no_output_____
###Markdown
Associating your functions with the `PathPlanner` classTo check your implementations, we want to associate all of the above functions back to the `PathPlanner` class. Python makes this easy using the dot notation (i.e. `PathPlanner.myFunction`), and setting them equal to your function implementations. Run the below code cell for this to occur.*Note*: If you need to make further updates to your functions above, you'll need to re-run this code cell to associate the newly updated function back with the `PathPlanner` class again!
###Code
# Associates implemented functions with PathPlanner class
PathPlanner.create_closedSet = create_closedSet
PathPlanner.create_openSet = create_openSet
PathPlanner.create_cameFrom = create_cameFrom
PathPlanner.create_gScore = create_gScore
PathPlanner.create_fScore = create_fScore
PathPlanner.set_map = set_map
PathPlanner.set_start = set_start
PathPlanner.set_goal = set_goal
PathPlanner.is_open_empty = is_open_empty
PathPlanner.get_current_node = get_current_node
PathPlanner.get_neighbors = get_neighbors
PathPlanner.get_gScore = get_gScore
PathPlanner.distance = distance
PathPlanner.get_tentative_gScore = get_tentative_gScore
PathPlanner.heuristic_cost_estimate = heuristic_cost_estimate
PathPlanner.calculate_fscore = calculate_fscore
PathPlanner.record_best_path_to = record_best_path_to
###Output
_____no_output_____
###Markdown
Preliminary TestThe below is the first test case, just based off of one set of inputs. If some of the functions above aren't implemented yet, or are implemented incorrectly, you likely will get an error from running this cell. Try debugging the error to help you figure out what needs further revision!
###Code
planner = PathPlanner(map_40, 5, 34)
path = planner.path
if path == [5, 16, 37, 12, 34]:
print("great! Your code works for these inputs!")
else:
print("something is off, your code produced the following:")
print(path)
###Output
great! Your code works for these inputs!
###Markdown
VisualizeOnce the above code worked for you, let's visualize the results of your algorithm!
###Code
# Visualize your the result of the above test! You can also change start and goal here to check other paths
start = 5
goal = 34
show_map(map_40, start=start, goal=goal, path=PathPlanner(map_40, start, goal).path)
###Output
_____no_output_____
###Markdown
Testing your CodeIf the code below produces no errors, your algorithm is behaving correctly. You are almost ready to submit! Before you submit, go through the following submission checklist:**Submission Checklist**1. Does my code pass all tests?2. Does my code implement `A*` search and not some other search algorithm?3. Do I use an **admissible heuristic** to direct search efforts towards the goal?4. Do I use data structures which avoid unnecessarily slow lookups?When you can answer "yes" to all of these questions, and also have answered the written questions below, submit by pressing the Submit button in the lower right!
###Code
from test import test
test(PathPlanner)
###Output
All tests pass! Congratulations!
|
curves_2d.ipynb | ###Markdown
Lissajous-curvesPlot two-dimensional curves (for instance the so-called Lissajous curves) with a prescribed parametrization UseThis notebook plots any parametrized curve of the form $(x(t), y(t))$ with $t\in[-\pi,\pi]$. Functions $x(t)$ and $y(t)$ must be introduced by the user. The curve is always plotted in the square $[-1,1]\times [-1,1]$.To execute this notebook, please go to the top menu and click on ``Cell -> Run all``----In case of **error** or **warning** messages, please go to the top menu and click on ``Cell -> Run all``
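For example (an assumed pair of inputs typed into the x(t) and y(t) boxes below), `sin(3*t)` and `sin(4*t)` give the classic 3:4 Lissajous figure. The same curve can be previewed non-interactively with a short snippet:
```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-np.pi, np.pi, 1000)
plt.plot(np.sin(3 * t), np.sin(4 * t))   # x(t) = sin(3t), y(t) = sin(4t)
plt.axis('square')
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.show()
```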
###Code
hide_me
from numpy import *
import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import display, clear_output
%matplotlib inline
# To prevent automatic figure display when execution of the cell ends
%config InlineBackend.close_figures=False
time = linspace(-pi, pi, 1000)
plt.ioff()
ax=plt.gca()
plt.plot(time,time,'b')
plt.xlabel('$x(t)$')
plt.ylabel('$y(t)$')
plt.axis('square')
plt.xlim([-1.,1.])
plt.ylim([-1.,1.])
out=widgets.Output()
def plot(t=0):
ax.lines[0].set_xdata(xfun(time))
ax.lines[0].set_ydata(yfun(time))
with out:
clear_output(wait=True)
display(ax.figure)
# Cells for x(t) and y(t)
x = widgets.Text(
value='',
placeholder='x(t)',
description='Function x(t):',
disabled=False
)
y = widgets.Text(
value='',
placeholder='y(t)',
description='Function y(t):',
disabled=False
)
button=widgets.Button(
description='Plot',
disabled=False,
button_style='',
tooltip='Plot',
icon='check'
)
vbox=widgets.VBox(children=(out,x,y,button))
display(vbox)
def on_button_clicked(b):
xfun = lambda t: eval(x.value)
yfun = lambda t: eval(y.value)
ax.lines[0].set_xdata(xfun(time))
ax.lines[0].set_ydata(yfun(time))
plt.axis('square')
plt.xlim([-1.,1.])
plt.ylim([-1.,1.])
with out:
clear_output(wait=True)
display(ax.figure)
button.on_click(on_button_clicked)
###Output
_____no_output_____ |
notebooks/03-COVID19-GlobalAnalysis.ipynb | ###Markdown
🔬 Global `SARS-CoV-2` DNA Sequence Analysis 🦠 `3631` Sequences of Covid-19 on GenBankDownloads available from The National Library of Medicine [NCBI Virus](https://www.ncbi.nlm.nih.gov/labs/virus/vssi//virus?SeqType_s=Nucleotide&VirusLineage_ss=SARS-CoV-2,+taxid:2697049) resource.1. Download all `fasta` nucleotide sequences2. Download `Current table view` csv for all sequence metadata Sequence data also available from the _China National Center for Bioinformation_ [CNCB](https://bigd.big.ac.cn/ncov/release_genome?lang=engoto)
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
pd.options.display.max_rows = 50
%matplotlib inline
from Bio.Seq import Seq
from Bio import SeqIO, pairwise2
from Bio.pairwise2 import format_alignment
from tqdm.notebook import tqdm
###Output
_____no_output_____
###Markdown
1. View NCBI Metadata
###Code
df = pd.read_csv('../data/NCBI_sequences_metadata.csv')
df.head(2)
df.Geo_Location.value_counts().head(20).plot(kind='barh')
df.Host.value_counts()
###Output
_____no_output_____
###Markdown
* `Panthera tigris jacksoni` is a Malayan tiger from the Bronx Zoo in New York From WCS Newsroom> On April 5, 2020, we reported that a four-year-old female Malayan tiger had tested positive for COVID-19. We can confirm that the **three other tigers in Tiger Mountain and the three African lions** that exhibited a cough have also **tested positive** for COVID-19. 2. Parse Sequence Records
###Code
%%time
data = SeqIO.index("../data/SARS-CoV-2-sequences.fasta", "fasta")
# Get accession
records = list(data)
print(f'COVID-19 DNA Sequences: {len(data)}\n')
for i, record in enumerate(records[:4]):
print(f'{i}. {data[record].description}')
###Output
COVID-19 DNA Sequences: 3632
0. NC_045512 |Severe acute respiratory syndrome coronavirus 2 isolate Wuhan-Hu-1| complete genome
1. MT459832 |Severe acute respiratory syndrome coronavirus 2 isolate SARS-CoV-2/human/GRC/34_36284/2020| complete genome
2. MT459833 |Severe acute respiratory syndrome coronavirus 2 isolate SARS-CoV-2/human/GRC/43_35679/2020| complete genome
3. MT459834 |Severe acute respiratory syndrome coronavirus 2 isolate SARS-CoV-2/human/GRC/50_36277/2020| complete genome
###Markdown
3. Create dataset
###Code
%%time
def load_dataset(n_samples='full'):
keep_cols = ['Accession', 'Geo_Location', 'Collection_Date']
print('loading full sequence dataset')
df = pd.read_csv('../data/NCBI_sequences_metadata.csv')[keep_cols]
df['Seq'] = [data[rec].seq for rec in list(data)]
if n_samples != 'full':
print(f'creating n={n_samples} samples')
df = pd.concat([df.head(1), df.sample(n_samples-1, random_state=1)])
df['Len'] = df.Seq.apply(lambda x: len(x))
df['Proteins'] = df.Seq.apply(lambda s: len(s.translate()))
df['Collection_Date'] = pd.to_datetime(df.Collection_Date)
df.at[0, 'Collection_Date'] = pd.to_datetime('2019-12-29')
df = df.sort_values('Collection_Date', ascending=True).reset_index(drop=True)
df.at[0, 'Collection_Date'] = pd.to_datetime('2019-12-29')
df['Days_Since_Origin'] = (df['Collection_Date'] - pd.to_datetime('2019-12-23')).dt.days
return df.reset_index(drop=True)
df = load_dataset(n_samples='full')
print('Dataframe:', df.shape)
df.head()
# filter out incomplete or partial genomes
print(df.shape)
df = df[df.Len > 29700].reset_index(drop=True)  # reset the index so positional lookups below (df.Seq[i]) stay valid
print(df.shape)
# New distribution of similar length genomes
df.Len.hist(bins=100)
###Output
_____no_output_____
###Markdown
4.0 Sequence Alignment* `Global alignment` finds the best concordance/agreement between all characters in two sequences* `Local Alignment` finds just the subsequences that align the best
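A quick way to see the difference is to run both modes from Biopython's `pairwise2` on toy sequences (illustrative only; the analysis below uses the global variant):
```python
from Bio import pairwise2
from Bio.pairwise2 import format_alignment

a = pairwise2.align.globalxx("ACCGT", "ACG", one_alignment_only=True)[0]
b = pairwise2.align.localxx("ACCGT", "ACG", one_alignment_only=True)[0]
print(format_alignment(*a))   # aligns the sequences end to end, inserting gaps
print(format_alignment(*b))   # reports only the best-matching local stretch
```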
###Code
def pairwise_alignment(s1, s2, n, verbose=True):
if n == 'full': n = min(len(s1), len(s2))
alignment = pairwise2.align.globalxx(s1[0:n], s2[0:n], one_alignment_only=True, score_only=True)
if verbose:
print(f'Pairwise alignment: {alignment:.0f}/{n} ({(alignment/n)*100:0.2f}%)')
print(f'Number of sequence differences: {int(n-alignment)}')
return int(n-alignment)
# Sequence differences in samples:
pairwise_alignment(df.Seq[1], df.Seq[2], n=1000)
###Output
Pairwise alignment: 975/1000 (97.50%)
Number of sequence differences: 25
###Markdown
* Run Sequence Alignment Comparison
###Code
# Run sequence alignment against origin sample (slow)
res = []
for i in tqdm(range(len(df))):
res.append(pairwise_alignment(df.Seq[0], df.Seq[i], n='full', verbose=False))
df['Sequence_diff'] = pd.Series(res)
df.head()
df.to_csv('../data/NCBI_sequence_alignment.csv')
sns.lmplot(x="Days_Since_Origin", y="Sequence_diff", data=df)
###Output
_____no_output_____
###Markdown
`SARS-CoV-2` Genome Sample Changes Over Time
###Code
sns.set(style="darkgrid")
seq_lim = 2000
g = sns.jointplot(x="Days_Since_Origin", y="Sequence_diff", data=df[df.Sequence_diff < seq_lim],
kind="reg", truncate=False,
xlim=(-5, df.Days_Since_Origin.max()+5),
ylim=(-seq_lim/30, seq_lim),
color="m", height=7, scatter_kws={'alpha': 0.15})
###Output
_____no_output_____
###Markdown
> The other bit of information to come out of this study is an indication of where changes in the virus' proteins are tolerated. **This inability to tolerate changes in an area of the genome tends to be an indication that the protein encoded by that part of the genome has an essential function.** The researchers identified a number of these, one of which is the part of the spike protein that came from the pangolin virus. >Of all 6,400 of the SARS-CoV-2 genomes isolated during the pandemic, **only eight from a single cluster of cases had any changes in this region.** So, it's looking likely that the pangolin sequence is essential for the virus' ability to target humans.John Timmer, 6/2/2020. Arstechnica. [SARS-CoV-2 looks like a hybrid of viruses from two different species](https://arstechnica.com/science/2020/06/sars-cov-2-looks-like-a-hybrid-of-viruses-from-two-different-species/)Science Advances, 2019. [DOI: 10.1126/sciadv.abb9153](https://advances.sciencemag.org/content/early/2020/05/28/sciadv.abb9153) > A key stretch of the spike protein, the one that determines which proteins on human cells it interacts with, **came from a pangolin version of the virus** through recombination.* TODO: Find the area that has very few mutations, find the spike protein
###Code
pairwise_alignment(df.Seq[0], df.Seq[1], n='full')
pairwise_alignment(df.Seq[2], df.Seq[11], n='full')
pairwise_alignment(df.Seq[3], df.Seq[33], n='full')
pairwise_alignment(df.Seq[44], df.Seq[88], n='full')
pairwise_alignment(df.Seq[0], df.Seq[99], n='full')
# Noteworthy records
patient_zero = 'NC_045512'
recent_cali = 'MT460092'
bronx_tiger = 'MT365033'
samples = [patient_zero, recent_cali, bronx_tiger]
data[patient_zero].seq
data[bronx_tiger].seq
df[df.Accession.isin(samples)]
from Bio.SeqUtils.ProtParam import ProteinAnalysis
from collections import Counter
# DNA to mRNA to Polypeptide (protein)
data[recent_cali].seq.transcribe().translate()
for s in samples:
print(s)
print(data[s].seq.transcribe().translate()[:100])
data[bronx_tiger].seq.transcribe().translate()
###Output
_____no_output_____
###Markdown
Dot Plots of Opening Sequences
###Code
def delta(x,y):
return 0 if x == y else 1
def M(seq1,seq2,i,j,k):
return sum(delta(x,y) for x,y in zip(seq1[i:i+k],seq2[j:j+k]))
def makeMatrix(seq1,seq2,k):
n = len(seq1)
m = len(seq2)
return [[M(seq1,seq2,i,j,k) for j in range(m-k+1)] for i in range(n-k+1)]
def plotMatrix(M,t, seq1, seq2, nonblank = chr(0x25A0), blank = ' '):
print(' |' + seq2)
print('-'*(2 + len(seq2)))
for label,row in zip(seq1,M):
line = ''.join(nonblank if s < t else blank for s in row)
print(label + '|' + line)
def dotplot(seq1,seq2,k = 1,t = 1):
M = makeMatrix(seq1,seq2,k)
plotMatrix(M, t, seq1,seq2) #experiment with character choice
# Plotting function to illustrate deeper matches
def dotplotx(seq1, seq2, n):
seq1=seq1[0:n]
seq2=seq2[0:n]
plt.imshow(np.array(makeMatrix(seq1,seq2,1)))
# on x-axis list all sequences of seq 2
xt=plt.xticks(np.arange(len(list(seq2))),list(seq2))
# on y-axis list all sequences of seq 1
yt=plt.yticks(np.arange(len(list(seq1))),list(seq1))
plt.show()
dotplotx(df.Seq[1].transcribe().translate(),
df.Seq[3].transcribe().translate(), n=100)
###Output
_____no_output_____ |
lectures/L10/Exercise_1-Final.ipynb | ###Markdown
Exercise 1Write a subclass called `Rcircle` of the superclass `Circle`. Requirements* Must inherit from `Circle`* Must have it's own constructor. The constructor accepts the circle radius supplied by the user as its argument. That is `__init__(self, r)`.* The circle radius must be set in the constructor* The `Rcircle` subclass must reimplement the `radius` function. It does not make sense for `Rcircle` to inherit the `radius` method from `Circle` since an instance of `Rcircle` doesn't know anything about the coordinates of the circle.* Include the `__eq__` special method to compare two circles.Demo your class.Your `Circle` class from last time should have looked something like this:
###Code
import math
class CircleClass():
def __init__(self, x, y):
self.x = x
self.y = y
def radius(self):
r2 = self.x**2 + self.y**2
r = math.sqrt(r2)
return r
def area(self):
r = self.radius()
area = math.pi*(r**2)
return area
def circumference(self):
r = self.radius()
c = 2*math.pi*r
return c
class Rcircle(CircleClass):
def __init__(self, r):
self.r = r
def radius(self):
return self.r
def __eq__(self,other):
return self.r == other.r
mycircle = Rcircle(4)
othercircle = Rcircle(5)
print(mycircle.radius())
print(mycircle == othercircle)
###Output
4
False
|
02_Sample_Based_Learning_Methods/week3/C2W3_programming_assignment/C2W3_programming_assignment.ipynb | ###Markdown
Assignment: Policy Evaluation in Cliff Walking EnvironmentWelcome to the Course 2 Module 2 Programming Assignment! In this assignment, you will implement one of the fundamental sample and bootstrapping based model free reinforcement learning agents for prediction. This is namely one that uses one-step temporal difference learning, also known as TD(0). The task is to design an agent for policy evaluation in the Cliff Walking environment. Recall that policy evaluation is the prediction problem where the goal is to accurately estimate the values of states given some policy. Learning Objectives- Implement parts of the Cliff Walking environment, to get experience specifying MDPs [Section 1].- Implement an agent that uses bootstrapping and, particularly, TD(0) [Section 2].- Apply TD(0) to estimate value functions for different policies, i.e., run policy evaluation experiments [Section 3]. The Cliff Walking EnvironmentThe Cliff Walking environment is a gridworld with a discrete state space and discrete action space. The agent starts at grid cell S. The agent can move (deterministically) to the four neighboring cells by taking actions Up, Down, Left or Right. Trying to move out of the boundary results in staying in the same location. So, for example, trying to move left when at a cell on the leftmost column results in no movement at all and the agent remains in the same location. The agent receives -1 reward per step in most states, and -100 reward when falling off of the cliff. This is an episodic task; termination occurs when the agent reaches the goal grid cell G. Falling off of the cliff results in resetting to the start state, without termination.The diagram below showcases the description above and also illustrates two of the policies we will be evaluating. Packages.We import the following libraries that are required for this assignment. We shall be using the following libraries:1. jdc: Jupyter magic that allows defining classes over multiple jupyter notebook cells.2. numpy: the fundamental package for scientific computing with Python.3. matplotlib: the library for plotting graphs in Python.4. RL-Glue: the library for reinforcement learning experiments.5. BaseEnvironment, BaseAgent: the base classes from which we will inherit when creating the environment and agent classes in order for them to support the RL-Glue framework.6. Manager: the file allowing for visualization and testing.7. itertools.product: the function that can be used easily to compute permutations.8. tqdm.tqdm: Provides progress bars for visualizing the status of loops.**Please do not import other libraries** — this will break the autograder.**NOTE: For this notebook, there is no need to make any calls to methods of random number generators. Spurious or missing calls to random number generators may affect your results.**
###Code
# Do not modify this cell!
import jdc
# --
import numpy as np
# --
from rl_glue import RLGlue
# --
from Agent import BaseAgent
from Environment import BaseEnvironment
# --
from manager import Manager
# --
from itertools import product
# --
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Section 1. EnvironmentIn the first part of this assignment, you will get to see how the Cliff Walking environment is implemented. You will also get to implement parts of it to aid your understanding of the environment and more generally how MDPs are specified. In particular, you will implement the logic for: 1. Converting 2-dimensional coordinates to a single index for the state, 2. One of the actions (action up), and, 3. Reward and termination. Given below is an annotated diagram of the environment with more details that may help in completing the tasks of this part of the assignment. Note that we will be creating a more general environment where the height and width positions can be variable but the start, goal and cliff grid cells have the same relative positions (bottom left, bottom right and the cells between the start and goal grid cells respectively).Once you have gone through the code and begun implementing solutions, it may be a good idea to come back here and see if you can convince yourself that the diagram above is an accurate representation of the code given and the code you have written.
###Code
# Do not modify this cell!
# Create empty CliffWalkEnvironment class.
# These methods will be filled in later cells.
class CliffWalkEnvironment(BaseEnvironment):
def env_init(self, agent_info={}):
raise NotImplementedError
def env_start(self, state):
raise NotImplementedError
def env_step(self, reward, state):
raise NotImplementedError
def env_end(self, reward):
raise NotImplementedError
def env_cleanup(self, reward):
raise NotImplementedError
# helper method
def state(self, loc):
raise NotImplementedError
###Output
_____no_output_____
###Markdown
env_init()The first function we add to the environment is the initialization function which is called once when an environment object is created. In this function, the grid dimensions and special locations (start and goal locations and the cliff locations) are stored for easy use later.
###Code
%%add_to CliffWalkEnvironment
# Do not modify this cell!
# Work Required: No.
def env_init(self, env_info={}):
"""Setup for the environment called when the experiment first starts.
Note:
Initialize a tuple with the reward, first state, boolean
indicating if it's terminal.
"""
# Note, we can setup the following variables later, in env_start() as it is equivalent.
# Code is left here to adhere to the note above, but these variables are initialized once more
# in env_start() [See the env_start() function below.]
reward = None
state = None # See Aside
termination = None
self.reward_state_term = (reward, state, termination)
    # AN ASIDE: Observation is a general term used in the RL-Glue files that can be interchangeably
# used with the term "state" for our purposes and for this assignment in particular.
# A difference arises in the use of the terms when we have what is called Partial Observability where
# the environment may return states that may not fully represent all the information needed to
# predict values or make decisions (i.e., the environment is non-Markovian.)
# Set the default height to 4 and width to 12 (as in the diagram given above)
self.grid_h = env_info.get("grid_height", 4)
self.grid_w = env_info.get("grid_width", 12)
# Now, we can define a frame of reference. Let positive x be towards the direction down and
# positive y be towards the direction right (following the row-major NumPy convention.)
# Then, keeping with the usual convention that arrays are 0-indexed, max x is then grid_h - 1
# and max y is then grid_w - 1. So, we have:
# Starting location of agent is the bottom-left corner, (max x, min y).
self.start_loc = (self.grid_h - 1, 0)
# Goal location is the bottom-right corner. (max x, max y).
self.goal_loc = (self.grid_h - 1, self.grid_w - 1)
# The cliff will contain all the cells between the start_loc and goal_loc.
self.cliff = [(self.grid_h - 1, i) for i in range(1, (self.grid_w - 1))]
# Take a look at the annotated environment diagram given in the above Jupyter Notebook cell to
# verify that your understanding of the above code is correct for the default case, i.e., where
# height = 4 and width = 12.
###Output
_____no_output_____
###Markdown
*Implement* state() The agent location can be described as a two-tuple or coordinate (x, y) describing the agent’s position. However, we can convert the (x, y) tuple into a single index and provide agents with just this integer.One reason for this choice is that the spatial aspect of the problem is secondary and there is no need for the agent to know about the exact dimensions of the environment. From the agent’s viewpoint, it is just perceiving some states, accessing their corresponding values in a table, and updating them. Both the coordinate (x, y) state representation and the converted coordinate representation are thus equivalent in this sense.Given a grid cell location, the state() function should return the state; a single index corresponding to the location in the grid.```Example: Suppose grid_h is 2 and grid_w is 2. Then, we can write the grid cell two-tuple or coordinatestates as follows (following the usual 0-index convention):|(0, 0) (0, 1)| |0 1||(1, 0) (1, 1)| |2 3|Assuming row-major order as NumPy does, we can flatten the latter to get a vector [0 1 2 3].So, if loc = (0, 0) we return 0. While, for loc = (1, 1) we return 3.```
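A quick numeric check of that row-major formula, using the default 4 x 12 grid (so the start cell (3, 0) should map to 36 and the goal cell (3, 11) to 47):
```python
grid_width = 12
for row, col in [(0, 0), (3, 0), (3, 11)]:
    print((row, col), '->', row * grid_width + col)   # 0, 36, 47
```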
###Code
%%add_to CliffWalkEnvironment
#GRADED FUNCTION: [state]
# Work Required: Yes. Modify the return statement of this function to return a correct single index as
# the state (see the logic for this in the previous cell.)
# Lines: 1
def state(self, loc):
### START CODE HERE ####
return loc[0]*self.grid_w+loc[1]
### END CODE HERE ###
### AUTOGRADER TESTS FOR STATE (5 POINTS)
# NOTE: The test below corresponds to the annotated diagram for the environment
# given previously and is limited in scope. Hidden tests are used in the autograder.
# You may wish to run other tests to check your implementation.
def test_state():
env = CliffWalkEnvironment()
env.env_init({"grid_height": 4, "grid_width": 12})
coords_to_test = [(0, 0), (0, 11), (1, 5), (3, 0), (3, 9), (3, 11)]
true_states = [0, 11, 17, 36, 45, 47]
output_states = [env.state(coords) for coords in coords_to_test]
assert(output_states == true_states)
test_state()
###Output
_____no_output_____
###Markdown
env_start()In env_start(), we initialize the agent location to be the start location and return the state corresponding to it as the first state for the agent to act upon. Additionally, we also set the reward and termination terms to be 0 and False respectively as they are consistent with the notion that there is no reward nor termination before the first action is even taken.
###Code
%%add_to CliffWalkEnvironment
# Do not modify this cell!
# Work Required: No.
def env_start(self):
"""The first method called when the episode starts, called before the
agent starts.
Returns:
The first state from the environment.
"""
reward = 0
# agent_loc will hold the current location of the agent
self.agent_loc = self.start_loc
# state is the one dimensional state representation of the agent location.
state = self.state(self.agent_loc)
termination = False
self.reward_state_term = (reward, state, termination)
return self.reward_state_term[1]
###Output
_____no_output_____
###Markdown
*Implement* env_step()Once an action is taken by the agent, the environment must provide a new state, reward and termination signal. In the Cliff Walking environment, agents move around using a 4-cell neighborhood called the Von Neumann neighborhood (https://en.wikipedia.org/wiki/Von_Neumann_neighborhood). Thus, the agent has 4 available actions at each state. Three of the actions have been implemented for you and your first task is to implement the logic for the fourth action (Action UP).Your second task for this function is to implement the reward logic. Look over the environment description given earlier in this notebook if you need a refresher for how the reward signal is defined.
###Code
%%add_to CliffWalkEnvironment
#GRADED FUNCTION: [env_step]
# Work Required: Yes. Fill in the code for action UP and implement the logic for reward and termination.
# Lines: ~7.
def env_step(self, action):
"""A step taken by the environment.
Args:
action: The action taken by the agent
Returns:
(float, state, Boolean): a tuple of the reward, state,
and boolean indicating if it's terminal.
"""
if action == 0: # UP (Task 1)
### START CODE HERE ###
# Hint: Look at the code given for the other actions and think about the logic in them.
possible_next_loc=(self.agent_loc[0]-1, self.agent_loc[1])
if possible_next_loc[0]>=0:
self.agent_loc=possible_next_loc
else:
pass
### END CODE HERE ###
elif action == 1: # LEFT
possible_next_loc = (self.agent_loc[0], self.agent_loc[1] - 1)
if possible_next_loc[1] >= 0: # Within Bounds?
self.agent_loc = possible_next_loc
else:
pass # Stay.
elif action == 2: # DOWN
possible_next_loc = (self.agent_loc[0] + 1, self.agent_loc[1])
if possible_next_loc[0] < self.grid_h: # Within Bounds?
self.agent_loc = possible_next_loc
else:
pass # Stay.
elif action == 3: # RIGHT
possible_next_loc = (self.agent_loc[0], self.agent_loc[1] + 1)
if possible_next_loc[1] < self.grid_w: # Within Bounds?
self.agent_loc = possible_next_loc
else:
pass # Stay.
else:
raise Exception(str(action) + " not in recognized actions [0: Up, 1: Left, 2: Down, 3: Right]!")
reward = -1
terminal = False
### START CODE HERE ###
# Hint: Consider the initialization of reward and terminal variables above. Then, note the
# conditional statements and comments given below and carefully ensure to set the variables reward
# and terminal correctly for each case.
if self.agent_loc == self.goal_loc: # Reached Goal!
terminal=True
elif self.agent_loc in self.cliff: # Fell into the cliff!
terminal=False
reward=-100
        self.agent_loc = self.start_loc  # respawn at the start cell (the episode does not end)
else:
pass
### END CODE HERE ###
self.reward_state_term = (reward, self.state(self.agent_loc), terminal)
return self.reward_state_term
### AUTOGRADER TESTS FOR ACTION UP (5 POINTS)
# NOTE: The test below is again limited in scope. Hidden tests are used in the autograder.
# You may wish to run other tests to check your implementation.
def test_action_up():
env = CliffWalkEnvironment()
env.env_init({"grid_height": 4, "grid_width": 12})
env.agent_loc = (0, 0)
env.env_step(0)
assert(env.agent_loc == (0, 0))
env.agent_loc = (1, 0)
env.env_step(0)
assert(env.agent_loc == (0, 0))
test_action_up()
### AUTOGRADER TESTS FOR REWARD & TERMINATION (10 POINTS)
# NOTE: The test below is limited in scope. Hidden tests are used in the autograder.
# You may wish to run other tests to check your implementation.
def test_reward():
env = CliffWalkEnvironment()
env.env_init({"grid_height": 4, "grid_width": 12})
env.agent_loc = (0, 0)
reward_state_term = env.env_step(0)
assert(reward_state_term[0] == -1 and reward_state_term[1] == env.state((0, 0)) and
reward_state_term[2] == False)
env.agent_loc = (3, 1)
reward_state_term = env.env_step(2)
assert(reward_state_term[0] == -100 and reward_state_term[1] == env.state((3, 0)) and
reward_state_term[2] == False)
env.agent_loc = (2, 11)
reward_state_term = env.env_step(2)
assert(reward_state_term[0] == -1 and reward_state_term[1] == env.state((3, 11)) and
reward_state_term[2] == True)
test_reward()
###Output
_____no_output_____
###Markdown
env_cleanup()There is not much cleanup to do for the Cliff Walking environment. Here, we simply reset the agent location to be the start location in this function.
###Code
%%add_to CliffWalkEnvironment
# Do not modify this cell!
# Work Required: No.
def env_cleanup(self):
"""Cleanup done after the environment ends"""
self.agent_loc = self.start_loc
###Output
_____no_output_____
###Markdown
Section 2. AgentIn this second part of the assignment, you will be implementing the key updates for Temporal Difference Learning. There are two cases to consider depending on whether an action leads to a terminal state or not.
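Concretely, the two update targets you will implement are the bootstrapped target R + γV(S') for non-terminal transitions and just R when the episode ends. A toy sketch (made-up values, assuming α = 0.1 and γ = 1.0):
```python
alpha, gamma = 0.1, 1.0
V = {"s": 0.0, "s_next": -2.0}        # made-up value estimates

# Non-terminal step: bootstrap from the next state's current estimate.
reward = -1
target = reward + gamma * V["s_next"]  # -3.0
V["s"] += alpha * (target - V["s"])    # V(s) -> -0.3

# Terminal step: there is no next state, so the target is the reward alone.
reward = -100
V["s"] += alpha * (reward - V["s"])    # V(s) -> -10.27
print(V["s"])
```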
###Code
# Do not modify this cell!
# Create empty TDAgent class.
# These methods will be filled in later cells.
class TDAgent(BaseAgent):
def agent_init(self, agent_info={}):
raise NotImplementedError
def agent_start(self, state):
raise NotImplementedError
def agent_step(self, reward, state):
raise NotImplementedError
def agent_end(self, reward):
raise NotImplementedError
def agent_cleanup(self):
raise NotImplementedError
def agent_message(self, message):
raise NotImplementedError
###Output
_____no_output_____
###Markdown
agent_init()As we did with the environment, we first initialize the agent once when a TDAgent object is created. In this function, we create a random number generator, seeded with the seed provided in the agent_info dictionary to get reproducible results. We also set the policy, discount and step size based on the agent_info dictionary. Finally, with a convention that the policy is always specified as a mapping from states to actions and so is an array of size (# States, # Actions), we initialize a values array of shape (# States,) to zeros.
###Code
%%add_to TDAgent
# Do not modify this cell!
# Work Required: No.
def agent_init(self, agent_info={}):
"""Setup for the agent called when the experiment first starts."""
# Create a random number generator with the provided seed to seed the agent for reproducibility.
self.rand_generator = np.random.RandomState(agent_info.get("seed"))
# Policy will be given, recall that the goal is to accurately estimate its corresponding value function.
self.policy = agent_info.get("policy")
# Discount factor (gamma) to use in the updates.
self.discount = agent_info.get("discount")
# The learning rate or step size parameter (alpha) to use in updates.
self.step_size = agent_info.get("step_size")
# Initialize an array of zeros that will hold the values.
# Recall that the policy can be represented as a (# States, # Actions) array. With the
# assumption that this is the case, we can use the first dimension of the policy to
# initialize the array for values.
self.values = np.zeros((self.policy.shape[0],))
###Output
_____no_output_____
###Markdown
agent_start()In agent_start(), we choose an action based on the initial state and policy we are evaluating. We also cache the state so that we can later update its value when we perform a Temporal Difference update. Finally, we return the action chosen so that the RL loop can continue and the environment can execute this action.
###Code
%%add_to TDAgent
# Do not modify this cell!
# Work Required: No.
def agent_start(self, state):
"""The first method called when the episode starts, called after
the environment starts.
Args:
state (Numpy array): the state from the environment's env_start function.
Returns:
The first action the agent takes.
"""
# The policy can be represented as a (# States, # Actions) array. So, we can use
# the second dimension here when choosing an action.
action = self.rand_generator.choice(range(self.policy.shape[1]), p=self.policy[state])
self.last_state = state
return action
###Output
_____no_output_____
###Markdown
*Implement* agent_step()In agent_step(), the agent must:- Perform an update to improve the value estimate of the previously visited state, and- Act based on the state provided by the environment.The latter of the two steps above has been implemented for you. Implement the former. Note that, unlike later in agent_end(), the episode has not yet ended in agent_step(). in other words, the previously observed state was not a terminal state.
###Code
%%add_to TDAgent
#[GRADED] FUNCTION: [agent_step]
# Work Required: Yes. Fill in the TD-target and update.
# Lines: ~2.
def agent_step(self, reward, state):
"""A step taken by the agent.
Args:
reward (float): the reward received for taking the last action taken
state (Numpy array): the state from the
environment's step after the last action, i.e., where the agent ended up after the
last action
Returns:
The action the agent is taking.
"""
### START CODE HERE ###
# Hint: We should perform an update with the last state given that we now have the reward and
# next state. We break this into two steps. Recall for example that the Monte-Carlo update
# had the form: V[S_t] = V[S_t] + alpha * (target - V[S_t]), where the target was the return, G_t.
target = reward+self.discount*self.values[state]
self.values[self.last_state] = self.values[self.last_state]+self.step_size*(target-self.values[self.last_state])
### END CODE HERE ###
# Having updated the value for the last state, we now act based on the current
# state, and set the last state to be current one as we will next be making an
# update with it when agent_step is called next once the action we return from this function
# is executed in the environment.
action = self.rand_generator.choice(range(self.policy.shape[1]), p=self.policy[state])
self.last_state = state
return action
###Output
_____no_output_____
###Markdown
*Implement* agent_end() Implement the TD update for the case where an action leads to a terminal state.
###Code
%%add_to TDAgent
#[GRADED] FUNCTION: [agent_end]
# Work Required: Yes. Fill in the TD-target and update.
# Lines: ~2.
def agent_end(self, reward):
"""Run when the agent terminates.
Args:
reward (float): the reward the agent received for entering the terminal state.
"""
### START CODE HERE ###
# Hint: Here too, we should perform an update with the last state given that we now have the
# reward. Note that in this case, the action led to termination. Once more, we break this into
# two steps, computing the target and the update itself that uses the target and the
# current value estimate for the state whose value we are updating.
target = reward
self.values[self.last_state] = self.values[self.last_state]+self.step_size*(target-self.values[self.last_state])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
agent_cleanup()In cleanup, we simply reset the last state to be None to ensure that we are not storing any states past an episode.
###Code
%%add_to TDAgent
# Do not modify this cell!
# Work Required: No.
def agent_cleanup(self):
"""Cleanup done after the agent ends."""
self.last_state = None
###Output
_____no_output_____
###Markdown
agent_message()agent_message() can generally be used to get different kinds of information about an RLGlue agent in the interaction loop of RLGlue. Here, we conditionally check for a message matching "get_values" and use it to retrieve the values table the agent has been updating over time.
###Code
%%add_to TDAgent
# Do not modify this cell!
# Work Required: No.
def agent_message(self, message):
"""A function used to pass information from the agent to the experiment.
Args:
message: The message passed to the agent.
Returns:
The response (or answer) to the message.
"""
if message == "get_values":
return self.values
else:
raise Exception("TDAgent.agent_message(): Message not understood!")
### AUTOGRADER TESTS FOR TD-UPDATES (20 POINTS)
# NOTE: The test belows serve as a good check in debugging your code for the TD updates. However,
# as with the other tests, it is limited in scope. Hidden tests are used in the autograder.
# You may wish to run other tests to check your implementation.
def test_td_updates():
# The following test checks that the TD check works for a case where the transition
# garners reward -1 and does not lead to a terminal state. This is in a simple two state setting
# where there is only one action. The first state's current value estimate is 0 while the second is 1.
# Note the discount and step size if you are debugging this test.
agent = TDAgent()
policy_list = np.array([[1.], [1.]])
agent.agent_init({"policy": np.array(policy_list), "discount": 0.99, "step_size": 0.1})
agent.values = np.array([0., 1.])
agent.agent_start(0)
reward = -1
next_state = 1
agent.agent_step(reward, next_state)
assert(np.isclose(agent.values[0], -0.001) and np.isclose(agent.values[1], 1.))
# The following test checks that the TD check works for a case where the transition
# garners reward -100 and lead to a terminal state. This is in a simple one state setting
# where there is only one action. The state's current value estimate is 0.
# Note the discount and step size if you are debugging this test.
agent = TDAgent()
policy_list = np.array([[1.]])
agent.agent_init({"policy": np.array(policy_list), "discount": 0.99, "step_size": 0.1})
agent.values = np.array([0.])
agent.agent_start(0)
reward = -100
next_state = 0
agent.agent_end(reward)
assert(np.isclose(agent.values[0], -10))
test_td_updates()
###Output
_____no_output_____
###Markdown
Section 3. Policy Evaluation ExperimentsFinally, in this last part of the assignment, you will get to see the TD policy evaluation algorithm in action by looking at the estimated values, the per state value error and after the experiment is complete, the Mean Squared Value Error curve vs. episode number, summarizing how the value error changed over time.The code below runs one run of an experiment given env_info and agent_info dictionaries. A "manager" object is created for visualizations and is used in part for the autograder. By default, the run will be for 5000 episodes. The true_values_file is specified to compare the learned value function with the values stored in the true_values_file. Plotting of the learned value function occurs by default after every 100 episodes. In addition, when true_values_file is specified, the value error per state and the root mean square value error will also be plotted.
###Code
%matplotlib notebook
# Work Required: No.
def run_experiment(env_info, agent_info,
num_episodes=5000,
experiment_name=None,
plot_freq=100,
true_values_file=None,
value_error_threshold=1e-8):
env = CliffWalkEnvironment
agent = TDAgent
rl_glue = RLGlue(env, agent)
rl_glue.rl_init(agent_info, env_info)
manager = Manager(env_info, agent_info, true_values_file=true_values_file, experiment_name=experiment_name)
for episode in range(1, num_episodes + 1):
rl_glue.rl_episode(0) # no step limit
if episode % plot_freq == 0:
values = rl_glue.agent.agent_message("get_values")
manager.visualize(values, episode)
values = rl_glue.agent.agent_message("get_values")
if true_values_file is not None:
# Grading: The Manager will check that the values computed using your TD agent match
# the true values (within some small allowance) across the states. In addition, it also
# checks whether the root mean squared value error is close to 0.
manager.run_tests(values, value_error_threshold)
return values
###Output
_____no_output_____
###Markdown
The cell below just runs a policy evaluation experiment with the deterministic optimal policy that strides just above the cliff. You should observe that the per state value error and RMSVE curve asymptotically go towards 0. The arrows in the four directions denote the probabilities of taking each action. This experiment is ungraded but should serve as a good test for the later experiments. The true values file provided for this experiment may help with debugging as well.
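The action order used by the environment is [0: Up, 1: Left, 2: Down, 3: Right], so each policy row can be read off directly; for instance:
```python
import numpy as np
actions = ['Up', 'Left', 'Down', 'Right']
row = np.array([1, 0, 0, 0])           # policy[36] below: always go Up from the start state
print(actions[int(np.argmax(row))])    # Up
```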
###Code
# Do not modify this cell!
env_info = {"grid_height": 4, "grid_width": 12, "seed": 0}
agent_info = {"discount": 1, "step_size": 0.01, "seed": 0}
# The Optimal Policy that strides just along the cliff
policy = np.ones(shape=(env_info['grid_width'] * env_info['grid_height'], 4)) * 0.25
policy[36] = [1, 0, 0, 0]
for i in range(24, 35):
policy[i] = [0, 0, 0, 1]
policy[35] = [0, 0, 1, 0]
agent_info.update({"policy": policy})
true_values_file = "optimal_policy_value_fn.npy"
_ = run_experiment(env_info, agent_info, num_episodes=5000, experiment_name="Policy Evaluation on Optimal Policy",
plot_freq=500, true_values_file=true_values_file)
# The Safe Policy
# Hint: Fill in the array below (as done in the previous cell) based on the safe policy illustration
# in the environment diagram. This is the policy that strides as far as possible away from the cliff.
# We call it a "safe" policy because if the environment has any stochasticity, this policy would do a good job in
# keeping the agent from falling into the cliff (in contrast to the optimal policy shown before).
# BOILERPLATE:
policy = np.ones(shape=(env_info['grid_width'] * env_info['grid_height'], 4)) * 0.25
### START CODE HERE ###
policy[36]=[1, 0, 0, 0]
policy[24]=[1, 0, 0, 0]
policy[12]=[1, 0, 0, 0]
for i in range(0, 11):
policy[i] = [0, 0, 0, 1]
policy[11]=[0, 0, 1, 0]
policy[23]=[0, 0, 1, 0]
policy[35]=[0, 0, 1, 0]
### END CODE HERE ###
### AUTO-GRADER TESTS FOR POLICY EVALUATION WITH SAFE POLICY
agent_info.update({"policy": policy})
v = run_experiment(env_info, agent_info,
experiment_name="Policy Evaluation On Safe Policy",
num_episodes=5000, plot_freq=500)
# Do not modify this cell!
# A Near Optimal Stochastic Policy
# Now, we try a stochastic policy that deviates a little from the optimal policy seen above.
# This means we can get different results due to randomness.
# We will thus average the value function estimates we get over multiple runs.
# This can take some time, up to about 5 minutes from previous testing.
# NOTE: The autograder will compare the averaged value estimates computed below. Re-run this cell upon making any changes.
env_info = {"grid_height": 4, "grid_width": 12}
agent_info = {"discount": 1, "step_size": 0.01}
policy = np.ones(shape=(env_info['grid_width'] * env_info['grid_height'], 4)) * 0.25
policy[36] = [0.9, 0.1/3., 0.1/3., 0.1/3.]
for i in range(24, 35):
policy[i] = [0.1/3., 0.1/3., 0.1/3., 0.9]
policy[35] = [0.1/3., 0.1/3., 0.9, 0.1/3.]
agent_info.update({"policy": policy})
agent_info.update({"step_size": 0.01})
### AUTO-GRADER TESTS FOR POLICY EVALUATION WITH NEAR OPTIMAL STOCHASTIC POLICY (40 POINTS)
arr = []
from tqdm import tqdm
for i in tqdm(range(30)):
env_info['seed'] = i
agent_info['seed'] = i
v = run_experiment(env_info, agent_info,
experiment_name="Policy Evaluation On Optimal Policy",
num_episodes=5000, plot_freq=10000)
arr.append(v)
average_v = np.array(arr).mean(axis=0)
###Output
100%|██████████| 30/30 [02:12<00:00, 4.42s/it]
|
Day_5_Assignment_3.ipynb | ###Markdown
###Code
list=['hey this is sai','I am in mumbai','i am learning python from letsupgrade']
capitalize = lambda list : list.title()
caps = map(capitalize,list)
caps_list = []
for i in caps:
caps_list.append(i)
print(caps_list)
###Output
['Hey This Is Sai', 'I Am In Mumbai', 'I Am Learning Python From Letsupgrade']
|
dev_sprint_2019/user_testing.ipynb | ###Markdown
sktime user testingFor help, see tutorial notebooks in our [example folder](https://github.com/alan-turing-institute/sktime/tree/dev/examples). 1. Convert data from long format to required nested pandas DataFrame
###Code
import numpy as np
import pandas as pd
def generate_example_long_table(num_cases=50, series_len=20, num_dims=2):
rows_per_case = series_len*num_dims
total_rows = num_cases*series_len*num_dims
    case_ids = np.empty(total_rows, dtype=int)
    idxs = np.empty(total_rows, dtype=int)
    dims = np.empty(total_rows, dtype=int)
vals = np.random.rand(total_rows)
for i in range(total_rows):
case_ids[i] = int(i/rows_per_case)
rem = i%rows_per_case
dims[i] = int(rem/series_len)
idxs[i] = rem%series_len
df = pd.DataFrame()
df['case_id'] = pd.Series(case_ids)
df['dim_id'] = pd.Series(dims)
df['reading_id'] = pd.Series(idxs)
df['value'] = pd.Series(vals)
return df
long = generate_example_long_table()
long.head()
# now convert the long dataframe to the nested wide format
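# A hedged sketch of the conversion using plain pandas (sktime also ships
# conversion helpers, whose name and location depend on the version): the
# nested "wide" format has one row per case, one object column per dimension,
# and a pd.Series of readings inside each cell.
cases = sorted(long['case_id'].unique())
dims = sorted(long['dim_id'].unique())
nested = pd.DataFrame(index=cases)
for d in dims:
    column = []
    for c in cases:
        mask = (long['case_id'] == c) & (long['dim_id'] == d)
        vals = long.loc[mask].sort_values('reading_id')['value'].to_numpy()
        column.append(pd.Series(vals))
    nested[f'dim_{d}'] = column
nested.head()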
###Output
_____no_output_____
###Markdown
2. Run classifiers on the GunPoint dataset
###Code
from sktime.datasets import load_gunpoint
X_train, y_train = load_gunpoint(split='TRAIN', return_X_y=True)
X_test, y_test = load_gunpoint(split='TEST', return_X_y=True)
X_train.head()
# now pick one of the classifiers available in sktime and run it on the dataset
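# A hedged example: the import path for TimeSeriesForestClassifier differs
# between sktime versions (shown here as in recent releases). The estimator
# follows the familiar scikit-learn fit/score interface.
from sktime.classification.interval_based import TimeSeriesForestClassifier

clf = TimeSeriesForestClassifier(n_estimators=50, random_state=1)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))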
###Output
_____no_output_____
###Markdown
3. Compare multiple classifiers using the sktime orchestration functionality
###Code
from sktime.experiments import orchestrator
###Output
_____no_output_____ |
_doc/notebooks/2016/pydata/im_networkx.ipynb | ###Markdown
networkx *networkx* draws networks. It does not work too well on big graphs (more than about 1000 vertices).[documentation](https://networkx.github.io/) [source](https://github.com/networkx/networkx) [installation](https://networkx.readthedocs.io/en/stable/install.html) [tutorial](https://networkx.readthedocs.io/en/stable/tutorial/index.html) [gallery](https://networkx.github.io/documentation/networkx-1.9.1/gallery.html)
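Before the gallery example below, a minimal sketch of the typical workflow: build a graph, then draw it.
```python
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 1), (3, 4)])
nx.draw(G, with_labels=True)   # layout is computed automatically
plt.show()
```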
###Code
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
###Output
_____no_output_____
###Markdown
example
###Code
import networkx as nx
import matplotlib.pyplot as plt
G=nx.random_geometric_graph(200,0.125)
# position is stored as node attribute data for random_geometric_graph
pos=nx.get_node_attributes(G,'pos')
# find node near center (0.5,0.5)
dmin=1
ncenter=0
for n in pos:
x,y=pos[n]
d=(x-0.5)**2+(y-0.5)**2
if d<dmin:
ncenter=n
dmin=d
# color by path length from node near center
p=nx.single_source_shortest_path_length(G,ncenter)
plt.figure(figsize=(8,8))
nx.draw_networkx_edges(G,pos,nodelist=[ncenter],alpha=0.4)
nx.draw_networkx_nodes(G,pos,nodelist=p.keys(),
node_size=80)
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
plt.axis('off')
###Output
_____no_output_____ |
tema_7.ipynb | ###Markdown
Python Programming Course. Internal training course, CIEMAT. Madrid, October 2021. Antonio Delgado Peris. https://github.com/andelpe/curso-intro-python/ Topic 7 - Classes and objects. Objectives: - Understand how object-oriented programming (_OOP_) works in Python - Define classes as new data types - Instantiate and use class objects - Apply classic O.O.P. concepts to Python - Privacy, Inheritance, Polymorphism - Handle classes themselves as objects. Object-oriented programming: object-oriented programming (O.O.P.) is a programming model (paradigm) based on the following principles: - Group data and functions into objects that belong to classes (types defined by the programmer) - Objects model real-world entities - Implementation details are hidden behind an interface (_Encapsulation_) - Code reuse and class hierarchies are favoured, using _Inheritance_ (some classes extend others) - The class hierarchy and inheritance make _Polymorphism_ possible (in Python it is achieved differently). O.O.P. is a way of approaching a problem and can be used with almost any language, but some languages are designed specifically for it: - With C it is hard, with C++ possible, with Java mandatory - In Python, O.O.P. is optional, which some consider a less elegant model, but in practice it is versatile. It is impossible to teach O.O.P. in an introductory course, but we can show how it is done technically in Python for those who already know it from other languages (or may need it in the future). Classes and instances: classes define a _type_ of objects - They create their own namespace - They define attributes (members): - Data - Functions (methods). The _instances_ of a class are the _objects_ whose type is that class - Multiple instances of the same class can be defined. We have already seen many objects: - Class: `int`. Objects: `3`, `int('4')`. Attribute: `int('4').real` - Class: `str`. Objects: `'abc'`, `'xy'`. Method: `'xy'.split`
###Code
d = dict(a=3, b=5)
print(type(d))
###Output
_____no_output_____
###Markdown
New classes: we can create new data types by defining classes with: ```python class <ClassName>: <statement> <statement> ...``` The most essential part of a class are its member functions. - The `__init__` function is special; it is the _constructor_, called when a new instance of the class is created. - The first argument of every method (`self`) is a reference to the calling instance (passed automatically). - If we define attributes on `self` we are creating an instance attribute (different for each instance).
###Code
class Sumador:
"""
Class keeping track of a value, which can only increase.
"""
def __init__(self, start):
"""
Constructor, accepting the initial value to track.
"""
self.val = start
def __str__(self):
return f'Sumador, val: {self.val}'
def add(self, amount):
"""
Adds the specified 'amount' from tracked value
"""
self.val += amount
# Calls 'Sumador.__init__'; 'self' points to the new object 's1', 'val' to 3
s1 = Sumador(3)
print(s1)
# Calls 'Sumador.add', with 'amount' = 5
s1.add(5)
# Access the 'val' attribute of instance 's1'
print('s1.val:', s1.val)
print('\ntype(s1):', type(s1))
print('type(Sumador):', type(Sumador))
###Output
_____no_output_____
###Markdown
Class attributes can also be defined; they are shared by all instances and by the class itself.
###Code
class Pcg:
const = 100
def __init__(self, val):
self.val = val
def pcg(self, num):
return Pcg.const * num/self.val
p = Pcg(1000)
print(p.pcg(300))
p2 = Pcg(500)
print(p2.pcg(300))
print()
print(Pcg.const)
print(p.const, p.val)
print(p2.const, p2.val)
###Output
_____no_output_____
###Markdown
We can even define or modify class or instance variables dynamically (without them existing in the class definition), which lets us use a class or instance as a kind of dictionary.
###Code
Pcg.const2 = 500
print(Pcg.const2)
print(p.const2)
p.new = 20
print(p.new)
###Output
_____no_output_____
###Markdown
One example use of class variables is a class that is simply a _container_ of values.
###Code
class color:
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
print(color.BOLD + color.BLUE + 'Hello World !' + color.END)
###Output
_____no_output_____
###Markdown
TODO: it would probably be worth adding a simple first exercise to practice creating a class from scratch, before moving on to graphs. **EXERCISE e7_1:** Create a `Persona` class with attributes `nombre` and `edad`, and a `saludo` function that prints a sentence introducing the person (including the name and age information). Instantiate an object of type `Persona`, access its attributes and call the `saludo` function. **NOTE** (_advanced_): We have seen instance and class attributes. There are also instance and class _methods_, and even a third kind: _static_ methods. By default, the methods we define are _instance_ methods, i.e. they receive a reference to the instance as their first argument (`self`, by convention). _Class methods_ receive a reference to the class (instead of the instance) as their first argument, and _static methods_ receive no special first argument at all. To define class or static methods we must use the appropriate _decorator_:```python class X: @classmethod def f1(cls, arg1): ... @staticmethod def f2(arg1, arg2): ...``` We will talk about decorators in the next topic (a runnable sketch of class and static methods is included at the end of this notebook). Privacy and conventions: in many languages member privacy is enforced: - Some methods are not accessible - Direct access to data attributes is discouraged (_setters_ and _getters_ are used). In Python, everything is left to the caller's discretion (no restrictions are imposed) - Convention: private attributes start with '_' (do not rely on them in code you have to maintain) - Note: do not use names of the form `__x__`, which have special uses, such as `__init__`. In Python it is considered fine to access attributes directly (`instance.attribute`) - But there is a way to interpose a controlled interface if needed (`Properties`). **EXERCISE e7_2:** Create a `Grafo` class that represents a graph. - It will hold a (private) dictionary attribute with information about nodes and connections (as in previous topics). - Its constructor will be: `__init__(self, dicc)` - It will offer the following methods (use the functions from `modulos.graph_plot`): - `path(start, end)`: returns the path between two nodes, `start` and `end`, as a list of nodes - `draw(path=None)`: shows a plot of the graph, optionally highlighting a path between two nodes. Instantiate an object of type `Grafo` and try out the methods above. Inheritance: a class that extends another inherits its attributes (without rewriting them) - It can use them, redefine them, or add new ones - Python supports multiple inheritance (we will not cover it)
###Code
class Medidor(Sumador):
"""
Class keeping track of a value, which can increase or decrease, but
not below the specified minimum.
"""
def __init__(self, start, minimum):
"""
Constructor, accepting the initial value to track, and the minimum.
"""
        super().__init__(start) # Invoke Sumador's constructor
self.minimum = minimum
    def __str__(self): # Overridden method
return f'Medidor, min: {self.minimum}, val: {self.val}'
    def sub(self, amount): # New method
"""
Substracts the specified 'amount' from tracked value
"""
self.val = max(self.minimum, self.val - amount)
m1 = Medidor(10, 5)
print(m1)
m1.add(2)
print(m1)
m1.sub(5)
print(m1)
m1.sub(5)
print(m1)
print(f"Tipo de s1: {type(s1)}; tipo de m1: {type(m1)}")
print(isinstance(m1, Medidor), isinstance(m1, Sumador))
print(isinstance(s1, Medidor), isinstance(s1, Sumador))
print(hasattr(s1, 'sub'), hasattr(m1, 'sub'))
###Output
_____no_output_____
###Markdown
Polymorphism: an object can play different roles, and an operation can accept different objects. - In some languages, polymorphism in O.O.P. is tied to inheritance:```java funcion(Figura fig) { // Accepts Figura, Cuadro and Circulo fig.draw() }```- In Python it is implicit in dynamic typing```python def funcion(fig): # accepts anything that implements draw() fig.draw()``` **EXERCISE e7_3:** Extend the Grafo class with a class that inherits from it: `GrafoDict`. Allow direct access to the nodes with the following notation:```python g = GrafoDict(…) g['C'] = ['B', 'E'] print(g['C'])``` To do so, add the special methods `__getitem__(self, node)` and `__setitem__(self, node, val)` to the implementation from exercise e7_2. **EXERCISE e7_4:** Rewrite the previous class as a new `DictGrafo` that inherits from `dict`, extending it with its own `path` and `draw` methods. Also check whether we can use `len` or `keys` with `GrafoDict` and `DictGrafo` objects. Note: the `dict` class offers a constructor that accepts another dictionary as argument, and we can use it directly (i.e. we do not need to write `__init__`). For example:```python d = {'a': 1, 'b': 2} d2 = dict(d)``` Here `d2` is a _new_ dictionary with a copy of the contents of `d`. Classes are objects too: just as with functions, we can use classes themselves as objects: assign them to a variable, pass them as an argument to a function, etc.
###Code
def objectFactory(clase, args):
return clase(*args)
var = Medidor
m2 = objectFactory(Medidor, (10, 0))
print(m2)
help(var)
###Output
_____no_output_____ |
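###Markdown
A minimal runnable sketch of the class/static method NOTE from earlier in this notebook (the `Thermometer` class and all of its names are invented purely for illustration):
###Code
class Thermometer:
    scale = 'Celsius'                 # class attribute, shared by all instances

    def __init__(self, value):
        self.value = value            # instance attribute

    @classmethod
    def change_scale(cls, new_scale):
        # Receives the class (cls), not an instance: it modifies the class attribute
        cls.scale = new_scale

    @staticmethod
    def to_fahrenheit(celsius):
        # Receives neither the class nor an instance: a plain function grouped in the class
        return celsius * 9 / 5 + 32

t = Thermometer(25)
print(Thermometer.scale, t.value)
Thermometer.change_scale('Kelvin')
print(Thermometer.scale, t.scale)
print(Thermometer.to_fahrenheit(25))
###Output
_____no_output_____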